Dataset schema (column name, type, and observed range or class count):

| Column | Type | Range / values |
|---|---|---|
| id | string | length 6 to 113 |
| author | string | length 2 to 36 |
| task_category | string | 42 classes |
| tags | list | length 1 to 4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 to 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| downloads | int64 | 0 to 118M |
| likes | int64 | 0 to 4.86k |
| README | string | length 30 to 1.01M |
| matched_bigbio_names | list | length 1 to 8 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0 to 1M |
| metadata | string | length 2 to 698k |
| source | string | 2 classes |
| matched_task | list | length 1 to 10 |
| `__index_level_0__` | int64 | 0 to 46.9k |
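The rows below are individual records from this table, one field per line, in the column order given above. As a sketch of how such a table can be inspected with the Hugging Face `datasets` library (the dataset's repository name is not stated here, so `your-org/model-cards-dataset` is a placeholder):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the real dataset name.
ds = load_dataset("your-org/model-cards-dataset", split="train")

# The declared columns: id, author, task_category, tags, created_time, ...
print(ds.column_names)

# One record: a model id plus its card text and matched task labels.
row = ds[0]
print(row["id"], row["task_category"], row["matched_task"])
print(row["README"][:300])  # README strings run up to ~1M characters
```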
jolenechong/lora-bart-samsum-tib-1024
jolenechong
summarization
[ "peft", "bart", "summarization", "dataset:gigant/tib", "base_model:philschmid/bart-large-cnn-samsum", "base_model:adapter:philschmid/bart-large-cnn-samsum", "license:mit", "region:us" ]
2023-12-13T15:43:36Z
2023-12-13T15:49:35+00:00
6
0
--- base_model: philschmid/bart-large-cnn-samsum datasets: - gigant/tib library_name: peft license: mit pipeline_tag: summarization model-index: - name: lora-bart-samsum-tib-1024 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora-bart-samsum-tib-1024 This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the TIB dataset. ## Model description Fine-tuned with LoRA on the TIB dataset. A quick demo of its capabilities: ``` Moderator: Good afternoon, everyone, and welcome to today's webinar on the fascinating and rapidly evolving topic of Artificial Intelligence. We have a distinguished panel of experts with us today who will shed light on the latest developments in AI and its impact on various aspects of our lives. I'll start by introducing our first speaker, Dr. Emily Rodriguez, a renowned AI researcher and professor. Dr. Rodriguez: Thank you, it's a pleasure to be here. Artificial Intelligence has witnessed remarkable growth over the past few decades, and it's now ingrained in our daily lives, from voice assistants in our smartphones to self-driving cars and even in healthcare diagnostics. AI technologies are advancing at an unprecedented rate, driven by deep learning and neural networks. These innovations have allowed machines to perform tasks that were once thought to be exclusive to humans, such as natural language understanding, image recognition, and decision-making. The future of AI holds immense promise, but it also presents important ethical and societal challenges that we need to address. Moderator: Indeed, the ethical aspect of AI is a crucial issue. Let's hear from our next speaker, Dr. James Chen, a pioneer in AI ethics. Dr. Chen: Thank you for having me. As AI technologies continue to advance, it's essential that we consider the ethical implications. AI can perpetuate biases, invade privacy, and disrupt the job market. We must work collectively to ensure that AI is developed and deployed in a way that respects human rights, diversity, and transparency. Regulatory frameworks and ethical guidelines are crucial to navigate this evolving landscape and strike a balance between innovation and safeguarding societal values. Moderator: Excellent points, Dr. Chen. Now, I'd like to turn to Dr. Sarah Patel, who has expertise in AI and its applications in healthcare. Dr. Patel: Thank you. AI in healthcare is revolutionizing how we diagnose, treat, and manage diseases. Machine learning models can analyze vast datasets to predict disease outcomes and personalize treatment plans. It can improve the accuracy of medical imaging and reduce diagnostic errors. However, we must be cautious about data privacy and the need for responsible AI implementation in the healthcare sector. Ensuring data security and patient trust is essential for the successful integration of AI into healthcare systems. Moderator: Thank you, Dr. Patel. Lastly, we have Dr. Michael Johnson, an expert in AI and its economic implications. Dr. Johnson: AI is reshaping industries and economies worldwide. While it has the potential to boost productivity and drive economic growth, it also poses challenges in terms of job displacement and workforce adaptation. The role of governments, businesses, and educational institutions in upskilling and retraining the workforce is paramount.
Additionally, fostering innovation and entrepreneurship in AI-related fields can create new opportunities and ensure a balanced and prosperous AI-driven economy. Moderator: Thank you to all our speakers for their valuable insights on the multifaceted world of AI. It's clear that AI's impact on our society is immense, with profound implications across ethics, healthcare, and the economy. As we continue to advance, it is crucial that we remain vigilant and considerate of the ethical and societal dimensions, ensuring that AI remains a force for good. Thank you all for participating in this enlightening webinar ``` Is summarized as ``` Artificial Intelligence (AI) is a rapidly evolving technology that has profound implications for society, industry, and the economy. It has the potential to revolutionize many aspects of our lives, but it also presents important ethical and societal challenges that we need to address. In this webinar, we will hear from Dr. Emily Rodriguez, a renowned AI researcher and professor, Dr. James Chen, a pioneer in AI ethics, and Dr. Sarah Patel, an expert in AI and its applications in healthcare, who will discuss the ethical, societal, and economic implications of AI. Dr. Michael Johnson, a leading expert in the field of AI-related industries, will also discuss the economic implications. ``` ## Intended uses & limitations Intended for summarizing video conferences/webinars. Try out the model with the code below :D ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer config = PeftConfig.from_pretrained("jolenechong/lora-bart-samsum-tib-1024") model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-large-cnn-samsum") model = PeftModel.from_pretrained(model, "jolenechong/lora-bart-samsum-tib-1024") tokenizer = AutoTokenizer.from_pretrained("jolenechong/lora-bart-samsum-tib-1024", from_pt=True) text = """[add transcript you want to summarize here]""" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): outputs = model.generate(input_ids=inputs["input_ids"]) print(tokenizer.batch_decode(outputs.detach().cpu().numpy())[0]) ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.5.0 - Transformers 4.34.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora-bart-samsum-tib-1024 This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the TIB dataset. ## Model description Fine-tuned with LoRA on the TIB dataset. A quick demo of its capabilities: ``` Moderator: Good afternoon, everyone, and welcome to today's webinar on the fascinating and rapidly evolving topic of Artificial Intelligence. We have a distinguished panel of experts with us today who will shed light on the latest developments in AI and its impact on various aspects of our lives. I'll start by introducing our first speaker, Dr. Emily Rodriguez, a renowned AI researcher and professor. Dr. Rodriguez: Thank you, it's a pleasure to be here. Artificial Intelligence has witnessed remarkable growth over the past few decades, and it's now ingrained in our daily lives, from voice assistants in our smartphones to self-driving cars and even in healthcare diagnostics. AI technologies are advancing at an unprecedented rate, driven by deep learning and neural networks. These innovations have allowed machines to perform tasks that were once thought to be exclusive to humans, such as natural language understanding, image recognition, and decision-making. The future of AI holds immense promise, but it also presents important ethical and societal challenges that we need to address. Moderator: Indeed, the ethical aspect of AI is a crucial issue. Let's hear from our next speaker, Dr. James Chen, a pioneer in AI ethics. Dr. Chen: Thank you for having me. As AI technologies continue to advance, it's essential that we consider the ethical implications. AI can perpetuate biases, invade privacy, and disrupt the job market. We must work collectively to ensure that AI is developed and deployed in a way that respects human rights, diversity, and transparency. Regulatory frameworks and ethical guidelines are crucial to navigate this evolving landscape and strike a balance between innovation and safeguarding societal values. Moderator: Excellent points, Dr. Chen. Now, I'd like to turn to Dr. Sarah Patel, who has expertise in AI and its applications in healthcare. Dr. Patel: Thank you. AI in healthcare is revolutionizing how we diagnose, treat, and manage diseases. Machine learning models can analyze vast datasets to predict disease outcomes and personalize treatment plans. It can improve the accuracy of medical imaging and reduce diagnostic errors. However, we must be cautious about data privacy and the need for responsible AI implementation in the healthcare sector. Ensuring data security and patient trust is essential for the successful integration of AI into healthcare systems. Moderator: Thank you, Dr. Patel. Lastly, we have Dr. Michael Johnson, an expert in AI and its economic implications. Dr. Johnson: AI is reshaping industries and economies worldwide. While it has the potential to boost productivity and drive economic growth, it also poses challenges in terms of job displacement and workforce adaptation. The role of governments, businesses, and educational institutions in upskilling and retraining the workforce is paramount. Additionally, fostering innovation and entrepreneurship in AI-related fields can create new opportunities and ensure a balanced and prosperous AI-driven economy.
Moderator: Thank you to all our speakers for their valuable insights on the multifaceted world of AI. It's clear that AI's impact on our society is immense, with profound implications across ethics, healthcare, and the economy. As we continue to advance, it is crucial that we remain vigilant and considerate of the ethical and societal dimensions, ensuring that AI remains a force for good. Thank you all for participating in this enlightening webinar ``` Is summarized as ``` Artificial Intelligence (AI) is a rapidly evolving technology that has profound implications for society, industry, and the economy. It has the potential to revolutionize many aspects of our lives, but it also presents important ethical and societal challenges that we need to address. In this webinar, we will hear from Dr. Emily Rodriguez, a renowned AI researcher and professor, Dr. James Chen, a pioneer in AI ethics, and Dr. Sarah Patel, an expert in AI and its applications in healthcare, who will discuss the ethical, societal, and economic implications of AI. Dr. Michael Johnson, a leading expert in the field of AI-related industries, will also discuss the economic implications. ``` ## Intended uses & limitations Intended for summarizing video conferences/webinars. Try out the model with the code below :D ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer config = PeftConfig.from_pretrained("jolenechong/lora-bart-samsum-tib-1024") model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-large-cnn-samsum") model = PeftModel.from_pretrained(model, "jolenechong/lora-bart-samsum-tib-1024") tokenizer = AutoTokenizer.from_pretrained("jolenechong/lora-bart-samsum-tib-1024", from_pt=True) text = """[add transcript you want to summarize here]""" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): outputs = model.generate(input_ids=inputs["input_ids"]) print(tokenizer.batch_decode(outputs.detach().cpu().numpy())[0]) ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.5.0 - Transformers 4.34.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"base_model": "philschmid/bart-large-cnn-samsum", "datasets": ["gigant/tib"], "library_name": "peft", "license": "mit", "pipeline_tag": "summarization", "model-index": [{"name": "lora-bart-samsum-tib-1024", "results": []}]}
task
[ "SUMMARIZATION" ]
43,582
Omar-Nasr/setfitmodel
Omar-Nasr
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:Omar-Nasr/setfitmodel", "base_model:finetune:Omar-Nasr/setfitmodel", "model-index", "region:us" ]
2024-04-20T15:53:27Z
2024-04-21T23:08:49+00:00
5
0
--- base_model: Omar-Nasr/setfitmodel library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: ' I like art and nature but you can’t really talk about those for more than a few seconds' - text: ' That''s kind of the nature of my volunteer work, but you could volunteer with a food bank or boys and girls club, which would involve more social interaction Just breaking that cycle by going for a short walk around the neighbourhood is a good idea' - text: ' That being heat, sweat, more people outside and you wear less clothes on you (not so comfortable being a fat guy and sweat can be seen on your shirt)' - text: ' Want to join soccer club but so scared' - text: ' I literally do not leave the house, I will sometimes go in the garden but no further So I just ran outside and back to the car' inference: true model-index: - name: SetFit with Omar-Nasr/setfitmodel results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.5866666666666667 name: Accuracy --- # SetFit with Omar-Nasr/setfitmodel This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Omar-Nasr/setfitmodel](https://huggingface.co/Omar-Nasr/setfitmodel) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [Omar-Nasr/setfitmodel](https://huggingface.co/Omar-Nasr/setfitmodel) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>' Go out for a walk once a day additionally and slowly start increasing the time you spend outside Go out for a walk once a day additionally and slowly start increasing the time you spend outside Start doing sport, either outdoors or at a gym If you can, try to take your dog to a dog park or something like that'</li><li>' Try challenging yourself more, take a walk in the park, small things like that make you better Try challenging yourself more, take a walk in the park, small things like that make you better '</li><li>" Now I'm not saying to go to a party on the spot, just go out, shop, take a walk in the park, that kind of thing Now I'm not saying to go to a party on the spot, just go out, shop, take a walk in the park, that kind of thing"</li></ul> | | 2.0 | <ul><li>' I’m an equestrian, so I ride horses and manage for a pretty famous trainer I can hold a non work related conversation with a stranger while I’m working but if I met that same person outside of the work day I’d have a panic attack and not be able to say a word'</li><li>' On long walks to errands, and whilst power walking for exercise'</li><li>' She said no, but we have a tasty forest fruit mix cake I felt high as a kite walking home'</li></ul> | | 0.0 | <ul><li>' Good to know that some people are in the same camp'</li><li>" I'm sure if the worlds ever did clash that your friends would understand (few people actually enjoy being at work) and, worst case scenario, your coworkers would be surprised at your outgoing nature while around friends"</li><li>' If anything you should be thinking about wearing sun screen so you retain your good skin as it becomes your ally as you age outside'</li></ul> | | 3.0 | <ul><li>" While I ended up making progress, it wasn't as fast as I had hoped and I still had a lot of trouble doing some things 
(such as jogging in public)"</li><li>' One, frack you guys who say “just get over it”, you’ve probably never dealt with anxiety, it’s like you are carrying the weight of everyone’s judgements and eyes on you with every possibility of any and every event running through your head all the time I am trying, I force myself outside and to interact but it’s terrifying and people just don’t seem to get that'</li><li>" I want to go swimming, anxiety and low self esteem make it really hard I want to go swimming, anxiety and low self esteem make it really hard I'm at least planning to go for a swim at a nearby lake but there is one problem I have: I'm not really confident with my body"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5867 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Omar-Nasr/setfitmodel") # Run inference preds = model(" Want to join soccer club but so scared") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:-----| | Word count | 4 | 51.2656 | 1083 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 16 | | 1.0 | 16 | | 2.0 | 16 | | 3.0 | 16 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0026 | 1 | 0.0 | - | | 0.1302 | 50 | 0.0001 | - | | 0.2604 | 100 | 0.0 | - | | 0.3906 | 150 | 0.0 | - | | 0.5208 | 200 | 0.0 | - | | 0.6510 | 250 | 0.0 | - | | 0.7812 | 300 | 0.0 | - | | 0.9115 | 350 | 0.0 | - | ### Framework Versions - Python: 3.10.13 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.39.3 - PyTorch: 2.1.2 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model 
Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with Omar-Nasr/setfitmodel This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Omar-Nasr/setfitmodel](https://huggingface.co/Omar-Nasr/setfitmodel) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [Omar-Nasr/setfitmodel](https://huggingface.co/Omar-Nasr/setfitmodel) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1.0 | <ul><li>' Go out for a walk once a day additionally and slowly start increasing the time you spend outside Go out for a walk once a day additionally and slowly start increasing the time you spend outside Start doing sport, either outdoors or at a gym If you can, try to take your dog to a dog park or something like that'</li><li>' Try challenging yourself more, take a walk in the park, small things like that make you better Try challenging yourself more, take a walk in the park, small things like that make you better '</li><li>" Now I'm not saying to go to a party on the spot, just go out, shop, take a walk in the park, that kind of thing Now I'm not saying to go to a party on the spot, just go out, shop, take a walk in the park, that kind of thing"</li></ul> | | 2.0 | <ul><li>' I’m an equestrian, so I ride horses and manage for a pretty famous trainer I can hold a non work related conversation with a stranger while I’m working but if I met that same person outside of the work day I’d have a panic attack and not be able to say a word'</li><li>' On long walks to errands, and whilst power walking for 
exercise'</li><li>' She said no, but we have a tasty forest fruit mix cake I felt high as a kite walking home'</li></ul> | | 0.0 | <ul><li>' Good to know that some people are in the same camp'</li><li>" I'm sure if the worlds ever did clash that your friends would understand (few people actually enjoy being at work) and, worst case scenario, your coworkers would be surprised at your outgoing nature while around friends"</li><li>' If anything you should be thinking about wearing sun screen so you retain your good skin as it becomes your ally as you age outside'</li></ul> | | 3.0 | <ul><li>" While I ended up making progress, it wasn't as fast as I had hoped and I still had a lot of trouble doing some things (such as jogging in public)"</li><li>' One, frack you guys who say “just get over it”, you’ve probably never dealt with anxiety, it’s like you are carrying the weight of everyone’s judgements and eyes on you with every possibility of any and every event running through your head all the time I am trying, I force myself outside and to interact but it’s terrifying and people just don’t seem to get that'</li><li>" I want to go swimming, anxiety and low self esteem make it really hard I want to go swimming, anxiety and low self esteem make it really hard I'm at least planning to go for a swim at a nearby lake but there is one problem I have: I'm not really confident with my body"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5867 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Omar-Nasr/setfitmodel") # Run inference preds = model(" Want to join soccer club but so scared") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:-----| | Word count | 4 | 51.2656 | 1083 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 16 | | 1.0 | 16 | | 2.0 | 16 | | 3.0 | 16 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0026 | 1 | 0.0 | - | | 0.1302 | 50 | 0.0001 | - | | 0.2604 | 100 | 0.0 | - | | 0.3906 | 150 | 0.0 | - | | 0.5208 | 200 | 0.0 | - | | 0.6510 | 250 | 0.0 | - | | 0.7812 | 300 | 0.0 | - | | 0.9115 | 350 | 0.0 | - | ### Framework Versions - Python: 3.10.13 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.39.3 - PyTorch: 2.1.2 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "Omar-Nasr/setfitmodel", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": " I like art and nature but you can’t really talk about those for more than a few seconds"}, {"text": " That's kind of the nature of my volunteer work, but you could volunteer with a food bank or boys and girls club, which would involve more social interaction Just breaking that cycle by going for a short walk around the neighbourhood is a good idea"}, {"text": " That being heat, sweat, more people outside and you wear less clothes on you (not so comfortable being a fat guy and sweat can be seen on your shirt)"}, {"text": " Want to join soccer club but so scared"}, {"text": " I literally do not leave the house, I will sometimes go in the garden but no further So I just ran outside and back to the car"}], "inference": true, "model-index": [{"name": "SetFit with Omar-Nasr/setfitmodel", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.5866666666666667, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,583
harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis
harshhmaniya
null
[ "tf", "bert", "en", "dataset:stanfordnlp/imdb", "arxiv:1810.04805", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:mit", "region:us" ]
2025-02-17T07:07:03Z
2025-02-17T07:28:52+00:00
15
1
--- base_model: - google-bert/bert-base-uncased datasets: - stanfordnlp/imdb language: - en license: mit metrics: - accuracy --- # Fine-Tuned BERT for IMDB Sentiment Analysis > **Author:** [Harsh Maniya](https://huggingface.co/harshhmaniya) > **Model Type:** Text Classification (Sentiment Analysis) > **Language:** English --- ## Overview This repository hosts a **BERT-based model** fine-tuned on the [IMDB movie reviews dataset](https://www.imdb.com/). The goal is to classify movie reviews as either **positive** or **negative** with high accuracy. - **Base Model:** `bert-base-uncased` - **Dataset:** IMDB (25,000 training samples, 25,000 testing samples) - **Task:** Binary Sentiment Classification If you want to quickly gauge whether a movie review is glowing or scathing, this model is for you! --- ## Model Architecture - **Backbone:** [BERT](https://arxiv.org/abs/1810.04805) (Bidirectional Encoder Representations from Transformers) - **Classification Head:** A single linear layer on top of the pooled `[CLS]` token output for binary classification. > **Why BERT?** BERT’s bidirectional training helps it capture context from both directions in a sentence, making it especially powerful for understanding nuances in text like movie reviews. --- ## Training Procedure 1. **Data Loading:** - The IMDB dataset was loaded (from Hugging Face Datasets or another source) with an even split of positive and negative reviews. 2. **Preprocessing:** - Tokenization using the BERT tokenizer (`bert-base-uncased`), truncating/padding to a fixed length (e.g., 128 tokens). 3. **Hyperparameters (Example):** - **Learning Rate:** 5e-5 - **Batch Size:** 8 - **Epochs:** 3 - **Optimizer:** Adam - **Loss Function:** Sparse Categorical Cross-entropy 4. **Hardware:** - Fine-tuned on a GPU (e.g., Google Colab or local machine with CUDA). 5. **Validation:** - Periodic evaluation on the validation set to monitor accuracy and loss. > **Notebook:** The entire fine-tuning process is documented in the [notebook](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/blob/main/imdb_reviews_bert.ipynb) included in this repository, so you can see exactly how training was performed. --- ## Evaluation and Performance - **Accuracy:** ~**93%** on the IMDB test set. This performance indicates that the model handles most typical movie reviews well. However, it might still struggle with highly sarcastic or context-dependent reviews. --- ## How to Use **In Python:** ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification # Replace with your repository model_name = "harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis" # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForSequenceClassification.from_pretrained(model_name) # Example text review_text = "I absolutely loved this movie! The plot was gripping and the acting was top-notch." # Prepare input inputs = tokenizer(review_text, return_tensors="tf", truncation=True, padding=True) # Perform inference outputs = model(inputs) logits = outputs.logits # Convert logits to probabilities (softmax) import tensorflow as tf probs = tf.nn.softmax(logits, axis=-1) pred_class = tf.argmax(probs, axis=-1).numpy()[0] # Interpret results label_map = {0: "Negative", 1: "Positive"} print(f"Review Sentiment: {label_map[pred_class]}") ``` - **Positive:** Model predicts a favorable sentiment. - **Negative:** Model predicts an unfavorable sentiment. 
--- ## Intended Use - **Primary Use Case:** Classifying sentiment of English-language movie reviews. - **Extended Use Cases:** General sentiment analysis tasks for product reviews, social media comments, or other short English texts (though performance may vary). --- ## Limitations and Biases 1. **Domain Specificity:** Trained primarily on movie reviews. May not generalize to other domains (e.g., financial or medical text) without further fine-tuning. 2. **Language Support:** English only. Non-English text or text containing heavy slang/emojis may reduce performance. 3. **Bias in Data:** IMDB reviews often contain colloquial language and potential biases from user-generated content. The model might inadvertently learn these biases. 4. **Sarcasm and Nuance:** Subtle sarcasm or culturally specific references may be misclassified. --- ## Ethical Considerations - **User-Generated Content:** The IMDB dataset contains user-submitted reviews. Some reviews may contain explicit or biased language. - **Misuse:** The model is intended for sentiment classification. Using it to make decisions about individuals or high-stakes scenarios without additional checks is **not recommended**. --- ## Model Card Author - **Name:** Harsh Maniya - **Contact:** For questions or feedback, please open an [issue](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/discussions) or reach out directly via your preferred channel. - **GitHub:** [Github](https://github.com/harshhmaniya) - **LinkedIn:** [Linkedin](https://www.linkedin.com/in/harsh-maniya/) --- ## Citation If you use this model or reference the code in your research or project, please cite it as follows (adjust for your specific citation style): ``` @misc{Maniya2025IMDBBERT, title = {Fine-Tuned BERT for IMDB Sentiment Analysis}, author = {Harsh Maniya}, year = {2025}, url = {https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis} } ``` --- ### Thank You for Visiting! We hope this model helps you classify movie reviews quickly and accurately. For more details, check out the [training notebook](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/blob/main/imdb_reviews_bert.ipynb), experiment with the model, and share your feedback!
null
Non_BioNLP
# Fine-Tuned BERT for IMDB Sentiment Analysis > **Author:** [Harsh Maniya](https://huggingface.co/harshhmaniya) > **Model Type:** Text Classification (Sentiment Analysis) > **Language:** English --- ## Overview This repository hosts a **BERT-based model** fine-tuned on the [IMDB movie reviews dataset](https://www.imdb.com/). The goal is to classify movie reviews as either **positive** or **negative** with high accuracy. - **Base Model:** `bert-base-uncased` - **Dataset:** IMDB (25,000 training samples, 25,000 testing samples) - **Task:** Binary Sentiment Classification If you want to quickly gauge whether a movie review is glowing or scathing, this model is for you! --- ## Model Architecture - **Backbone:** [BERT](https://arxiv.org/abs/1810.04805) (Bidirectional Encoder Representations from Transformers) - **Classification Head:** A single linear layer on top of the pooled `[CLS]` token output for binary classification. > **Why BERT?** BERT’s bidirectional training helps it capture context from both directions in a sentence, making it especially powerful for understanding nuances in text like movie reviews. --- ## Training Procedure 1. **Data Loading:** - The IMDB dataset was loaded (from Hugging Face Datasets or another source) with an even split of positive and negative reviews. 2. **Preprocessing:** - Tokenization using the BERT tokenizer (`bert-base-uncased`), truncating/padding to a fixed length (e.g., 128 tokens). 3. **Hyperparameters (Example):** - **Learning Rate:** 5e-5 - **Batch Size:** 8 - **Epochs:** 3 - **Optimizer:** Adam - **Loss Function:** Sparse Categorical Cross-entropy 4. **Hardware:** - Fine-tuned on a GPU (e.g., Google Colab or local machine with CUDA). 5. **Validation:** - Periodic evaluation on the validation set to monitor accuracy and loss. > **Notebook:** The entire fine-tuning process is documented in the [notebook](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/blob/main/imdb_reviews_bert.ipynb) included in this repository, so you can see exactly how training was performed. --- ## Evaluation and Performance - **Accuracy:** ~**93%** on the IMDB test set. This performance indicates that the model handles most typical movie reviews well. However, it might still struggle with highly sarcastic or context-dependent reviews. --- ## How to Use **In Python:** ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification # Replace with your repository model_name = "harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis" # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_name) model = TFAutoModelForSequenceClassification.from_pretrained(model_name) # Example text review_text = "I absolutely loved this movie! The plot was gripping and the acting was top-notch." # Prepare input inputs = tokenizer(review_text, return_tensors="tf", truncation=True, padding=True) # Perform inference outputs = model(inputs) logits = outputs.logits # Convert logits to probabilities (softmax) import tensorflow as tf probs = tf.nn.softmax(logits, axis=-1) pred_class = tf.argmax(probs, axis=-1).numpy()[0] # Interpret results label_map = {0: "Negative", 1: "Positive"} print(f"Review Sentiment: {label_map[pred_class]}") ``` - **Positive:** Model predicts a favorable sentiment. - **Negative:** Model predicts an unfavorable sentiment. --- ## Intended Use - **Primary Use Case:** Classifying sentiment of English-language movie reviews. 
- **Extended Use Cases:** General sentiment analysis tasks for product reviews, social media comments, or other short English texts (though performance may vary). --- ## Limitations and Biases 1. **Domain Specificity:** Trained primarily on movie reviews. May not generalize to other domains (e.g., financial or medical text) without further fine-tuning. 2. **Language Support:** English only. Non-English text or text containing heavy slang/emojis may reduce performance. 3. **Bias in Data:** IMDB reviews often contain colloquial language and potential biases from user-generated content. The model might inadvertently learn these biases. 4. **Sarcasm and Nuance:** Subtle sarcasm or culturally specific references may be misclassified. --- ## Ethical Considerations - **User-Generated Content:** The IMDB dataset contains user-submitted reviews. Some reviews may contain explicit or biased language. - **Misuse:** The model is intended for sentiment classification. Using it to make decisions about individuals or high-stakes scenarios without additional checks is **not recommended**. --- ## Model Card Author - **Name:** Harsh Maniya - **Contact:** For questions or feedback, please open an [issue](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/discussions) or reach out directly via your preferred channel. - **GitHub:** [Github](https://github.com/harshhmaniya) - **LinkedIn:** [Linkedin](https://www.linkedin.com/in/harsh-maniya/) --- ## Citation If you use this model or reference the code in your research or project, please cite it as follows (adjust for your specific citation style): ``` @misc{Maniya2025IMDBBERT, title = {Fine-Tuned BERT for IMDB Sentiment Analysis}, author = {Harsh Maniya}, year = {2025}, url = {https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis} } ``` --- ### Thank You for Visiting! We hope this model helps you classify movie reviews quickly and accurately. For more details, check out the [training notebook](https://huggingface.co/harshhmaniya/fine-tuned-bert-imdb-sentiment-analysis/blob/main/imdb_reviews_bert.ipynb), experiment with the model, and share your feedback!
{"base_model": ["google-bert/bert-base-uncased"], "datasets": ["stanfordnlp/imdb"], "language": ["en"], "license": "mit", "metrics": ["accuracy"]}
task
[ "TEXT_CLASSIFICATION" ]
43,584
gokuls/HBERTv1_48_L12_H64_A2_massive
gokuls
text-classification
[ "transformers", "pytorch", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/HBERTv1_48_L12_H64_A2", "base_model:finetune:gokuls/HBERTv1_48_L12_H64_A2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-04T21:18:26Z
2023-10-04T21:38:22+00:00
5
0
--- base_model: gokuls/HBERTv1_48_L12_H64_A2 datasets: - massive metrics: - accuracy tags: - generated_from_trainer model-index: - name: HBERTv1_48_L12_H64_A2_massive results: - task: type: text-classification name: Text Classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - type: accuracy value: 0.5553369404820462 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L12_H64_A2_massive This model is a fine-tuned version of [gokuls/HBERTv1_48_L12_H64_A2](https://huggingface.co/gokuls/HBERTv1_48_L12_H64_A2) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 1.8009 - Accuracy: 0.5553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9467 | 1.0 | 180 | 3.7242 | 0.1121 | | 3.5338 | 2.0 | 360 | 3.3402 | 0.1121 | | 3.2555 | 3.0 | 540 | 3.1034 | 0.1820 | | 3.0004 | 4.0 | 720 | 2.8411 | 0.3074 | | 2.7522 | 5.0 | 900 | 2.6134 | 0.3384 | | 2.5415 | 6.0 | 1080 | 2.4233 | 0.3856 | | 2.366 | 7.0 | 1260 | 2.2615 | 0.4112 | | 2.2167 | 8.0 | 1440 | 2.1359 | 0.4609 | | 2.1008 | 9.0 | 1620 | 2.0361 | 0.4835 | | 2.0016 | 10.0 | 1800 | 1.9606 | 0.5061 | | 1.9204 | 11.0 | 1980 | 1.8984 | 0.5298 | | 1.8517 | 12.0 | 2160 | 1.8549 | 0.5352 | | 1.8078 | 13.0 | 2340 | 1.8158 | 0.5499 | | 1.78 | 14.0 | 2520 | 1.8009 | 0.5553 | | 1.7531 | 15.0 | 2700 | 1.7919 | 0.5548 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.0
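A minimal inference sketch with the `transformers` text-classification pipeline, assuming the checkpoint loads through the standard Auto classes (the custom hybridbert architecture may additionally require `trust_remote_code=True`); the example utterance is illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned intent classifier; MASSIVE (en-US) contains short
# voice-assistant style utterances mapped to intent classes.
classifier = pipeline(
    "text-classification",
    model="gokuls/HBERTv1_48_L12_H64_A2_massive",
)

print(classifier("wake me up at seven in the morning"))
```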
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L12_H64_A2_massive This model is a fine-tuned version of [gokuls/HBERTv1_48_L12_H64_A2](https://huggingface.co/gokuls/HBERTv1_48_L12_H64_A2) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 1.8009 - Accuracy: 0.5553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9467 | 1.0 | 180 | 3.7242 | 0.1121 | | 3.5338 | 2.0 | 360 | 3.3402 | 0.1121 | | 3.2555 | 3.0 | 540 | 3.1034 | 0.1820 | | 3.0004 | 4.0 | 720 | 2.8411 | 0.3074 | | 2.7522 | 5.0 | 900 | 2.6134 | 0.3384 | | 2.5415 | 6.0 | 1080 | 2.4233 | 0.3856 | | 2.366 | 7.0 | 1260 | 2.2615 | 0.4112 | | 2.2167 | 8.0 | 1440 | 2.1359 | 0.4609 | | 2.1008 | 9.0 | 1620 | 2.0361 | 0.4835 | | 2.0016 | 10.0 | 1800 | 1.9606 | 0.5061 | | 1.9204 | 11.0 | 1980 | 1.8984 | 0.5298 | | 1.8517 | 12.0 | 2160 | 1.8549 | 0.5352 | | 1.8078 | 13.0 | 2340 | 1.8158 | 0.5499 | | 1.78 | 14.0 | 2520 | 1.8009 | 0.5553 | | 1.7531 | 15.0 | 2700 | 1.7919 | 0.5548 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.0
{"base_model": "gokuls/HBERTv1_48_L12_H64_A2", "datasets": ["massive"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "HBERTv1_48_L12_H64_A2_massive", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "split": "validation", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.5553369404820462, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,585
Helsinki-NLP/opus-mt-xh-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "xh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:08:52+00:00
2,436
1
--- license: apache-2.0 tags: - translation --- ### opus-mt-xh-en * source languages: xh * target languages: en * OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.xh.en | 45.8 | 0.610 |
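A minimal usage sketch with the `transformers` translation pipeline, the usual pattern for Helsinki-NLP Marian checkpoints (the example sentence is illustrative only):

```python
from transformers import pipeline

# Xhosa -> English translation with the Marian checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-xh-en")

result = translator("Molo, unjani namhlanje?")
print(result[0]["translation_text"])
```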
null
Non_BioNLP
### opus-mt-xh-en * source languages: xh * target languages: en * OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.xh.en | 45.8 | 0.610 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
43,586
momtaro/distilbert-base-uncased-finetuned-emotion
momtaro
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-03T18:54:03Z
2023-06-03T23:25:58+00:00
10
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: split metrics: - type: accuracy value: 0.925 name: Accuracy - type: f1 value: 0.9249839189677801 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2096 - Accuracy: 0.925 - F1: 0.9250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8134 | 1.0 | 250 | 0.2938 | 0.9115 | 0.9088 | | 0.2412 | 2.0 | 500 | 0.2096 | 0.925 | 0.9250 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu118 - Datasets 2.9.0 - Tokenizers 0.13.3
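A minimal inference sketch with the `transformers` pipeline, assuming the fine-tuned checkpoint is published under the repository name above; the emotion dataset's six labels are sadness, joy, love, anger, fear, and surprise:

```python
from transformers import pipeline

# Emotion classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline(
    "text-classification",
    model="momtaro/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe how happy this makes me!"))
```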
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2096 - Accuracy: 0.925 - F1: 0.9250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8134 | 1.0 | 250 | 0.2938 | 0.9115 | 0.9088 | | 0.2412 | 2.0 | 500 | 0.2096 | 0.925 | 0.9250 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu118 - Datasets 2.9.0 - Tokenizers 0.13.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.9249839189677801, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,587
germla/satoken-en-fr-ru-ja-ko-pt
germla
text-classification
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-15T13:19:59Z
2023-08-15T13:20:40+00:00
10
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # germla/satoken-en-fr-ru-ja-ko-pt This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("germla/satoken-en-fr-ru-ja-ko-pt") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# germla/satoken-en-fr-ru-ja-ko-pt This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("germla/satoken-en-fr-ru-ja-ko-pt") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
43,588
kaixkhazaki/multilingual-e5-doclaynet
kaixkhazaki
text-classification
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "document-classification", "multilingual", "doclaynet", "e5", "dataset:pierreguillou/DocLayNet-base", "base_model:intfloat/multilingual-e5-large", "base_model:finetune:intfloat/multilingual-e5-large", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-06T06:58:28Z
2025-01-14T12:43:23+00:00
11
0
--- base_model: intfloat/multilingual-e5-large datasets: - pierreguillou/DocLayNet-base language: multilingual library_name: transformers metrics: - accuracy pipeline_tag: text-classification tags: - document-classification - text-classification - multilingual - doclaynet - e5 model-index: - name: multilingual-e5-doclaynet results: - task: type: text-classification name: Document Classification dataset: name: DocLayNet type: pierreguillou/DocLayNet-base metrics: - type: accuracy value: 0.9719 name: Test Accuracy - type: loss value: 0.5192 name: Test Loss --- # Multilingual E5 for Document Classification (DocLayNet) This model is a fine-tuned version of intfloat/multilingual-e5-large for document text classification based on the DocLayNet dataset. ## Evaluation results - Test Loss: 0.5192, Test Accuracy: 0.9719 ## Usage ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-classification", model="kaixkhazaki/multilingual-e5-doclaynet") prediction = pipe("This is some text from a financial report") print(prediction) ``` ## Model description - Base model: intfloat/multilingual-e5-large - Task: Document text classification - Languages: Multilingual ## Training data - Dataset: DocLayNet-base - Source: https://huggingface.co/datasets/pierreguillou/DocLayNet-base - Categories: ```python { 'financial_reports': 0, 'government_tenders': 1, 'laws_and_regulations': 2, 'manuals': 3, 'patents': 4, 'scientific_articles': 5 } ``` ## Training procedure Trained on a single GPU for 2 epochs (approx. 20 minutes). Hyperparameters: ```python { 'batch_size': 8, 'num_epochs': 10, 'learning_rate': 2e-5, 'weight_decay': 0.01, 'warmup_ratio': 0.1, 'gradient_clip': 1.0, 'label_smoothing': 0.1, 'optimizer': 'AdamW', 'scheduler': 'cosine_with_warmup' } ```
null
Non_BioNLP
# Multilingual E5 for Document Classification (DocLayNet) This model is a fine-tuned version of intfloat/multilingual-e5-large for document text classification based on the DocLayNet dataset. ## Evaluation results - Test Loss: 0.5192, Test Accuracy: 0.9719 ## Usage ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-classification", model="kaixkhazaki/multilingual-e5-doclaynet") prediction = pipe("This is some text from a financial report") print(prediction) ``` ## Model description - Base model: intfloat/multilingual-e5-large - Task: Document text classification - Languages: Multilingual ## Training data - Dataset: DocLayNet-base - Source: https://huggingface.co/datasets/pierreguillou/DocLayNet-base - Categories: ```python { 'financial_reports': 0, 'government_tenders': 1, 'laws_and_regulations': 2, 'manuals': 3, 'patents': 4, 'scientific_articles': 5 } ``` ## Training procedure Trained on a single GPU for 2 epochs (approx. 20 minutes). Hyperparameters: ```python { 'batch_size': 8, 'num_epochs': 10, 'learning_rate': 2e-5, 'weight_decay': 0.01, 'warmup_ratio': 0.1, 'gradient_clip': 1.0, 'label_smoothing': 0.1, 'optimizer': 'AdamW', 'scheduler': 'cosine_with_warmup' } ```
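For readers who want to reproduce a comparable fine-tune, the setup below is a hedged sketch using the standard 🤗 Transformers `Trainer` with the hyperparameters listed above. The DocLayNet-base column names (`text`, `doc_category`) are assumptions about the dataset schema, not confirmed details of this model's training script.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

label2id = {
    "financial_reports": 0, "government_tenders": 1, "laws_and_regulations": 2,
    "manuals": 3, "patents": 4, "scientific_articles": 5,
}
id2label = {v: k for k, v in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "intfloat/multilingual-e5-large",
    num_labels=len(label2id), label2id=label2id, id2label=id2label,
)

dataset = load_dataset("pierreguillou/DocLayNet-base")

def preprocess(batch):
    # NOTE: "text" and "doc_category" are assumed column names; adapt to the real schema.
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = [label2id[c] for c in batch["doc_category"]]
    return enc

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="multilingual-e5-doclaynet",
    per_device_train_batch_size=8,
    num_train_epochs=2,            # the card reports ~2 epochs of actual training
    learning_rate=2e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    max_grad_norm=1.0,
    label_smoothing_factor=0.1,
    lr_scheduler_type="cosine",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```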
{"base_model": "intfloat/multilingual-e5-large", "datasets": ["pierreguillou/DocLayNet-base"], "language": "multilingual", "library_name": "transformers", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["document-classification", "text-classification", "multilingual", "doclaynet", "e5"], "model-index": [{"name": "multilingual-e5-doclaynet", "results": [{"task": {"type": "text-classification", "name": "Document Classification"}, "dataset": {"name": "DocLayNet", "type": "pierreguillou/DocLayNet-base"}, "metrics": [{"type": "accuracy", "value": 0.9719, "name": "Test Accuracy"}, {"type": "loss", "value": 0.5192, "name": "Test Loss"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,589
akot/jina-semantic-bmf-matryoshka-1024-10epochs
akot
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4957", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "custom_code", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:jinaai/jina-embeddings-v2-base-de", "base_model:finetune:jinaai/jina-embeddings-v2-base-de", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-08-14T08:34:08Z
2024-08-14T09:22:20+00:00
25
0
--- base_model: jinaai/jina-embeddings-v2-base-de datasets: [] language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4957 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: 312 Aus steuerlicher Sicht ist es möglich, mehrere Versorgungszusagen nebeneinander, also neben einer Altzusage auch eine Neuzusage zu erteilen (z. B. „alte“ Direktversicherung und „neuer“ Pensionsfonds). sentences: - Wann liegt bei der betrieblichen Altersversorgung eine schädliche Verwendung vor? - Welche steuerliche Behandlung erfahren Auszahlungen aus Altersvorsorgeverträgen nach § 22 Nr. 5 EStG? - Können verschiedene Versorgungszusagen wie Direktversicherung und Pensionsfonds gleichzeitig bestehen? - source_sentence: 5 Pflichtversicherte nach dem Gesetz über die Alterssicherung der Landwirte gehören, soweit sie nicht als Pflichtversicherte der gesetzlichen Rentenversicherung ohnehin bereits anspruchsberechtigt sind, in dieser Eigenschaft ebenfalls zum begünstigten Personenkreis. Darunter fallen insbesondere die in Anlage 1 Abschnitt B aufgeführten Personen. sentences: - Wann wird das Anrecht der ausgleichsberechtigten Person bei intern geteilter Altersvorsorge als abgeschlossen betrachtet? - Welche Personen sind in der Anlage 1 Abschnitt B bezüglich der Alterssicherung der Landwirte aufgeführt? - In welchen Fällen führt die Möglichkeit einer Beitragserstattung nicht zur Versagung der Anerkennung als betriebliche Altersversorgung? - source_sentence: 233 Voraussetzung für die Förderung durch Sonderausgabenabzug nach § 10a EStG und Zulage nach Abschnitt XI EStG ist in den Fällen der Rz. 231 f., dass der Steuerpflichtige zum begünstigten Personenkreis gehört. Die zeitliche Zuordnung dieser Altersvorsorgebeiträge richtet sich grundsätzlich nach § 11 Abs. 2 EStG. sentences: - Wer gehört zum begünstigten Personenkreis für die Altersvorsorgeförderung? - Wie werden erstattete Kosten eines Altersvorsorgevertrags besteuert, wenn sie dem Steuerpflichtigen ausgezahlt werden? - Ist der Übertragungswert einer betrieblichen Altersversorgung bei einem Arbeitgeberwechsel steuerfrei? - source_sentence: 127 Die Entnahme des Teilkapitalbetrags von bis zu 30 % des zur Verfügung stehenden Kapitals aus dem Vertrag hat zu Beginn der Auszahlungsphase zu erfolgen. Eine Verteilung über mehrere Auszahlungszeitpunkte ist nicht möglich. sentences: - Kann ich den Teilkapitalbetrag aus meiner Altersvorsorge zu verschiedenen Zeitpunkten entnehmen? - Welche Einkunftsarten können Leistungen aus einer Versorgungszusage des Arbeitgebers sein? - Was ist im Todesfall des Zulageberechtigten bezüglich der Förderbeiträge zu tun? - source_sentence: '67 Abwandlung des Beispiels 1 in Rn. 66: A erhält zudem zwei Kinderzulagen für seine in den Jahren 2004 und 2005 geborenen Kinder. Beitragspflichtige Einnahmen 53.000 € 4 % 2.120 € höchstens 2.100 € anzusetzen 2.100 € abzüglich Zulage 175 € Mindesteigenbeitrag (§ 86 Abs. 1 Satz 2 EStG) 1.925 € Sockelbetrag (§ 86 Abs. 1 Satz 4 EStG) 60 € maßgebend (§ 86 Abs. 
1 Satz 5 EStG) 1.925 € Die von A geleisteten Beiträge übersteigen den Mindesteigenbeitrag. Die Zulage wird nicht gekürzt.' sentences: - Wird die Zulage für A gekürzt, wenn die Beiträge den Mindesteigenbeitrag übersteigen? - Was versteht man unter Sonderzahlungen des Arbeitgebers? - Wie erfolgt die Besteuerung bei der ausgleichsberechtigten Person nach einer externen Teilung? model-index: - name: German Semantic V3 BMF results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0018148820326678765 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0018148820326678765 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.003629764065335753 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0006049606775559588 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.00036297640653357535 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.00036297640653357535 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0018148820326678765 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0018148820326678765 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.003629764065335753 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.0017915370974219231 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.0012099213551119176 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.001909500365693067 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0018148820326678765 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.003629764065335753 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.003629764065335753 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0006049606775559588 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0007259528130671507 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.00036297640653357535 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0018148820326678765 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.003629764065335753 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.003629764065335753 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.0019266902207710538 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.0013611615245009074 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.0022709772837192373 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0018148820326678765 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0018148820326678765 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.003629764065335753 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0006049606775559588 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.00036297640653357535 name: Cosine Precision@5 - type: cosine_precision@10 value: 
0.00036297640653357535 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0018148820326678765 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0018148820326678765 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.003629764065335753 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.001717594610448614 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.001134301270417423 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.0020295043929584406 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.0054446460980036296 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.000544464609800363 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.0054446460980036296 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.001765339491030367 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.0007309941520467837 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.0014606683373623462 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.0 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.0018148820326678765 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.0018148820326678765 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.0 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.00036297640653357535 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.00018148820326678767 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.0018148820326678765 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.0018148820326678765 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.0007816271471386444 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.00045372050816696913 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.0011358919077603595 name: Cosine Map@100 --- # German Semantic V3 BMF This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v2-base-de](https://huggingface.co/jinaai/jina-embeddings-v2-base-de). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [jinaai/jina-embeddings-v2-base-de](https://huggingface.co/jinaai/jina-embeddings-v2-base-de) <!-- at revision 13b8b30bd0bbee829ceffb82b282cc714cef836e --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("akot/jina-semantic-bmf-matryoshka-1024-10epochs") # Run inference sentences = [ '67 Abwandlung des Beispiels 1 in Rn. 66: A erhält zudem zwei Kinderzulagen für seine in den Jahren 2004 und 2005 geborenen Kinder. Beitragspflichtige Einnahmen 53.000 € 4 % 2.120 € höchstens 2.100 € anzusetzen 2.100 € abzüglich Zulage 175 € Mindesteigenbeitrag (§ 86 Abs. 1 Satz 2 EStG) 1.925 € Sockelbetrag (§ 86 Abs. 1 Satz 4 EStG) 60 € maßgebend (§ 86 Abs. 1 Satz 5 EStG) 1.925 € Die von A geleisteten Beiträge übersteigen den Mindesteigenbeitrag. Die Zulage wird nicht gekürzt.', 'Wird die Zulage für A gekürzt, wenn die Beiträge den Mindesteigenbeitrag übersteigen?', 'Wie erfolgt die Besteuerung bei der ausgleichsberechtigten Person nach einer externen Teilung?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0018 | | cosine_mrr@10 | 0.0012 | | **cosine_map@100** | **0.0019** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0036 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0007 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0036 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0019 | | cosine_mrr@10 | 0.0014 | | **cosine_map@100** | **0.0023** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0017 | | cosine_mrr@10 | 0.0011 | | **cosine_map@100** | **0.002** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.0054 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.0005 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.0054 | | cosine_ndcg@10 | 0.0018 | | cosine_mrr@10 | 0.0007 | | **cosine_map@100** | **0.0015** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | 
cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0018 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0002 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0018 | | cosine_ndcg@10 | 0.0008 | | cosine_mrr@10 | 0.0005 | | **cosine_map@100** | **0.0011** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 4,957 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 145.09 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.57 tokens</li><li>max: 41 tokens</li></ul> | * Samples: | positive | anchor | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>134 Eine Rückzahlungsverpflichtung besteht nicht für den Teil der Zulagen, der auf nach § 1 Abs. 1 Nr. 2 AltZertG angespartes gefördertes Altersvorsorgevermögen entfällt, wenn es in Form einer Hinterbliebenenrente an die dort genannten Hinterbliebenen ausgezahlt wird. Dies gilt auch für den entsprechenden Teil der Steuerermäßigung.</code> | <code>Muss man Zulagen zurückzahlen, wenn das Altersvorsorgevermögen als Hinterbliebenenrente ausgezahlt wird?</code> | | <code>140 Beendet der Zulageberechtigte vor der vollständigen Rückzahlung des AltersvorsorgeEigenheimbetrags die Nutzung zu eigenen Wohnzwecken, wird er so behandelt, als habe er den noch nicht zurückgezahlten Betrag schädlich verwendet. 
Die auf den noch ausstehenden Rückzahlungsbetrag entfallenden Zulagen sowie die nach § 10a Abs. 4 EStG gesondert festgestellten Steuerermäßigungen sind zurückzuzahlen (§ 92a Abs. 3 EStG). Die im noch ausstehenden Rückzahlungsbetrag enthaltenen Zuwächse (z.B. Zinserträge und Kursgewinne) Seite 41 sind als sonstige Einkünfte zu versteuern (§ 22 Nr. 5 Satz 5 Halbsatz 1 EStG). Außerdem hat der Zulageberechtigte den Vorteil zu versteuern, der sich aus der zinslosen Nutzung des noch nicht zurückgezahlten Betrags ergibt. Zugrunde gelegt wird hierbei eine Verzinsung von 5 % (Zins und Zinseszins) für jedes volle Kalenderjahr der Nutzung (§ 22 Nr. 5 Satz 5 Halbsatz 2 EStG). Diese Folgen treten nicht ein, wenn er den noch nicht zurückgezahlten Betrag in ein Folgeobjekt investiert (§ 92a Abs. 4 Satz 3 Nr. 1 EStG) oder zugunsten eines auf seinen Namen lautenden zertifizierten Altersvorsorgevertrags einzahlt (§ 92a Abs. 4 Satz 3 Nr. 2 EStG).</code> | <code>Was geschieht steuerlich, wenn der AltersvorsorgeEigenheimbetrag nicht vollständig zurückgezahlt wird und die Immobilie nicht mehr selbst genutzt wird?</code> | | <code>144 Die als Einkünfte nach § 22 Nr. 5 Satz 3 EStG i.V.m. § 22 Nr. 5 Satz 2 EStG zu besteuernden Beträge muss der Anbieter gem. § 94 Abs. 1 Satz 4 EStG dem Zulageberechtigten bescheinigen und im Wege des Rentenbezugsmitteilungsverfahrens (§ 22a EStG) mitteilen. Ergeben sich insoweit steuerpflichtige Einkünfte nach § 22 Nr. 5 Satz 3 EStG für einen anderen Leistungsempfänger (z. B. Erben), ist für diesen eine entsprechende Rentenbezugsmitteilung der ZfA zu übermitteln.</code> | <code>Was muss im Falle eines anderen Leistungsempfängers, wie Erben, hinsichtlich der Rentenbezugsmitteilung getan werden?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 10 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - 
`fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.5161 | 10 | 19.1189 | - | - | - | - | - | | 0.9806 | 19 | - | 0.0008 | 0.0009 | 0.0007 | 0.0004 | 0.0006 | | 1.0323 | 20 | 19.0037 | - | - | - | - | - | | 1.5484 | 30 | 18.7625 | - | - | - | - | - | | 1.9613 | 38 | - | 0.0007 | 0.0008 | 0.0007 | 0.0004 | 0.0008 | | 2.0645 | 40 | 18.3193 | - | - | - | - | - | | 2.5806 | 50 | 18.5378 | - | - | - | - | - | | **2.9935** | **58** | **-** | **0.0018** | **0.002** | **0.0017** | **0.0018** | **0.0016** | | 3.0968 | 60 | 18.103 | - | - | - | - | - | | 3.6129 | 70 | 18.0413 | - | - | - | - | - | | 3.9742 | 77 | - | 0.0011 | 0.0006 | 0.0004 | 0.0006 | 0.0004 | | 4.1290 | 80 | 17.7272 | - | - | - | - | - | | 4.6452 | 90 | 17.4282 | - | - | - | - | - | | 4.9548 | 96 | - | 0.0014 | 0.0011 | 0.0009 | 0.0005 | 0.0007 | | 5.1613 | 100 | 17.2805 | - | - | - | - | - | | 5.6774 | 110 | 17.3362 | - | - | - | - | - | | 5.9871 | 116 | - | 0.0005 | 0.0002 | 0.0002 | 0.0016 | 0.0003 | | 6.1935 | 120 | 17.1479 | - | - | - | - | - | | 
6.7097 | 130 | 17.1761 | - | - | - | - | - | | 6.9677 | 135 | - | 0.0020 | 0.0025 | 0.0025 | 0.0017 | 0.0020 | | 7.2258 | 140 | 16.8886 | - | - | - | - | - | | 7.7419 | 150 | 16.9517 | - | - | - | - | - | | 8.0 | 155 | - | 0.0008 | 0.0009 | 0.0010 | 0.0005 | 0.0010 | | 8.2581 | 160 | 17.0198 | - | - | - | - | - | | 8.7742 | 170 | 16.7575 | - | - | - | - | - | | 8.9806 | 174 | - | 0.0013 | 0.0009 | 0.0007 | 0.0008 | 0.0007 | | 9.2903 | 180 | 16.8138 | - | - | - | - | - | | 9.8065 | 190 | 16.9424 | 0.0015 | 0.0020 | 0.0023 | 0.0011 | 0.0019 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.11.4 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.33.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# German Semantic V3 BMF This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v2-base-de](https://huggingface.co/jinaai/jina-embeddings-v2-base-de). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [jinaai/jina-embeddings-v2-base-de](https://huggingface.co/jinaai/jina-embeddings-v2-base-de) <!-- at revision 13b8b30bd0bbee829ceffb82b282cc714cef836e --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("akot/jina-semantic-bmf-matryoshka-1024-10epochs") # Run inference sentences = [ '67 Abwandlung des Beispiels 1 in Rn. 66: A erhält zudem zwei Kinderzulagen für seine in den Jahren 2004 und 2005 geborenen Kinder. Beitragspflichtige Einnahmen 53.000 € 4 % 2.120 € höchstens 2.100 € anzusetzen 2.100 € abzüglich Zulage 175 € Mindesteigenbeitrag (§ 86 Abs. 1 Satz 2 EStG) 1.925 € Sockelbetrag (§ 86 Abs. 1 Satz 4 EStG) 60 € maßgebend (§ 86 Abs. 1 Satz 5 EStG) 1.925 € Die von A geleisteten Beiträge übersteigen den Mindesteigenbeitrag. Die Zulage wird nicht gekürzt.', 'Wird die Zulage für A gekürzt, wenn die Beiträge den Mindesteigenbeitrag übersteigen?', 'Wie erfolgt die Besteuerung bei der ausgleichsberechtigten Person nach einer externen Teilung?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0018 | | cosine_mrr@10 | 0.0012 | | **cosine_map@100** | **0.0019** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0036 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0007 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0036 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0019 | | cosine_mrr@10 | 0.0014 | | **cosine_map@100** | **0.0023** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0018 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0036 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0006 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0004 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0018 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0036 | | cosine_ndcg@10 | 0.0017 | | cosine_mrr@10 | 0.0011 | | **cosine_map@100** | **0.002** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0 | | cosine_accuracy@10 | 0.0054 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0 | | cosine_precision@10 | 0.0005 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0 | | cosine_recall@10 | 0.0054 | | cosine_ndcg@10 | 0.0018 | | cosine_mrr@10 | 0.0007 | | **cosine_map@100** | **0.0015** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | 
cosine_accuracy@1 | 0.0 | | cosine_accuracy@3 | 0.0 | | cosine_accuracy@5 | 0.0018 | | cosine_accuracy@10 | 0.0018 | | cosine_precision@1 | 0.0 | | cosine_precision@3 | 0.0 | | cosine_precision@5 | 0.0004 | | cosine_precision@10 | 0.0002 | | cosine_recall@1 | 0.0 | | cosine_recall@3 | 0.0 | | cosine_recall@5 | 0.0018 | | cosine_recall@10 | 0.0018 | | cosine_ndcg@10 | 0.0008 | | cosine_mrr@10 | 0.0005 | | **cosine_map@100** | **0.0011** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 4,957 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 145.09 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.57 tokens</li><li>max: 41 tokens</li></ul> | * Samples: | positive | anchor | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>134 Eine Rückzahlungsverpflichtung besteht nicht für den Teil der Zulagen, der auf nach § 1 Abs. 1 Nr. 2 AltZertG angespartes gefördertes Altersvorsorgevermögen entfällt, wenn es in Form einer Hinterbliebenenrente an die dort genannten Hinterbliebenen ausgezahlt wird. Dies gilt auch für den entsprechenden Teil der Steuerermäßigung.</code> | <code>Muss man Zulagen zurückzahlen, wenn das Altersvorsorgevermögen als Hinterbliebenenrente ausgezahlt wird?</code> | | <code>140 Beendet der Zulageberechtigte vor der vollständigen Rückzahlung des AltersvorsorgeEigenheimbetrags die Nutzung zu eigenen Wohnzwecken, wird er so behandelt, als habe er den noch nicht zurückgezahlten Betrag schädlich verwendet. 
Die auf den noch ausstehenden Rückzahlungsbetrag entfallenden Zulagen sowie die nach § 10a Abs. 4 EStG gesondert festgestellten Steuerermäßigungen sind zurückzuzahlen (§ 92a Abs. 3 EStG). Die im noch ausstehenden Rückzahlungsbetrag enthaltenen Zuwächse (z.B. Zinserträge und Kursgewinne) Seite 41 sind als sonstige Einkünfte zu versteuern (§ 22 Nr. 5 Satz 5 Halbsatz 1 EStG). Außerdem hat der Zulageberechtigte den Vorteil zu versteuern, der sich aus der zinslosen Nutzung des noch nicht zurückgezahlten Betrags ergibt. Zugrunde gelegt wird hierbei eine Verzinsung von 5 % (Zins und Zinseszins) für jedes volle Kalenderjahr der Nutzung (§ 22 Nr. 5 Satz 5 Halbsatz 2 EStG). Diese Folgen treten nicht ein, wenn er den noch nicht zurückgezahlten Betrag in ein Folgeobjekt investiert (§ 92a Abs. 4 Satz 3 Nr. 1 EStG) oder zugunsten eines auf seinen Namen lautenden zertifizierten Altersvorsorgevertrags einzahlt (§ 92a Abs. 4 Satz 3 Nr. 2 EStG).</code> | <code>Was geschieht steuerlich, wenn der AltersvorsorgeEigenheimbetrag nicht vollständig zurückgezahlt wird und die Immobilie nicht mehr selbst genutzt wird?</code> | | <code>144 Die als Einkünfte nach § 22 Nr. 5 Satz 3 EStG i.V.m. § 22 Nr. 5 Satz 2 EStG zu besteuernden Beträge muss der Anbieter gem. § 94 Abs. 1 Satz 4 EStG dem Zulageberechtigten bescheinigen und im Wege des Rentenbezugsmitteilungsverfahrens (§ 22a EStG) mitteilen. Ergeben sich insoweit steuerpflichtige Einkünfte nach § 22 Nr. 5 Satz 3 EStG für einen anderen Leistungsempfänger (z. B. Erben), ist für diesen eine entsprechende Rentenbezugsmitteilung der ZfA zu übermitteln.</code> | <code>Was muss im Falle eines anderen Leistungsempfängers, wie Erben, hinsichtlich der Rentenbezugsmitteilung getan werden?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 10 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - 
`fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.5161 | 10 | 19.1189 | - | - | - | - | - | | 0.9806 | 19 | - | 0.0008 | 0.0009 | 0.0007 | 0.0004 | 0.0006 | | 1.0323 | 20 | 19.0037 | - | - | - | - | - | | 1.5484 | 30 | 18.7625 | - | - | - | - | - | | 1.9613 | 38 | - | 0.0007 | 0.0008 | 0.0007 | 0.0004 | 0.0008 | | 2.0645 | 40 | 18.3193 | - | - | - | - | - | | 2.5806 | 50 | 18.5378 | - | - | - | - | - | | **2.9935** | **58** | **-** | **0.0018** | **0.002** | **0.0017** | **0.0018** | **0.0016** | | 3.0968 | 60 | 18.103 | - | - | - | - | - | | 3.6129 | 70 | 18.0413 | - | - | - | - | - | | 3.9742 | 77 | - | 0.0011 | 0.0006 | 0.0004 | 0.0006 | 0.0004 | | 4.1290 | 80 | 17.7272 | - | - | - | - | - | | 4.6452 | 90 | 17.4282 | - | - | - | - | - | | 4.9548 | 96 | - | 0.0014 | 0.0011 | 0.0009 | 0.0005 | 0.0007 | | 5.1613 | 100 | 17.2805 | - | - | - | - | - | | 5.6774 | 110 | 17.3362 | - | - | - | - | - | | 5.9871 | 116 | - | 0.0005 | 0.0002 | 0.0002 | 0.0016 | 0.0003 | | 6.1935 | 120 | 17.1479 | - | - | - | - | - | | 
6.7097 | 130 | 17.1761 | - | - | - | - | - | | 6.9677 | 135 | - | 0.0020 | 0.0025 | 0.0025 | 0.0017 | 0.0020 | | 7.2258 | 140 | 16.8886 | - | - | - | - | - | | 7.7419 | 150 | 16.9517 | - | - | - | - | - | | 8.0 | 155 | - | 0.0008 | 0.0009 | 0.0010 | 0.0005 | 0.0010 | | 8.2581 | 160 | 17.0198 | - | - | - | - | - | | 8.7742 | 170 | 16.7575 | - | - | - | - | - | | 8.9806 | 174 | - | 0.0013 | 0.0009 | 0.0007 | 0.0008 | 0.0007 | | 9.2903 | 180 | 16.8138 | - | - | - | - | - | | 9.8065 | 190 | 16.9424 | 0.0015 | 0.0020 | 0.0023 | 0.0011 | 0.0019 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.11.4 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.33.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
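As a rough guide, the Matryoshka setup described in the training details can be reproduced with the Sentence Transformers v3 training API along the following lines. This is a hedged sketch: the training file `bmf_pairs.csv` and its `anchor`/`positive` columns are placeholders standing in for the unnamed training dataset, and the hyperparameters simply mirror the non-default values listed above.

```python
from datasets import load_dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Jina v2 models need trust_remote_code to load their custom architecture
model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de", trust_remote_code=True)

# Placeholder dataset with "anchor" and "positive" text columns
train_dataset = load_dataset("csv", data_files="bmf_pairs.csv")["train"]

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the loss parameters above
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="jina-semantic-bmf-matryoshka-1024-10epochs",
    num_train_epochs=10,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives for MNRL
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```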
{"base_model": "jinaai/jina-embeddings-v2-base-de", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4957", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "312 Aus steuerlicher Sicht ist es möglich, mehrere Versorgungszusagen nebeneinander, also neben einer Altzusage auch eine Neuzusage zu erteilen (z. B. „alte“ Direktversicherung und „neuer“ Pensionsfonds).", "sentences": ["Wann liegt bei der betrieblichen Altersversorgung eine schädliche Verwendung vor?", "Welche steuerliche Behandlung erfahren Auszahlungen aus Altersvorsorgeverträgen nach § 22 Nr. 5 EStG?", "Können verschiedene Versorgungszusagen wie Direktversicherung und Pensionsfonds gleichzeitig bestehen?"]}, {"source_sentence": "5 Pflichtversicherte nach dem Gesetz über die Alterssicherung der Landwirte gehören, soweit sie nicht als Pflichtversicherte der gesetzlichen Rentenversicherung ohnehin bereits anspruchsberechtigt sind, in dieser Eigenschaft ebenfalls zum begünstigten Personenkreis. Darunter fallen insbesondere die in Anlage 1 Abschnitt B aufgeführten Personen.", "sentences": ["Wann wird das Anrecht der ausgleichsberechtigten Person bei intern geteilter Altersvorsorge als abgeschlossen betrachtet?", "Welche Personen sind in der Anlage 1 Abschnitt B bezüglich der Alterssicherung der Landwirte aufgeführt?", "In welchen Fällen führt die Möglichkeit einer Beitragserstattung nicht zur Versagung der Anerkennung als betriebliche Altersversorgung?"]}, {"source_sentence": "233 Voraussetzung für die Förderung durch Sonderausgabenabzug nach § 10a EStG und Zulage nach Abschnitt XI EStG ist in den Fällen der Rz. 231 f., dass der Steuerpflichtige zum begünstigten Personenkreis gehört. Die zeitliche Zuordnung dieser Altersvorsorgebeiträge richtet sich grundsätzlich nach § 11 Abs. 2 EStG.", "sentences": ["Wer gehört zum begünstigten Personenkreis für die Altersvorsorgeförderung?", "Wie werden erstattete Kosten eines Altersvorsorgevertrags besteuert, wenn sie dem Steuerpflichtigen ausgezahlt werden?", "Ist der Übertragungswert einer betrieblichen Altersversorgung bei einem Arbeitgeberwechsel steuerfrei?"]}, {"source_sentence": "127 Die Entnahme des Teilkapitalbetrags von bis zu 30 % des zur Verfügung stehenden Kapitals aus dem Vertrag hat zu Beginn der Auszahlungsphase zu erfolgen. Eine Verteilung über mehrere Auszahlungszeitpunkte ist nicht möglich.", "sentences": ["Kann ich den Teilkapitalbetrag aus meiner Altersvorsorge zu verschiedenen Zeitpunkten entnehmen?", "Welche Einkunftsarten können Leistungen aus einer Versorgungszusage des Arbeitgebers sein?", "Was ist im Todesfall des Zulageberechtigten bezüglich der Förderbeiträge zu tun?"]}, {"source_sentence": "67 Abwandlung des Beispiels 1 in Rn. 66: A erhält zudem zwei Kinderzulagen für seine in den Jahren 2004 und 2005 geborenen Kinder. Beitragspflichtige Einnahmen 53.000 € 4 % 2.120 € höchstens 2.100 € anzusetzen 2.100 € abzüglich Zulage 175 € Mindesteigenbeitrag (§ 86 Abs. 1 Satz 2 EStG) 1.925 € Sockelbetrag (§ 86 Abs. 
1 Satz 4 EStG) 60 € maßgebend (§ 86 Abs. 1 Satz 5 EStG) 1.925 € Die von A geleisteten Beiträge übersteigen den Mindesteigenbeitrag. Die Zulage wird nicht gekürzt.", "sentences": ["Wird die Zulage für A gekürzt, wenn die Beiträge den Mindesteigenbeitrag übersteigen?", "Was versteht man unter Sonderzahlungen des Arbeitgebers?", "Wie erfolgt die Besteuerung bei der ausgleichsberechtigten Person nach einer externen Teilung?"]}], "model-index": [{"name": "German Semantic V3 BMF", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.0, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.0018148820326678765, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.0018148820326678765, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.003629764065335753, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.0006049606775559588, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.00036297640653357535, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.00036297640653357535, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.0018148820326678765, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.0018148820326678765, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.003629764065335753, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.0017915370974219231, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.0012099213551119176, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.001909500365693067, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.0, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.0018148820326678765, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.003629764065335753, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.003629764065335753, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.0006049606775559588, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.0007259528130671507, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.00036297640653357535, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.0018148820326678765, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.003629764065335753, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.003629764065335753, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.0019266902207710538, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.0013611615245009074, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.0022709772837192373, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", 
"value": 0.0, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.0018148820326678765, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.0018148820326678765, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.003629764065335753, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.0006049606775559588, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.00036297640653357535, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.00036297640653357535, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.0018148820326678765, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.0018148820326678765, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.003629764065335753, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.001717594610448614, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.001134301270417423, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.0020295043929584406, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.0, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.0, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.0054446460980036296, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.0, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.0, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.000544464609800363, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.0, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.0054446460980036296, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.001765339491030367, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.0007309941520467837, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.0014606683373623462, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.0, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.0, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.0018148820326678765, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.0018148820326678765, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.0, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.00036297640653357535, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.00018148820326678767, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.0, "name": "Cosine 
Recall@3"}, {"type": "cosine_recall@5", "value": 0.0018148820326678765, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.0018148820326678765, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.0007816271471386444, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.00045372050816696913, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.0011358919077603595, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,590
Lagstill/Varsity_module2_bot
Lagstill
text-generation
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-04-24T08:31:05Z
2024-04-24T16:37:13+00:00
18
0
--- license: gemma --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning examples You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb). #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. 
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **45.0** | **56.9** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
| Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. 
### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
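#### Loading this repository (illustrative)

The snippets above all target the upstream `google/gemma-7b` checkpoint. Because this repository's tags indicate a 4-bit bitsandbytes export, a minimal loading sketch for the repository itself might look like the following; this is an assumption-based illustration rather than an official example, and it presumes the quantization configuration was saved with the checkpoint.

```python
# Illustrative sketch, not an official example. Assumes the 4-bit bitsandbytes
# quantization config is stored alongside the weights in this repository.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "Lagstill/Varsity_module2_bot"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```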
null
Non_BioNLP
{"license": "gemma"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,591
gamchan/custom_summarization_dataset
gamchan
null
[ "region:us" ]
2024-09-20T03:34:08Z
2024-09-20T03:34:11+00:00
0
0
---
{}
---

# Dataset Card for Custom Text Dataset

## Dataset Name
Custom Text Dataset for Summarization (based on CNN/DailyMail)

## Overview
This dataset is a customized version of the CNN/DailyMail dataset, which contains news articles and their corresponding summaries. The dataset is structured for text summarization tasks, where the input is a long news article and the output is its concise summary. In this customized version, a small subset of the data has been extracted for quick experimentation and training.

- Train Size: 1 example
- Test Size: 100 examples
- Task: Abstractive Summarization

## Composition
The dataset contains two features:

1. **sentence**: This is the full news article (input).
2. **labels**: This is the corresponding human-written summary (output).

### Example:
```text
Article: "(CNN)The Palestinian Authority officially became the 123rd member of the International Criminal Court on Wednesday, ..."
Summary: "The Palestinian Authority officially becomes the 123rd member of the International Criminal Court..."
```
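### Loading Example (illustrative)

The sketch below shows one way to load and inspect the dataset; it assumes the dataset is published under the repository id above with standard `train`/`test` splits, which may differ from the actual configuration.

```python
# Illustrative sketch: inspecting the dataset described above.
# Assumes standard "train"/"test" splits under this repository id.
from datasets import load_dataset

ds = load_dataset("gamchan/custom_summarization_dataset")

example = ds["test"][0]
print(example["sentence"][:200])  # the full news article (input), truncated for display
print(example["labels"])          # the human-written summary (output)
```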
null
Non_BioNLP
{}
task
[ "SUMMARIZATION" ]
43,592
carlesoctav/SentimentClassifierBarbieDune-8shot
carlesoctav
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "en", "arxiv:2209.11055", "license:apache-2.0", "model-index", "region:us" ]
2024-03-16T05:35:01Z
2024-03-16T05:35:36+00:00
4
0
--- language: en library_name: setfit license: apache-2.0 metrics: - accuracy - precision - recall - f1 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: not so jolly dolly so, last weekend my wife and i watched the oppen part of the "barbenheimer" 2023 box-office two-headed monster and this week it was barbie's turn. from the little i'd incidentally read in advance about the day-glo billion dollar blockbuster, i was expecting some kind of retro-cool, existentialist, post-modernist satire on the battle of the sexes, consumerism and childhood buffed up with a little diversity along the way, but somehow with all these ducks lined up in a row, i felt the film missed the mark.it starts brightly with eye-candy sets in fifty shades of pink as we're introduced to margot robbie's barbie in her barbie-world of alternative barbies, see through doll's houses and their various incomplete consumer goods, for example our girl has her daily dry-shower and drinks non-existent tea. everything appears to be perfect in her / their perfect world, unless you're the boyfriend ken, here also in a variety of forms, all doomed to exist only in barbie's slipstream and so experience recurring frustration at getting precisely nowhere, all the time, with the object of his / their, i hesitate to call it, desire.then things start to go wrong for robbie's "stereotypical" barbie. she thinks of death and starts to malfunction and after a visit to kate mckinnon's weird barbie, a concept i have to say i didn't get at all, she determines to go to the real world to connect with the disillusioned mattel employee, played by america ferrera, whose negativity, channelled through her disinterested daughter ariana greenblatt, is upsetting the living doll's equilibrium. ryan gosling's wheedling ken is also along for the ride and stows away in her penelope pitstop-mobile and together they head for l. a., where ferrera lives, the headquarters of the manufacturer mattel.for me, the film went downhill fast from there with ken's head getting messed-up with perceptions of patriarchy while barbie has a meltdown over her identity-crisis. there are unfunny, over-played scenes where barbie experiences humiliation at the hands of greenblatt and her school chums, traipses down to mattel hq to confront the all-male board of directors headed by a mis-cast will ferrell as the company ceo, before returning to barbie-world with ferrera and greenblatt in tow to take down ken's new-model kendom where outdated male-superiority is literally back in the saddle.i have to admit, i got very bored, very soon with this empty, supposedly satirical high-concept, fantasy-comedy. a world box-office of 1.5 billion dollars and eight oscar nominations actually makes me wonder if i've not switched places too with barbie-world as i'm afraid nothing about the movie, including the soundtrack and unsuccessful attempts at either comedy or pathos (especially when they wheel in rhea perlman as the doll's now-enlightened creator) all missing me by miles.when at one point, all her namesakes shout "go barbie!', i must admit i was with them 100% but for completely different reasons. - text: way better than expected i was amongst the people who thought they saw a majority of this film based on all of the filming stills posted on the bird app in 2022. i still wanted to see it. something about their perfect neon rollerblading outfits. 
i saw one preview and wasn't sure what the plot was going to be, i didn't care, i still wanted to see it. i wasn't expecting it to be amazing, but amazing it was. well done!! margot really knocked it out of the dollhouse. ryan i'll never look at the same way way again. this 1980s barbie superman is very pleased. it won't be long until the opening dance scene is all over the clock app. i haven't felt so compelled to learn choreography since michael jackson's thriller. also, girls rule. sorry, ken. 12 out of 24 found this helpful. was this review helpful? sign in to vote. permalink - text: 'anyone remember the film "life size"? from a far, i can see why people would absolutely hate this movie. just the concept of "barbie: the movie" is enough to make people feel like the art of cinema has been compromised by corporate america. but, as a whole, this movie was very well received. it made over a billion dollars at the box-office and was nominated for 8 oscars including "best picture", so clearly it some people really liked it.there is a lot to enjoy in this film. the movie does a good job with poking fun at the barbie brand without it feeling too much like a spoof. this is a comedy, so the fact that the film is really funny is kind of an important element. understandably comedy is a subjective thing, so all i can say is for me, i laughed out loud several times through the movie. the movie is clever in how it treats its "worldbuilding" and nicely avoids any firm answers about how this world works. because, yeah, if you think about that sort of stuff in the film there is a lot that doesn''t add up.it is nice that they don''t spend too much time in "the real world" and focus on the creative fun of "barbie world". the movie is more visually unique and can do more gags when that is the case.when the movie is focusing on being a bizarre comedy, that is when some of the best and most memorable things happen. when it tries to have a more serious message, that is where it loses some momentum. don''t misunderstand me, the movie needs some serious stuff in order to make the comedy work. and the stuff with ugly betty and her daughter is good emotional stuff. but towards the end, they realize that barbie needs to have a character arc and feel like they tack one on last second. there are some very funny jokes towards the end, but it does become a little repetitive and the message feels heavy handed by the 5th time it''s brought up. side note: i wanted a cameo from the voice actress of barbie, kelly sheridan, but she wasn''t there.i will emphasize this because hollywood will learn all the wrong lessons from this movie''s success. we do not want a "polly pocket" movie or an "uno" movie. what made this film a success, beyond its brand or its marketing campaign, is that it was uniquely greta gerwig''s vision. the movie wasn''t concerned with mass audience appeal, it would tell jokes that they thought were funny and hoped others would enjoy as well. if you want to duplicate barbie''s success, give creative people control to make some out there stuff.wrapped in plastic, it''s fantastic.' - text: great expectations this film exceeded all of my expectations a n d i was looking forward to seeing it. i wonder about any parents who might bring their children to see it expecting something quite different from what this film is. one hour fifty four minutes of fun from beginning to end. satire, sarcasm , humor at every turn. production values and acting off the charts good. 
i can't believe mattel let them make the movie with complete artistic freedom. think of nicole kidman in the amc promo before a movie starts and says, " somehow heartbreak feels good in a place like this ". well somehow watching a silly spoof like this movie feels great all the way through and even afterwards. i loved it and i am not surprised at the huge box office , this movie rocks. 4 out of 11 found this helpful. was this review helpful? sign in to vote. permalink - text: decent i like what they did with this movie and the characters with its combining the barbie world and the real world. barbie starts getting "vibes" and has to go into the real world to find the girl who played with her to set things right and winds up in the mattel headquarters. something resembling chaos ensues. ken joins her and winds up causing further damage. i like what they do in various stages of the story and with the characters. it was overall a very pleasant surprise snd a good movie with a good cast. margot robbie, ryan gosling, america ferrera, and will ferrell were all good in their roles. if you are a movie and/or a barbie fan, you will love this movie.*** out of **** 2 out of 7 found this helpful. was this review helpful? sign in to vote. permalink inference: true model-index: - name: SetFit results: - task: type: text-classification name: Text Classification dataset: name: data/raw/barbie.jsonl type: unknown split: test metrics: - type: accuracy value: 0.8811688311688312 name: Accuracy - type: precision value: 0.9952114924181963 name: Precision - type: recall value: 0.8757022471910112 name: Recall - type: f1 value: 0.9316398954053045 name: F1 --- # SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
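A minimal sketch of this two-phase procedure with the `setfit` trainer is shown below. The dataset, base Sentence Transformer checkpoint, and hyperparameter values are illustrative assumptions, not the exact configuration used for this model:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative data: any dataset with a text column and a binary label column works.
dataset = load_dataset("imdb")
train_dataset = dataset["train"].shuffle(seed=42).select(range(64))  # few-shot subset
eval_dataset = dataset["test"].select(range(200))

# Phase 1 starts from a pretrained Sentence Transformer body (assumed checkpoint).
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=16,
    num_epochs=1,  # one epoch of contrastive fine-tuning of the embedding body
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    column_mapping={"text": "text", "label": "label"},
)

# Phase 1: contrastive fine-tuning of the Sentence Transformer.
# Phase 2: fitting the LogisticRegression head on the resulting embeddings.
trainer.train()
print(trainer.evaluate())
```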
## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | negative | <ul><li>"definitely not for kids i've just taken 2 nieces, 9 and 14, to see barbie. they both hated it... really, really hated it!! and, me: i'm just stunned. i've no idea what i've just seen.if barbie is meant to be a dark satire, it's alright. if it's meant to be a kids movie, it's unconscionably bad. and, i'm totally confused which type of movie it's supposed to be. what the literal heck is going on!!if you're looking for a twenty first century dark cultural satire, you're at least in your twenties, you've an open mind, this is your movie... maybe. if you're looking for a movie to take your kids to, before getting ice cream, this is absolutely not the movie you're looking for. my advice: give barbie a miss and go straight for the ice cream. you will be so much happier. i wish i had. 15 out of 30 found this helpful. was this review helpful? sign in to vote. permalink"</li><li>"the most political movie i've ever seen. i loved the effort with the sets and the fun outfits. i also really enjoyed the ads threw out movie.i was really enjoying this movie but then it started getting really political, i was kind of fine with it but then they started making politics the whole story line and so it was really no longer a fun movie. after the movie finished i just ended up leaving miserable and like i waisted money... how is a family meant to enjoy this?? i can't see any 13 year old understanding anything they talk about.also they want to talk about how being a human is hard and that were not perfect but then they hire someone to play a teen girl who has perfect hair and skin. a bit of a confusing message? 180 out of 276 found this helpful. was this review helpful? sign in to vote. permalink"</li><li>"mediocre at best i had high expectations as a result of the media press, however these we're crushed from the start.first of all, i did expect it to feel a little generic and cheesy although all i could feel throughout the movie is that it was rushed and cheap. the storyline was dry and over-political.i'd have liked to see more lgbtq+ representation and breaking stereotypes, as it felt like just a generic film. 
even the barbies were considered stereotypes!comedy throughout was limited and wasn't engaging at all - sometimes trying too hard to emphasise feminism and going too far in the opposite direction to the world is heading in (i'm not encouraging patriarchy).however, i did find the music quite good (especially the billie eilish song) and did redeem the movie a little bit.in summary, the movie was mediocre at best and there's not much to discuss. it felt like a cheap version of movies like spirited and the greatest showman with no real emotion."</li></ul> | | positive | <ul><li>"fabulously done so, i didn't have that much of a high expectation going in to see this because i really wasn't sure what is was going to be. i grew up playing barbie's and have watched the animated films many times with my kids.the movie from start to finish is so thoughtfully executed. it's for everyone, but mainly young girls need to see this. it's a great reminder of how special we are as women and how special all of us are individually as a person and the life we get to experience.another fantastic element is adding a very 90s feel to the film. you will laugh like you used to laugh when fun comedies would come out in theatre's years ago. i haven't seen just a good comedy with heart in years.i brought my 7yr old and we dressed up for it too. i wanted her to see me comfortable in my own skin and her as well and the film touches on that. it's suited for any age, regardless of the rating.i don't give high ratings like this to films usually, as i find small things usually that i wish had been addressed more... but not in barbie. what a fun movie and a great time you'll have watching this in theatre (or at home) knowing it's your childhood come to life."</li><li>"sure, life in plastic is fantastic but only in barbieland the barbie movie was unexpected. it was a colorful and clever ride in the cinema. i was not expecting all the deep emotions that the film gave. life is not all about perfection, which a lot of people wants to constantly achieve these days, imperfections is what identifies us and that's okay.i'm a visual kind of cinema goer and i have the say that dance sequence (dance the night) and the scene where barbie and ken where traveling to the real world were amazing. kudos to the creative team behind it, i absolutely adore the set.greta gerwig and her team delivered a visual spectacle with a heart. 10 out of 28 found this helpful. was this review helpful? sign in to vote. permalink"</li><li>"funny with social commentary barbie (margot robbie) lives in barbieland. ken (ryan gosling) is always desperate for her attention. all the ken and barbie dolls live in a happy matriarchy unaware of the real world. out of the blue, barbie starts pondering death and her existence. weird barbie (kate mckinnon) tells her that she has opened a portal to the real world and must fix whatever is her real world problem before she can return to her perfect self. she and stowaway ken find the real world completely different than their expectations. mattel ceo (will ferrell) insists on putting her back in the box. mattel secretary gloria (america ferrera) had caused the problem by drawing new forms of barbie.first, i love the premise and i love the 2001 opening. the start is a lot funnier than i expected. margot robbie is great and ryan gosling is hilarious. it's a great start in barbieland. the first moments in la is a little too much. quite frankly, barbie and ken would not stand out at venice beach. the male leering is good enough. 
they don't need to do the stereotype abusive guys. the real world should be realistic. barbie and ken would be better fishes out of the water in a recognizable regular world. the more average the real world is, the better it is for the characters to showcase their outrageousness.mattel is fun and i like the ridiculous silliness. will ferrell is a good way to show their outrageous quality. the movie has a lot of social commentary and that aspect does threaten to overwhelm it. the last act has some wonky moments as the movie tries to wrap its arms around the heavier social discussions. i would like to keep that more contained and concentrate on the mother daughter relationships. it's gloria and sasha, but it's also ruth and barbara. that's a perfect way to end the movie. i do like the montage idea, but i didn't realize what it was doing initially. it would have been fine to do that for the closing credits. all in all, this is a funny engaging movie and it's able to deal with some of the tougher social material."</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | Precision | Recall | F1 | |:--------|:---------|:----------|:-------|:-------| | **all** | 0.8812 | 0.9952 | 0.8757 | 0.9316 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("carlesoctav/SentimentClassifierBarbieDune-8shot") # Run inference preds = model("decent i like what they did with this movie and the characters with its combining the barbie world and the real world. barbie starts getting \"vibes\" and has to go into the real world to find the girl who played with her to set things right and winds up in the mattel headquarters. something resembling chaos ensues. ken joins her and winds up causing further damage. i like what they do in various stages of the story and with the characters. it was overall a very pleasant surprise snd a good movie with a good cast. margot robbie, ryan gosling, america ferrera, and will ferrell were all good in their roles. if you are a movie and/or a barbie fan, you will love this movie.*** out of **** 2 out of 7 found this helpful. was this review helpful? sign in to vote. permalink") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 112 | 234.1953 | 1424 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 64 | | positive | 64 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-------:|:-------------:|:---------------:| | 0.0019 | 1 | 0.3627 | - | | 0.0962 | 50 | 0.0007 | - | | 0.1923 | 100 | 0.1003 | - | | 0.2885 | 150 | 0.0001 | - | | 0.3846 | 200 | 0.0001 | - | | 0.4808 | 250 | 0.0001 | - | | 0.5769 | 300 | 0.0001 | - | | 0.6731 | 350 | 0.0 | - | | 0.7692 | 400 | 0.0001 | - | | 0.8654 | 450 | 0.0 | - | | 0.9615 | 500 | 0.0 | - | | **1.0** | **520** | **-** | **0.2312** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.11 - SetFit: 1.0.3 - Sentence Transformers: 2.5.1 - Transformers: 4.38.2 - PyTorch: 2.0.1 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | negative | <ul><li>"definitely not for kids i've just taken 2 nieces, 9 and 14, to see barbie. they both hated it... really, really hated it!! and, me: i'm just stunned. i've no idea what i've just seen.if barbie is meant to be a dark satire, it's alright. if it's meant to be a kids movie, it's unconscionably bad. and, i'm totally confused which type of movie it's supposed to be. what the literal heck is going on!!if you're looking for a twenty first century dark cultural satire, you're at least in your twenties, you've an open mind, this is your movie... maybe. if you're looking for a movie to take your kids to, before getting ice cream, this is absolutely not the movie you're looking for. my advice: give barbie a miss and go straight for the ice cream. you will be so much happier. i wish i had. 15 out of 30 found this helpful. was this review helpful? sign in to vote. permalink"</li><li>"the most political movie i've ever seen. i loved the effort with the sets and the fun outfits. i also really enjoyed the ads threw out movie.i was really enjoying this movie but then it started getting really political, i was kind of fine with it but then they started making politics the whole story line and so it was really no longer a fun movie. after the movie finished i just ended up leaving miserable and like i waisted money... how is a family meant to enjoy this?? i can't see any 13 year old understanding anything they talk about.also they want to talk about how being a human is hard and that were not perfect but then they hire someone to play a teen girl who has perfect hair and skin. a bit of a confusing message? 180 out of 276 found this helpful. was this review helpful? 
sign in to vote. permalink"</li><li>"mediocre at best i had high expectations as a result of the media press, however these we're crushed from the start.first of all, i did expect it to feel a little generic and cheesy although all i could feel throughout the movie is that it was rushed and cheap. the storyline was dry and over-political.i'd have liked to see more lgbtq+ representation and breaking stereotypes, as it felt like just a generic film. even the barbies were considered stereotypes!comedy throughout was limited and wasn't engaging at all - sometimes trying too hard to emphasise feminism and going too far in the opposite direction to the world is heading in (i'm not encouraging patriarchy).however, i did find the music quite good (especially the billie eilish song) and did redeem the movie a little bit.in summary, the movie was mediocre at best and there's not much to discuss. it felt like a cheap version of movies like spirited and the greatest showman with no real emotion."</li></ul> | | positive | <ul><li>"fabulously done so, i didn't have that much of a high expectation going in to see this because i really wasn't sure what is was going to be. i grew up playing barbie's and have watched the animated films many times with my kids.the movie from start to finish is so thoughtfully executed. it's for everyone, but mainly young girls need to see this. it's a great reminder of how special we are as women and how special all of us are individually as a person and the life we get to experience.another fantastic element is adding a very 90s feel to the film. you will laugh like you used to laugh when fun comedies would come out in theatre's years ago. i haven't seen just a good comedy with heart in years.i brought my 7yr old and we dressed up for it too. i wanted her to see me comfortable in my own skin and her as well and the film touches on that. it's suited for any age, regardless of the rating.i don't give high ratings like this to films usually, as i find small things usually that i wish had been addressed more... but not in barbie. what a fun movie and a great time you'll have watching this in theatre (or at home) knowing it's your childhood come to life."</li><li>"sure, life in plastic is fantastic but only in barbieland the barbie movie was unexpected. it was a colorful and clever ride in the cinema. i was not expecting all the deep emotions that the film gave. life is not all about perfection, which a lot of people wants to constantly achieve these days, imperfections is what identifies us and that's okay.i'm a visual kind of cinema goer and i have the say that dance sequence (dance the night) and the scene where barbie and ken where traveling to the real world were amazing. kudos to the creative team behind it, i absolutely adore the set.greta gerwig and her team delivered a visual spectacle with a heart. 10 out of 28 found this helpful. was this review helpful? sign in to vote. permalink"</li><li>"funny with social commentary barbie (margot robbie) lives in barbieland. ken (ryan gosling) is always desperate for her attention. all the ken and barbie dolls live in a happy matriarchy unaware of the real world. out of the blue, barbie starts pondering death and her existence. weird barbie (kate mckinnon) tells her that she has opened a portal to the real world and must fix whatever is her real world problem before she can return to her perfect self. she and stowaway ken find the real world completely different than their expectations. 
mattel ceo (will ferrell) insists on putting her back in the box. mattel secretary gloria (america ferrera) had caused the problem by drawing new forms of barbie.first, i love the premise and i love the 2001 opening. the start is a lot funnier than i expected. margot robbie is great and ryan gosling is hilarious. it's a great start in barbieland. the first moments in la is a little too much. quite frankly, barbie and ken would not stand out at venice beach. the male leering is good enough. they don't need to do the stereotype abusive guys. the real world should be realistic. barbie and ken would be better fishes out of the water in a recognizable regular world. the more average the real world is, the better it is for the characters to showcase their outrageousness.mattel is fun and i like the ridiculous silliness. will ferrell is a good way to show their outrageous quality. the movie has a lot of social commentary and that aspect does threaten to overwhelm it. the last act has some wonky moments as the movie tries to wrap its arms around the heavier social discussions. i would like to keep that more contained and concentrate on the mother daughter relationships. it's gloria and sasha, but it's also ruth and barbara. that's a perfect way to end the movie. i do like the montage idea, but i didn't realize what it was doing initially. it would have been fine to do that for the closing credits. all in all, this is a funny engaging movie and it's able to deal with some of the tougher social material."</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | Precision | Recall | F1 | |:--------|:---------|:----------|:-------|:-------| | **all** | 0.8812 | 0.9952 | 0.8757 | 0.9316 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("carlesoctav/SentimentClassifierBarbieDune-8shot") # Run inference preds = model("decent i like what they did with this movie and the characters with its combining the barbie world and the real world. barbie starts getting \"vibes\" and has to go into the real world to find the girl who played with her to set things right and winds up in the mattel headquarters. something resembling chaos ensues. ken joins her and winds up causing further damage. i like what they do in various stages of the story and with the characters. it was overall a very pleasant surprise snd a good movie with a good cast. margot robbie, ryan gosling, america ferrera, and will ferrell were all good in their roles. if you are a movie and/or a barbie fan, you will love this movie.*** out of **** 2 out of 7 found this helpful. was this review helpful? sign in to vote. permalink") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 112 | 234.1953 | 1424 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 64 | | positive | 64 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-------:|:-------------:|:---------------:| | 0.0019 | 1 | 0.3627 | - | | 0.0962 | 50 | 0.0007 | - | | 0.1923 | 100 | 0.1003 | - | | 0.2885 | 150 | 0.0001 | - | | 0.3846 | 200 | 0.0001 | - | | 0.4808 | 250 | 0.0001 | - | | 0.5769 | 300 | 0.0001 | - | | 0.6731 | 350 | 0.0 | - | | 0.7692 | 400 | 0.0001 | - | | 0.8654 | 450 | 0.0 | - | | 0.9615 | 500 | 0.0 | - | | **1.0** | **520** | **-** | **0.2312** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.11 - SetFit: 1.0.3 - Sentence Transformers: 2.5.1 - Transformers: 4.38.2 - PyTorch: 2.0.1 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"language": "en", "library_name": "setfit", "license": "apache-2.0", "metrics": ["accuracy", "precision", "recall", "f1"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "not so jolly dolly so, last weekend my wife and i watched the oppen part of the \"barbenheimer\" 2023 box-office two-headed monster and this week it was barbie's turn. from the little i'd incidentally read in advance about the day-glo billion dollar blockbuster, i was expecting some kind of retro-cool, existentialist, post-modernist satire on the battle of the sexes, consumerism and childhood buffed up with a little diversity along the way, but somehow with all these ducks lined up in a row, i felt the film missed the mark.it starts brightly with eye-candy sets in fifty shades of pink as we're introduced to margot robbie's barbie in her barbie-world of alternative barbies, see through doll's houses and their various incomplete consumer goods, for example our girl has her daily dry-shower and drinks non-existent tea. everything appears to be perfect in her / their perfect world, unless you're the boyfriend ken, here also in a variety of forms, all doomed to exist only in barbie's slipstream and so experience recurring frustration at getting precisely nowhere, all the time, with the object of his / their, i hesitate to call it, desire.then things start to go wrong for robbie's \"stereotypical\" barbie. she thinks of death and starts to malfunction and after a visit to kate mckinnon's weird barbie, a concept i have to say i didn't get at all, she determines to go to the real world to connect with the disillusioned mattel employee, played by america ferrera, whose negativity, channelled through her disinterested daughter ariana greenblatt, is upsetting the living doll's equilibrium. ryan gosling's wheedling ken is also along for the ride and stows away in her penelope pitstop-mobile and together they head for l. a., where ferrera lives, the headquarters of the manufacturer mattel.for me, the film went downhill fast from there with ken's head getting messed-up with perceptions of patriarchy while barbie has a meltdown over her identity-crisis. there are unfunny, over-played scenes where barbie experiences humiliation at the hands of greenblatt and her school chums, traipses down to mattel hq to confront the all-male board of directors headed by a mis-cast will ferrell as the company ceo, before returning to barbie-world with ferrera and greenblatt in tow to take down ken's new-model kendom where outdated male-superiority is literally back in the saddle.i have to admit, i got very bored, very soon with this empty, supposedly satirical high-concept, fantasy-comedy. a world box-office of 1.5 billion dollars and eight oscar nominations actually makes me wonder if i've not switched places too with barbie-world as i'm afraid nothing about the movie, including the soundtrack and unsuccessful attempts at either comedy or pathos (especially when they wheel in rhea perlman as the doll's now-enlightened creator) all missing me by miles.when at one point, all her namesakes shout \"go barbie!', i must admit i was with them 100% but for completely different reasons."}, {"text": "way better than expected i was amongst the people who thought they saw a majority of this film based on all of the filming stills posted on the bird app in 2022. i still wanted to see it. something about their perfect neon rollerblading outfits. 
i saw one preview and wasn't sure what the plot was going to be, i didn't care, i still wanted to see it. i wasn't expecting it to be amazing, but amazing it was. well done!! margot really knocked it out of the dollhouse. ryan i'll never look at the same way way again. this 1980s barbie superman is very pleased. it won't be long until the opening dance scene is all over the clock app. i haven't felt so compelled to learn choreography since michael jackson's thriller. also, girls rule. sorry, ken. 12 out of 24 found this helpful. was this review helpful? sign in to vote. permalink"}, {"text": "anyone remember the film \"life size\"? from a far, i can see why people would absolutely hate this movie. just the concept of \"barbie: the movie\" is enough to make people feel like the art of cinema has been compromised by corporate america. but, as a whole, this movie was very well received. it made over a billion dollars at the box-office and was nominated for 8 oscars including \"best picture\", so clearly it some people really liked it.there is a lot to enjoy in this film. the movie does a good job with poking fun at the barbie brand without it feeling too much like a spoof. this is a comedy, so the fact that the film is really funny is kind of an important element. understandably comedy is a subjective thing, so all i can say is for me, i laughed out loud several times through the movie. the movie is clever in how it treats its \"worldbuilding\" and nicely avoids any firm answers about how this world works. because, yeah, if you think about that sort of stuff in the film there is a lot that doesn't add up.it is nice that they don't spend too much time in \"the real world\" and focus on the creative fun of \"barbie world\". the movie is more visually unique and can do more gags when that is the case.when the movie is focusing on being a bizarre comedy, that is when some of the best and most memorable things happen. when it tries to have a more serious message, that is where it loses some momentum. don't misunderstand me, the movie needs some serious stuff in order to make the comedy work. and the stuff with ugly betty and her daughter is good emotional stuff. but towards the end, they realize that barbie needs to have a character arc and feel like they tack one on last second. there are some very funny jokes towards the end, but it does become a little repetitive and the message feels heavy handed by the 5th time it's brought up. side note: i wanted a cameo from the voice actress of barbie, kelly sheridan, but she wasn't there.i will emphasize this because hollywood will learn all the wrong lessons from this movie's success. we do not want a \"polly pocket\" movie or an \"uno\" movie. what made this film a success, beyond its brand or its marketing campaign, is that it was uniquely greta gerwig's vision. the movie wasn't concerned with mass audience appeal, it would tell jokes that they thought were funny and hoped others would enjoy as well. if you want to duplicate barbie's success, give creative people control to make some out there stuff.wrapped in plastic, it's fantastic."}, {"text": "great expectations this film exceeded all of my expectations a n d i was looking forward to seeing it. i wonder about any parents who might bring their children to see it expecting something quite different from what this film is. one hour fifty four minutes of fun from beginning to end. satire, sarcasm , humor at every turn. production values and acting off the charts good. 
i can't believe mattel let them make the movie with complete artistic freedom. think of nicole kidman in the amc promo before a movie starts and says, \" somehow heartbreak feels good in a place like this \". well somehow watching a silly spoof like this movie feels great all the way through and even afterwards. i loved it and i am not surprised at the huge box office , this movie rocks. 4 out of 11 found this helpful. was this review helpful? sign in to vote. permalink"}, {"text": "decent i like what they did with this movie and the characters with its combining the barbie world and the real world. barbie starts getting \"vibes\" and has to go into the real world to find the girl who played with her to set things right and winds up in the mattel headquarters. something resembling chaos ensues. ken joins her and winds up causing further damage. i like what they do in various stages of the story and with the characters. it was overall a very pleasant surprise snd a good movie with a good cast. margot robbie, ryan gosling, america ferrera, and will ferrell were all good in their roles. if you are a movie and/or a barbie fan, you will love this movie.*** out of **** 2 out of 7 found this helpful. was this review helpful? sign in to vote. permalink"}], "inference": true, "model-index": [{"name": "SetFit", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "data/raw/barbie.jsonl", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8811688311688312, "name": "Accuracy"}, {"type": "precision", "value": 0.9952114924181963, "name": "Precision"}, {"type": "recall", "value": 0.8757022471910112, "name": "Recall"}, {"type": "f1", "value": 0.9316398954053045, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,593
vhpvmx/mt5-small-finetuned-amazon-en-es
vhpvmx
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-12T10:43:47Z
2023-10-12T13:43:44+00:00
13
0
--- base_model: google/mt5-small license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0882 - Rouge1: 17.4498 - Rouge2: 8.7404 - Rougel: 16.8415 - Rougelsum: 16.9066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.4445 | 1.0 | 1209 | 3.3476 | 13.3795 | 5.5143 | 12.8433 | 12.7807 | | 3.9098 | 2.0 | 2418 | 3.2364 | 15.5805 | 7.6998 | 14.9371 | 14.9673 | | 3.5854 | 3.0 | 3627 | 3.1560 | 17.0237 | 8.2938 | 16.3307 | 16.3798 | | 3.4231 | 4.0 | 4836 | 3.1527 | 18.0902 | 9.0059 | 17.1599 | 17.2816 | | 3.3166 | 5.0 | 6045 | 3.1183 | 17.5474 | 8.6267 | 16.9442 | 17.0014 | | 3.2545 | 6.0 | 7254 | 3.0967 | 17.6619 | 8.625 | 17.0709 | 17.0763 | | 3.2021 | 7.0 | 8463 | 3.0897 | 18.1442 | 9.1184 | 17.6043 | 17.5848 | | 3.1818 | 8.0 | 9672 | 3.0882 | 17.4498 | 8.7404 | 16.8415 | 16.9066 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
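The card does not include a usage snippet; a minimal inference sketch with the `transformers` summarization pipeline is shown below (the input text and generation settings are illustrative assumptions):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in this card.
summarizer = pipeline(
    "summarization",
    model="vhpvmx/mt5-small-finetuned-amazon-en-es",
)

review = (
    "I bought this coffee maker last month and it has worked flawlessly. "
    "It brews quickly, is easy to clean, and the carafe keeps coffee warm for hours."
)

# max_length / min_length are illustrative; tune them to the expected summary length.
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```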
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0882 - Rouge1: 17.4498 - Rouge2: 8.7404 - Rougel: 16.8415 - Rougelsum: 16.9066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.4445 | 1.0 | 1209 | 3.3476 | 13.3795 | 5.5143 | 12.8433 | 12.7807 | | 3.9098 | 2.0 | 2418 | 3.2364 | 15.5805 | 7.6998 | 14.9371 | 14.9673 | | 3.5854 | 3.0 | 3627 | 3.1560 | 17.0237 | 8.2938 | 16.3307 | 16.3798 | | 3.4231 | 4.0 | 4836 | 3.1527 | 18.0902 | 9.0059 | 17.1599 | 17.2816 | | 3.3166 | 5.0 | 6045 | 3.1183 | 17.5474 | 8.6267 | 16.9442 | 17.0014 | | 3.2545 | 6.0 | 7254 | 3.0967 | 17.6619 | 8.625 | 17.0709 | 17.0763 | | 3.2021 | 7.0 | 8463 | 3.0897 | 18.1442 | 9.1184 | 17.6043 | 17.5848 | | 3.1818 | 8.0 | 9672 | 3.0882 | 17.4498 | 8.7404 | 16.8415 | 16.9066 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.14.1
{"base_model": "google/mt5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
task
[ "SUMMARIZATION" ]
43,594
fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Internet", "Backbone", "Colocation", "IPv6", "Network", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-06T22:23:07Z
2024-05-06T22:23:22+00:00
6
0
--- datasets: - fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Internet - Backbone - Colocation - IPv6 - Network --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: Internet Backbone and Colocation Provider ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: Internet Backbone and Colocation Provider ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/jina-embeddings-v2-base-en-562024-j9xx-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Internet", "Backbone", "Colocation", "IPv6", "Network"]}
task
[ "TEXT_CLASSIFICATION" ]
43,596
RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us" ]
2024-10-26T12:55:08Z
2024-10-26T13:32:47+00:00
46
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Walter-StableLM-3B - GGUF - Model creator: https://huggingface.co/KnutJaegersberg/ - Original model: https://huggingface.co/KnutJaegersberg/Walter-StableLM-3B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Walter-StableLM-3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q2_K.gguf) | Q2_K | 1.01GB | | [Walter-StableLM-3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_S.gguf) | Q3_K_S | 1.17GB | | [Walter-StableLM-3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K.gguf) | Q3_K | 1.3GB | | [Walter-StableLM-3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_M.gguf) | Q3_K_M | 1.3GB | | [Walter-StableLM-3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_L.gguf) | Q3_K_L | 1.4GB | | [Walter-StableLM-3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.IQ4_XS.gguf) | IQ4_XS | 1.43GB | | [Walter-StableLM-3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_0.gguf) | Q4_0 | 1.5GB | | [Walter-StableLM-3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.IQ4_NL.gguf) | IQ4_NL | 1.51GB | | [Walter-StableLM-3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K_S.gguf) | Q4_K_S | 1.51GB | | [Walter-StableLM-3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K.gguf) | Q4_K | 1.59GB | | [Walter-StableLM-3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K_M.gguf) | Q4_K_M | 1.59GB | | [Walter-StableLM-3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_1.gguf) | Q4_1 | 1.65GB | | [Walter-StableLM-3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_0.gguf) | Q5_0 | 1.81GB | | [Walter-StableLM-3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K_S.gguf) | Q5_K_S | 1.81GB | | [Walter-StableLM-3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K.gguf) | Q5_K | 1.86GB | | [Walter-StableLM-3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K_M.gguf) | Q5_K_M | 1.86GB | | [Walter-StableLM-3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_1.gguf) | Q5_1 | 1.96GB | | [Walter-StableLM-3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q6_K.gguf) | Q6_K | 2.14GB | | 
[Walter-StableLM-3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q8_0.gguf) | Q8_0 | 2.77GB | Original model description: --- license: apache-2.0 datasets: - KnutJaegersberg/Auton --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63732ebbbd81fae2b3aaf3fb/b3PQZKV_xZCGL8W5pjCpu.png) Walter is an unaligned, free thinking AI assistant that has been given time to think about things. It's trained on instruction datasets with open source licenses. It covers a lot of tasks, 2/3 of the samples are from large datasets like flan, but also other datasets. It knows a few tricks, shown by the prompting examples below: General Prompt Examples: ``` ### Instruction: Please answer the following question: Process: - The bat sends out sound waves from their mouth or nose - The sound waves hit an object - Echoes are produced - The echo returns to the bat&#x27;s ears - The bat hears the echo - The bat can figure out where the object is located. suppose No bats live here happens, how will it affect STRONGER echolocation. Which of the following is the supposed perturbation? - directly impacting a step of the process - indirectly impacting a step of the process - not impacting any step of the process Answer: ### Response: indirectly impacting a step of the process ``` ``` ### Instruction: You will be given a text in Russian language which contain different emotion labels from the list - ['joy', ' sadness', 'surprise', 'fear', 'anger']. You need to output the incorrect emotion label, which is irrelevant to the input text. Your answer (i) should contain only one emotion label (ii) should be unambiguous. ### Input: То чувство, когда идешь домой после 3-х часовой тренировки и в подъезде пахнет запеченной курицей - зачем вы меня дразните :( ### Response: surprise ``` CoT Prompt Example: ``` ### Instruction: In this task, you are given a sentence with a missing word that can be an object, a person, and/or an action. Fill in the blank with a plausible word. Although each sentence has many correct answers, you only have to write one answer. PersonX puts ___ on PersonY's car ### Reasoning: Since PersonX is putting water on PersonY's car, it could be assumed that the car is on fire. ### Response: water ``` Summarization Example: ``` ### Instruction: Below is a text. Write a comprehensive, concise and coherent summary. ### Input: Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ==Overview== Embodied bilingual language refers to the role ...... ### Response: Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. 
An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ``` Essay Example: ``` ### Instruction: Write the original essay for the following summary: A sleeveless dress with a round neck is a great day dress and can be worn for both formal and casual occasions. It has a sweetheart neckline and is made from 100% silk. The dress has a chiffon overlay that covers up the miniskirt and still adds to the class of the dress. ### Response: Sleeveless Dress with a Round Neck Essay Casual wear, day dress, more formal dress The garment is a sleeveless dress with a round neck and has a sweetheart neckline. It has sheer detail on the neckline and on the back. These aspects of design make it suitable to be worn for day and evening occasions. Its great detail and chicness make it suitable for more formal events, .... ```
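The card above lists the GGUF quantizations and prompt formats but includes no loading code. The following is a minimal, hypothetical sketch of running one of the listed files locally with the llama-cpp-python package, reusing the "### Instruction:" / "### Response:" format shown in the examples; the package choice, quant file, context window, and sampling settings are assumptions rather than recommendations from the original card.

```python
# Hypothetical example: running one of the GGUF quantizations listed above with
# llama-cpp-python. The chosen file, context size and sampling settings are
# assumptions; any quant from the table can be substituted.
from llama_cpp import Llama

llm = Llama(
    model_path="Walter-StableLM-3B.Q4_K_M.gguf",  # downloaded from the table above
    n_ctx=2048,
)

# Prompt format follows the card's examples: "### Instruction:" / "### Response:"
prompt = (
    "### Instruction:\n"
    "Below is a text. Write a comprehensive, concise and coherent summary.\n"
    "### Input:\n"
    "<your text here>\n"
    "### Response:\n"
)

result = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```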
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Walter-StableLM-3B - GGUF - Model creator: https://huggingface.co/KnutJaegersberg/ - Original model: https://huggingface.co/KnutJaegersberg/Walter-StableLM-3B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Walter-StableLM-3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q2_K.gguf) | Q2_K | 1.01GB | | [Walter-StableLM-3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_S.gguf) | Q3_K_S | 1.17GB | | [Walter-StableLM-3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K.gguf) | Q3_K | 1.3GB | | [Walter-StableLM-3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_M.gguf) | Q3_K_M | 1.3GB | | [Walter-StableLM-3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q3_K_L.gguf) | Q3_K_L | 1.4GB | | [Walter-StableLM-3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.IQ4_XS.gguf) | IQ4_XS | 1.43GB | | [Walter-StableLM-3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_0.gguf) | Q4_0 | 1.5GB | | [Walter-StableLM-3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.IQ4_NL.gguf) | IQ4_NL | 1.51GB | | [Walter-StableLM-3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K_S.gguf) | Q4_K_S | 1.51GB | | [Walter-StableLM-3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K.gguf) | Q4_K | 1.59GB | | [Walter-StableLM-3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_K_M.gguf) | Q4_K_M | 1.59GB | | [Walter-StableLM-3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q4_1.gguf) | Q4_1 | 1.65GB | | [Walter-StableLM-3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_0.gguf) | Q5_0 | 1.81GB | | [Walter-StableLM-3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K_S.gguf) | Q5_K_S | 1.81GB | | [Walter-StableLM-3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K.gguf) | Q5_K | 1.86GB | | [Walter-StableLM-3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_K_M.gguf) | Q5_K_M | 1.86GB | | [Walter-StableLM-3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q5_1.gguf) | Q5_1 | 1.96GB | | [Walter-StableLM-3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q6_K.gguf) | Q6_K | 2.14GB | | 
[Walter-StableLM-3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_Walter-StableLM-3B-gguf/blob/main/Walter-StableLM-3B.Q8_0.gguf) | Q8_0 | 2.77GB | Original model description: --- license: apache-2.0 datasets: - KnutJaegersberg/Auton --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63732ebbbd81fae2b3aaf3fb/b3PQZKV_xZCGL8W5pjCpu.png) Walter is an unaligned, free thinking AI assistant that has been given time to think about things. It's trained on instruction datasets with open source licenses. It covers a lot of tasks, 2/3 of the samples are from large datasets like flan, but also other datasets. It knows a few tricks, shown by the prompting examples below: General Prompt Examples: ``` ### Instruction: Please answer the following question: Process: - The bat sends out sound waves from their mouth or nose - The sound waves hit an object - Echoes are produced - The echo returns to the bat&#x27;s ears - The bat hears the echo - The bat can figure out where the object is located. suppose No bats live here happens, how will it affect STRONGER echolocation. Which of the following is the supposed perturbation? - directly impacting a step of the process - indirectly impacting a step of the process - not impacting any step of the process Answer: ### Response: indirectly impacting a step of the process ``` ``` ### Instruction: You will be given a text in Russian language which contain different emotion labels from the list - ['joy', ' sadness', 'surprise', 'fear', 'anger']. You need to output the incorrect emotion label, which is irrelevant to the input text. Your answer (i) should contain only one emotion label (ii) should be unambiguous. ### Input: То чувство, когда идешь домой после 3-х часовой тренировки и в подъезде пахнет запеченной курицей - зачем вы меня дразните :( ### Response: surprise ``` CoT Prompt Example: ``` ### Instruction: In this task, you are given a sentence with a missing word that can be an object, a person, and/or an action. Fill in the blank with a plausible word. Although each sentence has many correct answers, you only have to write one answer. PersonX puts ___ on PersonY's car ### Reasoning: Since PersonX is putting water on PersonY's car, it could be assumed that the car is on fire. ### Response: water ``` Summarization Example: ``` ### Instruction: Below is a text. Write a comprehensive, concise and coherent summary. ### Input: Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ==Overview== Embodied bilingual language refers to the role ...... ### Response: Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. 
An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ``` Essay Example: ``` ### Instruction: Write the original essay for the following summary: A sleeveless dress with a round neck is a great day dress and can be worn for both formal and casual occasions. It has a sweetheart neckline and is made from 100% silk. The dress has a chiffon overlay that covers up the miniskirt and still adds to the class of the dress. ### Response: Sleeveless Dress with a Round Neck Essay Casual wear, day dress, more formal dress The garment is a sleeveless dress with a round neck and has a sweetheart neckline. It has sheer detail on the neckline and on the back. These aspects of design make it suitable to be worn for day and evening occasions. Its great detail and chicness make it suitable for more formal events, .... ```
{}
task
[ "SUMMARIZATION" ]
43,597
IDEA-CCNL/Ziya2-13B-Chat
IDEA-CCNL
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "transformer", "封神榜", "en", "zh", "cn", "arxiv:2210.08590", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-13T06:00:56Z
2023-10-16T05:48:15+00:00
14
11
--- language: - en - zh - cn license: apache-2.0 tags: - transformer - 封神榜 tasks: - text-generation model_type: - gpt - llama domain: - nlp --- # Ziya2-13B-Chat - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) # 姜子牙系列模型 - [Ziya-LLaMA-13B-v1.1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1) - [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward) - [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1) - [Ziya-Writing-LLaMa-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1) - [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1) - [Ziya-Coding-15B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Coding-15B-v1) - [Ziya-Coding-34B-v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0) - [Ziya2-13B-Base](https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base) - [Ziya-Reader-13B-v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Reader-13B-v1.0) ## 简介 Brief Introduction Ziya2-13B-Chat是基于Ziya2-13B-Base的对话模型,在30万高质量的通用指令微调数据以及40万知识增强的指令微调数据上进行了有监督训练,并且在数万条高质量人类偏好数据训练的奖励模型上进行了全参数的人类反馈强化学习训练。 Ziya2-13B-Chat is a chat version of Ziya2-13B-Base. Ziya2-13B-Chat was fine-tuned on 300,000 high-quality general instruction data as well as 400,000 knowledge-enhanced instruction data, and then trained with full-parameter RLHF on a feedback model trained on tens of thousands of high-quality human preference data. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | |:----------:|:-------:|:---------:|:--------:|:------------:|:---------------:| | 通用 General | AGI模型 | 姜子牙 Ziya | LLaMA2 | 13B | English&Chinese | ## 模型信息 Model Information ### 继续预训练 Continual Pretraining Meta在2023年7月份发布了Llama2系列大模型,相比于LLaMA1的1.4万亿Token数据,Llama2预训练的Token达到了2万亿,并在各个榜单中明显超过LLaMA1。 Meta released the Llama2 series of large models in July 2023, with pre-trained tokens reaching 2 trillion compared to Llama1's 1.4 trillion tokens, significantly outperforming Llama1 in various rankings. Ziya2-13B-Base沿用了Ziya-LLaMA-13B高效的中文编解码方式,但采取了更优化的初始化算法使得初始训练loss更低。同时,我们对Fengshen-PT继续训练框架进行了优化,效率方面,整合了FlashAttention2、Apex RMS norm等技术来帮助提升效率,对比Ziya-LLaMA-13B训练速度提升38%(163 TFLOPS/per gpu/per sec)。稳定性方面,我们采取BF16进行训练,修复了底层分布式框架的bug,确保模型能够持续稳定训练,解决了Ziya-LLaMA-13B遇到的训练后期不稳定的问题,并在7.25号进行了直播,最终完成了全部数据的继续训练。我们也发现,模型效果还有进一步提升的趋势,后续也会对Ziya2-13B-Base进行继续优化。 Ziya2-13B-Base retained the efficient Chinese encoding and decoding techniques of Ziya-LLaMA-13B, but employed a more optimized initialization algorithm to achieve lower initial training loss. Additionally, we optimized the Fengshen-PT fine-tuning framework. In terms of efficiency, we integrated technologies such as FlashAttention2 and Apex RMS norm to boost efficiency, resulting in a 38% increase in training speed compared to Ziya-LLaMA-13B (163 TFLOPS per GPU per second). For stability, we used BF16 for training, fixed underlying distributed framework bugs to ensure consistent model training, and resolved the late-stage instability issues encountered in the training of Ziya-LLaMA-13B. We also conducted a live broadcast on July 25th to complete the continued training of all data. We have observed a trend towards further improvements in model performance and plan to continue optimizing Ziya2-13B-Base in the future. 
![loss曲线](./img2.png) ### 指令微调 Supervised Fine-tuning 依托于Ziya2-13B-Base强大的基础能力,我们优化了SFT阶段的训练策略。 我们发现高质量和多样的任务指令数据能够最大程度地激发预训练阶段所学到的知识。因此,我们利用Evol-Instruct的方法,对我们收集到的指令数据集进行了数据增强,并利用奖励模型筛选出了高质量的样本。最终,我们从2000万的指令数据集中,构造得到了30万高质量的通用指令微调数据,涵盖了问答、推理、代码、常识、对话、写作、自然语言理解、安全性等广泛的任务。 此外,我们发现在有监督微调阶段,引入知识增强训练,可以进一步提升模型的效果。我们利用检索模块,显式地将与指令有关的知识拼到上下文后进行训练。在这一部分,我们构造了约10万条知识增强的指令样本。 最终,我们在经过了300B token预训练的Ziya2-13B-Base模型的基础上,使用约40万的指令样本,使用8k的上下文窗口,经过两个epoch的训练得到SFT阶段的模型。 Based on the strong capability of Ziya2-13B-Base, we optimized the training strategy for the supervised fine-tuning phase (SFT). We found that high-quality and varied task instruction data maximizes the stimulation of the knowledge learned in the pre-training phase. Therefore, we utilized the Evol-Instruct approach to augment our collected instruction dataset with data and filtered out high-quality samples using a reward model. We eventually constructed 300,000 high-quality general-purpose instruction fine-tuning data from a 20 million instruction dataset, covering a wide range of tasks such as QA, reasoning, coding, common sense, dialog, writing, natural language understanding, security, etc. In addition, we find that the introduction of knowledge-enhanced training can further improve the model. We used the retrieval module to obtain knowledge related to the questions and concatenated their text into the context of the training data. In this section, we constructed about 100,000 samples of knowledge-enhanced instructions. Finally, we obtained the SFT model after two epochs of training using about 400,000 instruction samples with a context window of 8k, based on the Ziya2-13B-Base model that had been pre-trained with 300B tokens. ### 人类反馈学习 Reinforcement learning from Human Feedback 基于SFT阶段的模型,Ziya2-13B-Chat针对多种问答、写作以及模型安全性的任务上进行了人类偏好的对齐。我们自行采集了数万条高质量人类偏好数据,使用Ziya2-13B-Base训练了人类偏好反馈模型,在各任务的偏好数据上达到了72%以上的准确率。 Based on SFT model, Ziya2-13B-Chat was aligned for human preferences on a variety of Q&A, writing, and safety tasks. We collected tens of thousands of high-quality human preference data on our own and trained a human preference feedback model using Ziya2-13B-Base, achieving over 72% accuracy on preference data across tasks. | 任务类型 task | 偏好识别准确率 Acc| |:----------:|:-------:| | 日常问答 Daily QA | 76.8% | | 知识问答 Knowledge Quizzing | 76.7% | | 日常写作 Daily Writing | 82.3% | | 任务型写作 Task-based Writing | 72.7% | | 故事写作 Story Writing | 75.1% | | 角色扮演 Role-playinh | 77.6% | | 安全类 Safety & Harmlessness| 72.0% | 基于 Fengshen-RLHF 框架,Ziya2-13B-Chat使用以上人类偏好反馈模型进行了人类反馈强化学习,使模型输出更贴合人类偏好的同时具有更高的安全性。 Using Fengshen-RLHF Framework, Ziya2-13B-Chat used the above feedback model for reinforcement learning, making itself more closely match human preferences with higher security. ### 效果评估 Performance 我们在涵盖了常识问答、写作、数学推理、自然语言理解、安全等多种任务的通用能力测试集上进行了人工评估。最终,Ziya2-13B-Chat模型与Ziya-LlaMA-13B-v1.1模型在side-by-side评测下取得了66.5%的胜率,并对人类反馈强化学习前的版本取得了58.4%的胜率。 We conducted human evaluations of Ziya2-13B-Chat on a variety of tasks covering knowledge quizzing, writing, mathematical reasoning, natural language understanding, security, etc. Ziya2-13B-Chat achieved a 66.5% win rate against Ziya-LlaMA-13B-v1.1 under side-by-side comparison, and a 58.4% win rate against the version before performing RLHF. | | Better | Worse | Same | Win Rate| |:----------:|:-------:|:---------:|:--------:|:------------:| | v.s. Ziya-LlaMA-13B-v1.1 | 53.2% | 20.3% | 26.5% | 66.5% | | v.s. 
w/o RLHF | 37.5% | 20.8% | 41.7% | 58.4% | ## 使用 Usage Ziya2-13B-Chat采用"\<human\>:"和"\<bot\>:"作为用户和模型的角色识别Prompt,使用"\n"分隔不同角色对话内容。 在推理时,需要将"\<human\>:"和"\<bot\>:"作为前缀分别拼接至用户问题和模型回复的前面,并使用"\n"串连各对话内容。 Ziya2-13B-Chat adopts "\<human\>:" and "\<bot\>:" as the role recognition prompts for users and models, and uses "\n" to separate the contents of different roles. When doing inference, "\<human\>:" and "\<bot\>:" need to be concatenated as prefixes in front of the user's question and the model's reply respectively, and "\n" is used to join the contents of each role. 以下为具体使用方法: Following are the details of how to use it: ```python3 from transformers import AutoTokenizer, AutoModelForCausalLM import torch device = torch.device("cuda") messages = [{"role": "user", "content": "手机如果贴膜贴了一张防指纹的钢化膜,那屏幕指纹解锁还有效吗?"}] user_prefix = "<human>:" assistant_prefix = "<bot>:" separator = "\n" prompt = [] for item in messages: prefix = user_prefix if item["role"] == "user" else assistant_prefix prompt.append(f"{prefix}{item['content']}") prompt.append(assistant_prefix) prompt = separator.join(prompt) model_path="IDEA-CCNL/Ziya2-13B-Chat" model = AutoModelForCausalLM.from_pretrained(model_path,torch_dtype=torch.bfloat16).to(device) tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) generate_ids = model.generate( input_ids, max_new_tokens=512, do_sample = True, top_p = 0.9, temperature = 0.85, repetition_penalty=1.05, eos_token_id=tokenizer.encode("</s>"), ) output = tokenizer.batch_decode(generate_ids)[0] print(output) ``` 上面是简单的问答示例,其他更多prompt和玩法,感兴趣的朋友可以下载下来自行发掘。 The above is a simple example of question answering. For more prompts and creative ways to use the model, interested individuals can download it and explore further on their own. ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2210.08590): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): 欢迎引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
null
Non_BioNLP
# Ziya2-13B-Chat - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) # 姜子牙系列模型 - [Ziya-LLaMA-13B-v1.1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1) - [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward) - [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1) - [Ziya-Writing-LLaMa-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1) - [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1) - [Ziya-Coding-15B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Coding-15B-v1) - [Ziya-Coding-34B-v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0) - [Ziya2-13B-Base](https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base) - [Ziya-Reader-13B-v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Reader-13B-v1.0) ## 简介 Brief Introduction Ziya2-13B-Chat是基于Ziya2-13B-Base的对话模型,在30万高质量的通用指令微调数据以及40万知识增强的指令微调数据上进行了有监督训练,并且在数万条高质量人类偏好数据训练的奖励模型上进行了全参数的人类反馈强化学习训练。 Ziya2-13B-Chat is a chat version of Ziya2-13B-Base. Ziya2-13B-Chat was fine-tuned on 300,000 high-quality general instruction data as well as 400,000 knowledge-enhanced instruction data, and then trained with full-parameter RLHF on a feedback model trained on tens of thousands of high-quality human preference data. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | |:----------:|:-------:|:---------:|:--------:|:------------:|:---------------:| | 通用 General | AGI模型 | 姜子牙 Ziya | LLaMA2 | 13B | English&Chinese | ## 模型信息 Model Information ### 继续预训练 Continual Pretraining Meta在2023年7月份发布了Llama2系列大模型,相比于LLaMA1的1.4万亿Token数据,Llama2预训练的Token达到了2万亿,并在各个榜单中明显超过LLaMA1。 Meta released the Llama2 series of large models in July 2023, with pre-trained tokens reaching 2 trillion compared to Llama1's 1.4 trillion tokens, significantly outperforming Llama1 in various rankings. Ziya2-13B-Base沿用了Ziya-LLaMA-13B高效的中文编解码方式,但采取了更优化的初始化算法使得初始训练loss更低。同时,我们对Fengshen-PT继续训练框架进行了优化,效率方面,整合了FlashAttention2、Apex RMS norm等技术来帮助提升效率,对比Ziya-LLaMA-13B训练速度提升38%(163 TFLOPS/per gpu/per sec)。稳定性方面,我们采取BF16进行训练,修复了底层分布式框架的bug,确保模型能够持续稳定训练,解决了Ziya-LLaMA-13B遇到的训练后期不稳定的问题,并在7.25号进行了直播,最终完成了全部数据的继续训练。我们也发现,模型效果还有进一步提升的趋势,后续也会对Ziya2-13B-Base进行继续优化。 Ziya2-13B-Base retained the efficient Chinese encoding and decoding techniques of Ziya-LLaMA-13B, but employed a more optimized initialization algorithm to achieve lower initial training loss. Additionally, we optimized the Fengshen-PT fine-tuning framework. In terms of efficiency, we integrated technologies such as FlashAttention2 and Apex RMS norm to boost efficiency, resulting in a 38% increase in training speed compared to Ziya-LLaMA-13B (163 TFLOPS per GPU per second). For stability, we used BF16 for training, fixed underlying distributed framework bugs to ensure consistent model training, and resolved the late-stage instability issues encountered in the training of Ziya-LLaMA-13B. We also conducted a live broadcast on July 25th to complete the continued training of all data. We have observed a trend towards further improvements in model performance and plan to continue optimizing Ziya2-13B-Base in the future. 
![loss曲线](./img2.png) ### 指令微调 Supervised Fine-tuning 依托于Ziya2-13B-Base强大的基础能力,我们优化了SFT阶段的训练策略。 我们发现高质量和多样的任务指令数据能够最大程度地激发预训练阶段所学到的知识。因此,我们利用Evol-Instruct的方法,对我们收集到的指令数据集进行了数据增强,并利用奖励模型筛选出了高质量的样本。最终,我们从2000万的指令数据集中,构造得到了30万高质量的通用指令微调数据,涵盖了问答、推理、代码、常识、对话、写作、自然语言理解、安全性等广泛的任务。 此外,我们发现在有监督微调阶段,引入知识增强训练,可以进一步提升模型的效果。我们利用检索模块,显式地将与指令有关的知识拼到上下文后进行训练。在这一部分,我们构造了约10万条知识增强的指令样本。 最终,我们在经过了300B token预训练的Ziya2-13B-Base模型的基础上,使用约40万的指令样本,使用8k的上下文窗口,经过两个epoch的训练得到SFT阶段的模型。 Based on the strong capability of Ziya2-13B-Base, we optimized the training strategy for the supervised fine-tuning phase (SFT). We found that high-quality and varied task instruction data maximizes the stimulation of the knowledge learned in the pre-training phase. Therefore, we utilized the Evol-Instruct approach to augment our collected instruction dataset with data and filtered out high-quality samples using a reward model. We eventually constructed 300,000 high-quality general-purpose instruction fine-tuning data from a 20 million instruction dataset, covering a wide range of tasks such as QA, reasoning, coding, common sense, dialog, writing, natural language understanding, security, etc. In addition, we find that the introduction of knowledge-enhanced training can further improve the model. We used the retrieval module to obtain knowledge related to the questions and concatenated their text into the context of the training data. In this section, we constructed about 100,000 samples of knowledge-enhanced instructions. Finally, we obtained the SFT model after two epochs of training using about 400,000 instruction samples with a context window of 8k, based on the Ziya2-13B-Base model that had been pre-trained with 300B tokens. ### 人类反馈学习 Reinforcement learning from Human Feedback 基于SFT阶段的模型,Ziya2-13B-Chat针对多种问答、写作以及模型安全性的任务上进行了人类偏好的对齐。我们自行采集了数万条高质量人类偏好数据,使用Ziya2-13B-Base训练了人类偏好反馈模型,在各任务的偏好数据上达到了72%以上的准确率。 Based on SFT model, Ziya2-13B-Chat was aligned for human preferences on a variety of Q&A, writing, and safety tasks. We collected tens of thousands of high-quality human preference data on our own and trained a human preference feedback model using Ziya2-13B-Base, achieving over 72% accuracy on preference data across tasks. | 任务类型 task | 偏好识别准确率 Acc| |:----------:|:-------:| | 日常问答 Daily QA | 76.8% | | 知识问答 Knowledge Quizzing | 76.7% | | 日常写作 Daily Writing | 82.3% | | 任务型写作 Task-based Writing | 72.7% | | 故事写作 Story Writing | 75.1% | | 角色扮演 Role-playinh | 77.6% | | 安全类 Safety & Harmlessness| 72.0% | 基于 Fengshen-RLHF 框架,Ziya2-13B-Chat使用以上人类偏好反馈模型进行了人类反馈强化学习,使模型输出更贴合人类偏好的同时具有更高的安全性。 Using Fengshen-RLHF Framework, Ziya2-13B-Chat used the above feedback model for reinforcement learning, making itself more closely match human preferences with higher security. ### 效果评估 Performance 我们在涵盖了常识问答、写作、数学推理、自然语言理解、安全等多种任务的通用能力测试集上进行了人工评估。最终,Ziya2-13B-Chat模型与Ziya-LlaMA-13B-v1.1模型在side-by-side评测下取得了66.5%的胜率,并对人类反馈强化学习前的版本取得了58.4%的胜率。 We conducted human evaluations of Ziya2-13B-Chat on a variety of tasks covering knowledge quizzing, writing, mathematical reasoning, natural language understanding, security, etc. Ziya2-13B-Chat achieved a 66.5% win rate against Ziya-LlaMA-13B-v1.1 under side-by-side comparison, and a 58.4% win rate against the version before performing RLHF. | | Better | Worse | Same | Win Rate| |:----------:|:-------:|:---------:|:--------:|:------------:| | v.s. Ziya-LlaMA-13B-v1.1 | 53.2% | 20.3% | 26.5% | 66.5% | | v.s. 
w/o RLHF | 37.5% | 20.8% | 41.7% | 58.4% | ## 使用 Usage Ziya2-13B-Chat采用"\<human\>:"和"\<bot\>:"作为用户和模型的角色识别Prompt,使用"\n"分隔不同角色对话内容。 在推理时,需要将"\<human\>:"和"\<bot\>:"作为前缀分别拼接至用户问题和模型回复的前面,并使用"\n"串连各对话内容。 Ziya2-13B-Chat adopts "\<human\>:" and "\<bot\>:" as the role recognition prompts for users and models, and uses "\n" to separate the contents of different roles. When doing inference, "\<human\>:" and "\<bot\>:" need to be concatenated as prefixes in front of the user's question and the model's reply respectively, and "\n" is used to join the contents of each role. 以下为具体使用方法: Following are the details of how to use it: ```python3 from transformers import AutoTokenizer, AutoModelForCausalLM import torch device = torch.device("cuda") messages = [{"role": "user", "content": "手机如果贴膜贴了一张防指纹的钢化膜,那屏幕指纹解锁还有效吗?"}] user_prefix = "<human>:" assistant_prefix = "<bot>:" separator = "\n" prompt = [] for item in messages: prefix = user_prefix if item["role"] == "user" else assistant_prefix prompt.append(f"{prefix}{item['content']}") prompt.append(assistant_prefix) prompt = separator.join(prompt) model_path="IDEA-CCNL/Ziya2-13B-Chat" model = AutoModelForCausalLM.from_pretrained(model_path,torch_dtype=torch.bfloat16).to(device) tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) generate_ids = model.generate( input_ids, max_new_tokens=512, do_sample = True, top_p = 0.9, temperature = 0.85, repetition_penalty=1.05, eos_token_id=tokenizer.encode("</s>"), ) output = tokenizer.batch_decode(generate_ids)[0] print(output) ``` 上面是简单的问答示例,其他更多prompt和玩法,感兴趣的朋友可以下载下来自行发掘。 The above is a simple example of question answering. For more prompts and creative ways to use the model, interested individuals can download it and explore further on their own. ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2210.08590): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): 欢迎引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
{"language": ["en", "zh", "cn"], "license": "apache-2.0", "tags": ["transformer", "封神榜"], "tasks": ["text-generation"], "model_type": ["gpt", "llama"], "domain": ["nlp"]}
task
[ "QUESTION_ANSWERING" ]
43,598
Helsinki-NLP/opus-mt-tc-big-fr-zle
Helsinki-NLP
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "fr", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-24T12:29:13Z
2023-10-10T11:11:18+00:00
29
0
--- language: - be - fr - ru - uk - zle license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-big-fr-zle results: - task: type: translation name: Translation fra-rus dataset: name: flores101-devtest type: flores_101 args: fra rus devtest metrics: - type: bleu value: 25.8 name: BLEU - type: bleu value: 23.1 name: BLEU - task: type: translation name: Translation fra-bel dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: fra-bel metrics: - type: bleu value: 31.1 name: BLEU - type: bleu value: 46.1 name: BLEU - type: bleu value: 39.9 name: BLEU - task: type: translation name: Translation fra-rus dataset: name: newstest2012 type: wmt-2012-news args: fra-rus metrics: - type: bleu value: 23.1 name: BLEU - task: type: translation name: Translation fra-rus dataset: name: newstest2013 type: wmt-2013-news args: fra-rus metrics: - type: bleu value: 24.8 name: BLEU --- # opus-mt-tc-big-fr-zle Neural machine translation model for translating from French (fr) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): fra * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT fra-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Ils ont acheté un très bon appareil photo.", ">>ukr<< Il s'est soudain mis à pleuvoir." ] model_name = "pytorch-models/opus-mt-tc-big-fr-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Они купили очень хорошую камеру. # Раптом почався дощ. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-zle") print(pipe(">>rus<< Ils ont acheté un très bon appareil photo.")) # expected output: Они купили очень хорошую камеру. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-bel | tatoeba-test-v2021-08-07 | 0.52711 | 31.1 | 283 | 1703 | | fra-rus | tatoeba-test-v2021-08-07 | 0.66502 | 46.1 | 11490 | 70123 | | fra-ukr | tatoeba-test-v2021-08-07 | 0.61860 | 39.9 | 10035 | 54372 | | fra-rus | flores101-devtest | 0.54106 | 25.8 | 1012 | 23295 | | fra-ukr | flores101-devtest | 0.52733 | 23.1 | 1012 | 22810 | | fra-rus | newstest2012 | 0.51254 | 23.1 | 3003 | 64790 | | fra-rus | newstest2013 | 0.52342 | 24.8 | 3000 | 58560 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:05:04 EET 2022 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-big-fr-zle Neural machine translation model for translating from French (fr) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-23 * source language(s): fra * target language(s): bel rus ukr * valid target language labels: >>bel<< >>rus<< >>ukr<< * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.zip) * more information released models: [OPUS-MT fra-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-zle/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Ils ont acheté un très bon appareil photo.", ">>ukr<< Il s'est soudain mis à pleuvoir." ] model_name = "pytorch-models/opus-mt-tc-big-fr-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Они купили очень хорошую камеру. # Раптом почався дощ. 
``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-zle") print(pipe(">>rus<< Ils ont acheté un très bon appareil photo.")) # expected output: Они купили очень хорошую камеру. ``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fra-bel | tatoeba-test-v2021-08-07 | 0.52711 | 31.1 | 283 | 1703 | | fra-rus | tatoeba-test-v2021-08-07 | 0.66502 | 46.1 | 11490 | 70123 | | fra-ukr | tatoeba-test-v2021-08-07 | 0.61860 | 39.9 | 10035 | 54372 | | fra-rus | flores101-devtest | 0.54106 | 25.8 | 1012 | 23295 | | fra-ukr | flores101-devtest | 0.52733 | 23.1 | 1012 | 22810 | | fra-rus | newstest2012 | 0.51254 | 23.1 | 3003 | 64790 | | fra-rus | newstest2013 | 0.52342 | 24.8 | 3000 | 58560 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:05:04 EET 2022 * port machine: LM0-400-22516.local
{"language": ["be", "fr", "ru", "uk", "zle"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-big-fr-zle", "results": [{"task": {"type": "translation", "name": "Translation fra-rus"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "fra rus devtest"}, "metrics": [{"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation fra-bel"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "fra-bel"}, "metrics": [{"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "bleu", "value": 46.1, "name": "BLEU"}, {"type": "bleu", "value": 39.9, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation fra-rus"}, "dataset": {"name": "newstest2012", "type": "wmt-2012-news", "args": "fra-rus"}, "metrics": [{"type": "bleu", "value": 23.1, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation fra-rus"}, "dataset": {"name": "newstest2013", "type": "wmt-2013-news", "args": "fra-rus"}, "metrics": [{"type": "bleu", "value": 24.8, "name": "BLEU"}]}]}]}
task
[ "TRANSLATION" ]
43,599
SEBIS/code_trans_t5_base_source_code_summarization_csharp
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T05:12:35+00:00
175
1
--- tags: - summarization widget: - text: public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; } --- # CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization csharp dataset. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
null
Non_BioNLP
# CodeTrans model for source code summarization csharp Pretrained model on programming language csharp using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. ## Model description This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization csharp dataset. ## Intended uses & limitations The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp", skip_special_tokens=True), device=0 ) tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
task
[ "SUMMARIZATION" ]
43,600
Saxo/Linkbricks-Horizon-AI-Llama-3.3-Japanese-70B-sft-dpo-base
Saxo
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "en", "jp", "cn", "dataset:Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset", "dataset:Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset", "dataset:Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled", "dataset:Saxo/ko-news-corpus-1", "dataset:Saxo/ko-news-corpus-2", "dataset:Saxo/ko-news-corpus-3", "dataset:Saxo/ko-news-corpus-4", "dataset:Saxo/ko-news-corpus-5", "dataset:Saxo/ko-news-corpus-6", "dataset:Saxo/ko-news-corpus-7", "dataset:Saxo/ko-news-corpus-8", "dataset:Saxo/ko-news-corpus-9", "dataset:maywell/ko_Ultrafeedback_binarized", "dataset:youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo", "dataset:lilacai/glaive-function-calling-v2-sharegpt", "dataset:kuotient/gsm8k-ko", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-11T14:34:37Z
2024-12-24T23:32:09+00:00
10
1
--- base_model: meta-llama/Llama-3.3-70B-Instruct datasets: - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled - Saxo/ko-news-corpus-1 - Saxo/ko-news-corpus-2 - Saxo/ko-news-corpus-3 - Saxo/ko-news-corpus-4 - Saxo/ko-news-corpus-5 - Saxo/ko-news-corpus-6 - Saxo/ko-news-corpus-7 - Saxo/ko-news-corpus-8 - Saxo/ko-news-corpus-9 - maywell/ko_Ultrafeedback_binarized - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo - lilacai/glaive-function-calling-v2-sharegpt - kuotient/gsm8k-ko language: - ko - en - jp - cn library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- # Model Card for Model ID <div align="center"> <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" /> </div> AIとビッグデータ分析の専門企業であるLinkbricksのデータサイエンティストであるジ・ユンソン(Saxo)ディレクターが <br> meta-llama/Llama-3.3-70B-Instructベースモデルを使用し、H100-80G 8個を通じて約 日本語 SFT->DPOした日本語強化言語モデル。<br> 2千万件の日本ニュース及びウィキコーパスを基準に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学及び論理判断データを通じて、日中韓英言語クロス補強処理と複雑な論理問題にも対応できるように訓練したモデルです。 -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br> -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br> -128k-Context Window<br> -Function Call<br> -128k-Context Window<br> -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br> -「transformers_version」: 「4.46.3」<br> <br><br> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 이사가 <br> meta-llama/Llama-3.3-70B-Instruct 베이스모델을 사용해서 H100-80G 8개를 통해 일본어 SFT->DPO 한 일본어 강화 언어 모델<br> 2천만건의 일본 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Function Call 및 Tool Calling 지원<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -"transformers_version": "4.46.3"<br> <br><br> Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br> Japanese SFT->DPO training model based on meta-llama/Llama-3.3-70B-Instruct through 8 H100-80Gs as a Japanese boosting language model <br> It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 20M Japanese news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br> -Tokenizer uses the base model without word expansion<br> -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -Function Calling<br> -128k-Context Window<br> -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
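The card above describes the model's capabilities (128k context, function calling, Japanese/Korean/Chinese/English cross-training) but includes no usage code. Below is a minimal, hypothetical sketch of loading it with Hugging Face transformers and prompting through the tokenizer's chat template; the chat-template availability, dtype, device placement, and generation parameters are assumptions, not settings documented by the author.

```python
# Hypothetical usage sketch for this model with transformers. The chat template,
# dtype, device_map and generation parameters are assumptions; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Llama-3.3-Japanese-70B-sft-dpo-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 70B model needs multiple GPUs or CPU offloading
)

messages = [{"role": "user", "content": "日本語で簡単に自己紹介してください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```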
null
Non_BioNLP
# Model Card for Model ID <div align="center"> <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" /> </div> AIとビッグデータ分析の専門企業であるLinkbricksのデータサイエンティストであるジ・ユンソン(Saxo)ディレクターが <br> meta-llama/Llama-3.3-70B-Instructベースモデルを使用し、H100-80G 8個を通じて約 日本語 SFT->DPOした日本語強化言語モデル。<br> 2千万件の日本ニュース及びウィキコーパスを基準に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学及び論理判断データを通じて、日中韓英言語クロス補強処理と複雑な論理問題にも対応できるように訓練したモデルです。 -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br> -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br> -128k-Context Window<br> -Function Call<br> -128k-Context Window<br> -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br> -「transformers_version」: 「4.46.3」<br> <br><br> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 이사가 <br> meta-llama/Llama-3.3-70B-Instruct 베이스모델을 사용해서 H100-80G 8개를 통해 일본어 SFT->DPO 한 일본어 강화 언어 모델<br> 2천만건의 일본 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Function Call 및 Tool Calling 지원<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -"transformers_version": "4.46.3"<br> <br><br> Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br> Japanese SFT->DPO training model based on meta-llama/Llama-3.3-70B-Instruct through 8 H100-80Gs as a Japanese boosting language model <br> It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 20M Japanese news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br> -Tokenizer uses the base model without word expansion<br> -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -Function Calling<br> -128k-Context Window<br> -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
{"base_model": "meta-llama/Llama-3.3-70B-Instruct", "datasets": ["Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset", "Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset", "Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled", "Saxo/ko-news-corpus-1", "Saxo/ko-news-corpus-2", "Saxo/ko-news-corpus-3", "Saxo/ko-news-corpus-4", "Saxo/ko-news-corpus-5", "Saxo/ko-news-corpus-6", "Saxo/ko-news-corpus-7", "Saxo/ko-news-corpus-8", "Saxo/ko-news-corpus-9", "maywell/ko_Ultrafeedback_binarized", "youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo", "lilacai/glaive-function-calling-v2-sharegpt", "kuotient/gsm8k-ko"], "language": ["ko", "en", "jp", "cn"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "TRANSLATION", "SUMMARIZATION" ]
43,601
dawn78/minilm6_perfumerecommender_v4
dawn78
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:116121", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-02-05T16:10:54Z
2025-02-05T16:10:57+00:00
5
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:116121 - loss:CosineSimilarityLoss widget: - source_sentence: jv x nj silver by john varvatos is a captivating mens fragrance that portrays an invigorating blend of natural elements evoking the essence of a fresh sunkissed oasis expertly crafted by perfumers carlos viñals and nathalie benareau this scent strikes a unique balance between mineral and aromatic qualities drawing inspiration from the rugged coastline and lush greenery upon wearing users describe the fragrance as vibrant yet grounding with an appealing citrus brightness that is punctuated by earthy undertones many reviewers highlight its refreshing quality perfect for both day and evening wear making it an excellent choice for any occasion the scents sophisticated woody depth offers a touch of elegance seamlessly transitioning from a lively burst to a warm embrace with an overall rating of 37 out of 5 from a diverse range of wearers jv x nj silver is considered longlasting and versatile fragrance enthusiasts appreciate its unique character and the balanced interplay of freshness and warmth making it a modern classic for the contemporary man who embraces both adventure and refined charm sentences: - petitgrain - styrax - pear blossom - source_sentence: red wood by dsquared2 inspired by the vibrant and adventurous spirit of nature red wood by dsquared2 is a captivating womens fragrance that embodies a bold yet elegant character launched in 2019 this scent effortlessly weaving together fruity and floral elements evokes a sense of freshness balanced with warmth users often describe it as radiant and uplifting with a pleasant interplay of sweet and spicy nuances that create an aura of sophistication the robust foundation of wood and musk lends a comforting depth while whispers of floral delicacy accentuate its femininity many reviews highlight its versatility making it suitable for both day and evening wear and appreciate its moderate longevity with some users noting that it lingers pleasantly without being overpowering overall red wood is celebrated for its ability to evoke a sense of confidence and charisma appealing to those who wish to make an impression with a fragrance that feels both polished and approachable sentences: - cyclamen - resins - davana - source_sentence: exalt nuit by navitus parfums invites you into a luxurious evening realm where warmth and mystique entwine this unisex fragrance exudes an opulent charm combining rich cacao and aromatic spices that create a captivating scent profile users describe exalt nuit as a perfect companion for a night out with many noting its alluring depth that evokes feelings of sophistication and allure the warm spicy and woody notes impart a sense of comfort and intimacy making it ideal for cooler weather reviewers rave about its intriguing balance that feels both exotic and familiar with a distinctive smoky nuance adding to its uniqueness while some appreciate its moderate longevity others enjoy how it evolves throughout the evening transitioning from a rich inviting aura to a more subtle refined finish crafted by the talented duo francis kurkdjian and jérôme di marino this fragrance promises to envelop you in an enchanting embrace perfect for those seeking an essence that resonates with individual elegance 
and understated confidence whether at a lavish gala or a cozy gathering exalt nuit is poised to leave an impression that lingers in the air long after youve departed sentences: - oud - gunflint - persimmon - source_sentence: leau dissey pour homme wood wood by issey miyake is a captivating fragrance that embodies an adventurous spirit and a connection to nature this scent is a harmonious blend of vibrant citrus and warm spices evoking the essence of a sunlit forest users describe it as fresh yet deeply grounded reflecting an elegant masculinity that is approachable and refined the fragrance is celebrated for its woodsy character interlaced with hints of aromatic complexity providing a unique olfactory experience that feels both invigorating and comforting reviewers often note its versatility making it suitable for both daily wear and special occasions seamlessly transitioning between casual outings and more formal settings with a rating of 384 out of 5 this scent has garnered positive feedback for its longevity and sillage allowing it to leave a lasting impression without overwhelming the senses overall leau dissey pour homme wood wood is a sophisticated choice for the modern man who revels in the beauty of nature and the elegance of simplicity sentences: - carnation - pink grapefruit - capsicum - source_sentence: scentini citrus chill by avon invites you into a vibrant sunsoaked escape with its exuberant blend of fruity and floral notes that perfectly capture the essence of a tropical paradise users describe this fragrance as refreshingly lively with a juicy brightness that invigorates the senses and uplifts the spirit its playful heart reveals a delicate floral charm which balances the effervescent citrus opening infusing the scent with a lighthearted and carefree vibe ideal for warm weather and casual outings this fragrance has garnered mixed reviews where many appreciate its refreshing quality and the delightful burst of sweetness it offers while some find its longevity to be moderate others revel in its cheerful presence that brings forth a feeling of joy and celebration overall scentini citrus chill is a delightful choice for those seeking a versatile easygoing fragrance that evokes the blissful feeling of a sunny day sentences: - coriander seed - marshmallow - frangipani model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.36641281050343105 name: Pearson Cosine - type: spearman_cosine value: 0.20018342620535076 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'scentini citrus chill by avon invites you into a vibrant sunsoaked escape with its exuberant blend of fruity and floral notes that perfectly capture the essence of a tropical paradise users describe this fragrance as refreshingly lively with a juicy brightness that invigorates the senses and uplifts the spirit its playful heart reveals a delicate floral charm which balances the effervescent citrus opening infusing the scent with a lighthearted and carefree vibe ideal for warm weather and casual outings this fragrance has garnered mixed reviews where many appreciate its refreshing quality and the delightful burst of sweetness it offers while some find its longevity to be moderate others revel in its cheerful presence that brings forth a feeling of joy and celebration overall scentini citrus chill is a delightful choice for those seeking a versatile easygoing fragrance that evokes the blissful feeling of a sunny day', 'frangipani', 'coriander seed', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.3664 | | **spearman_cosine** | **0.2002** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 116,121 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 12 tokens</li><li>mean: 181.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.26 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.03</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------|:-----------------| | <code>rose hubris by ex nihilo is an enchanting unisex fragrance that beautifully marries the essence of lush florals with earthy undertones this scent released in 2014 exudes an inviting warmth and sophistication making it a perfect choice for those who appreciate depth in their fragrance users have noted its elegant balance between sweetness and earthiness with a prominent emphasis on a decadent floral heart that captivates the senses the mood of rose hubris is often described as both luxurious and introspective ideal for evening wear or special occasions reviewers highlight its complexity noting that it evolves gracefully on the skin revealing its musky character and rich woody base as time passes while some cherish its remarkable 
longevity others find its presence to be a touch introspective adding an air of mystery without being overwhelming in essence rose hubris stands out as a signature scent for those who seek a fragrance that is both beautifully floral and ruggedly grounded embodyi...</code> | <code>baies rose</code> | <code>0.0</code> | | <code>l a glow by jennifer lopez is an enchanting fragrance that captures a playful and vibrant essence with its luscious blend of fruity sweetness and delicate floral notes this scent evokes a sense of effortless femininity and youthful exuberance the initial burst of succulent berries and cherries creates an inviting and radiant atmosphere while hints of soft flowers bring a romantic touch to the heart of the fragrance users have described l a glow as a delightful and uplifting scent perfect for everyday wear many appreciate its joyful character and the way it captures attention without overwhelming the musky undertones add a warm depth leaving a lingering impression that balances lightness and sophistication with a solid rating from a diverse audience this fragrance is celebrated for its versatility and longlasting wear making it a perfect companion for both casual outings and special occasions</code> | <code>cypriol</code> | <code>0.0</code> | | <code>eternal magic by avon is an enchanting fragrance designed for the modern woman evoking a sense of elegant allure and mystique released in 2010 this captivating scent weaves together a tapestry of soft florals and warm vanilla presenting a beautifully balanced olfactory experience users frequently describe it as delicate yet assertive with powdery nuances that wrap around the senses like a gentle embrace the fragrance exudes a charming freshness making it suitable for both everyday wear and special occasions many appreciate its romantic character often highlighting the sophisticated interplay of floral delicacies intertwined with rich woody undertones despite its lightness it has garnered attention for its longevity with wearers relishing how the scent evolves throughout the day a frequent sentiment among users is the feeling of wearing a personal aura that captivates those around leaving a soft yet unforgettable impression eternal magic is not just a scent its a celebration of feminini...</code> | <code>cranberry</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - 
`log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | 0.0276 | 100 | - | 0.0722 | | 0.0551 | 200 | - | 0.1077 | | 0.0827 | 300 | - | 0.1314 | | 0.1102 | 400 | - | 0.1352 | | 0.1378 | 500 | 0.0285 | 0.1434 | | 0.1653 | 600 | - | 0.1604 | | 0.1929 | 700 | - | 0.1678 | | 0.2204 | 800 | - | 0.1695 | | 0.2480 | 900 | - | 0.1709 | | 0.2756 | 1000 | 0.0253 | 0.1690 | | 0.3031 | 1100 | - | 0.1709 | | 0.3307 | 1200 | - | 0.1786 | | 0.3582 | 1300 | - | 0.1794 | | 0.3858 | 1400 | - | 0.1733 | | 0.4133 | 1500 | 0.0252 | 0.1799 | | 0.4409 | 1600 | - | 0.1795 | | 0.4684 | 1700 | - | 0.1847 | | 0.4960 | 1800 | - | 0.1871 | | 0.5236 | 1900 | - | 0.1876 | | 0.5511 | 2000 | 0.024 | 0.1848 | | 0.5787 | 2100 | - | 0.1897 | | 0.6062 | 2200 | - 
| 0.1929 | | 0.6338 | 2300 | - | 0.1943 | | 0.6613 | 2400 | - | 0.1938 | | 0.6889 | 2500 | 0.023 | 0.1938 | | 0.7165 | 2600 | - | 0.1963 | | 0.7440 | 2700 | - | 0.1969 | | 0.7716 | 2800 | - | 0.1946 | | 0.7991 | 2900 | - | 0.1961 | | 0.8267 | 3000 | 0.0209 | 0.1968 | | 0.8542 | 3100 | - | 0.1971 | | 0.8818 | 3200 | - | 0.1979 | | 0.9093 | 3300 | - | 0.1988 | | 0.9369 | 3400 | - | 0.1996 | | 0.9645 | 3500 | 0.0237 | 0.1999 | | 0.9920 | 3600 | - | 0.2002 | | 1.0 | 3629 | - | 0.2002 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
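Beyond the generic encode/similarity snippet in the card, a natural use of this checkpoint (given its perfume-recommender name and its training pairs of fragrance descriptions against note names) is ranking candidate notes for a description. The description and note vocabulary below are invented for illustration; this is a sketch, not part of the original card.

```python
from sentence_transformers import SentenceTransformer, util

# Repo id taken from this record's "id" field.
model = SentenceTransformer("dawn78/minilm6_perfumerecommender_v4")

# Invented description and candidate note vocabulary, for illustration only.
description = "a fresh aquatic scent with citrus brightness and a soft musky drydown"
candidate_notes = ["bergamot", "sea salt", "white musk", "oud", "cinnamon"]

desc_emb = model.encode(description, convert_to_tensor=True)
note_embs = model.encode(candidate_notes, convert_to_tensor=True)

# Rank notes by cosine similarity to the description.
scores = util.cos_sim(desc_emb, note_embs)[0]
for note, score in sorted(zip(candidate_notes, scores.tolist()), key=lambda x: -x[1]):
    print(f"{note}: {score:.3f}")
```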
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'scentini citrus chill by avon invites you into a vibrant sunsoaked escape with its exuberant blend of fruity and floral notes that perfectly capture the essence of a tropical paradise users describe this fragrance as refreshingly lively with a juicy brightness that invigorates the senses and uplifts the spirit its playful heart reveals a delicate floral charm which balances the effervescent citrus opening infusing the scent with a lighthearted and carefree vibe ideal for warm weather and casual outings this fragrance has garnered mixed reviews where many appreciate its refreshing quality and the delightful burst of sweetness it offers while some find its longevity to be moderate others revel in its cheerful presence that brings forth a feeling of joy and celebration overall scentini citrus chill is a delightful choice for those seeking a versatile easygoing fragrance that evokes the blissful feeling of a sunny day', 'frangipani', 'coriander seed', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.3664 | | **spearman_cosine** | **0.2002** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 116,121 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 12 tokens</li><li>mean: 181.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.26 tokens</li><li>max: 8 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.03</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------|:-----------------| | <code>rose hubris by ex nihilo is an enchanting unisex fragrance that beautifully marries the essence of lush florals with earthy undertones this scent released in 2014 exudes an inviting warmth and sophistication making it a perfect choice for those who appreciate depth in their fragrance users have noted its elegant balance between sweetness and earthiness with a prominent emphasis on a decadent floral heart that captivates the senses the mood of rose hubris is often described as both luxurious and introspective ideal for evening wear or special occasions reviewers highlight its complexity noting that it evolves gracefully on the skin revealing its musky character and rich woody base as time passes while some cherish its remarkable 
longevity others find its presence to be a touch introspective adding an air of mystery without being overwhelming in essence rose hubris stands out as a signature scent for those who seek a fragrance that is both beautifully floral and ruggedly grounded embodyi...</code> | <code>baies rose</code> | <code>0.0</code> | | <code>l a glow by jennifer lopez is an enchanting fragrance that captures a playful and vibrant essence with its luscious blend of fruity sweetness and delicate floral notes this scent evokes a sense of effortless femininity and youthful exuberance the initial burst of succulent berries and cherries creates an inviting and radiant atmosphere while hints of soft flowers bring a romantic touch to the heart of the fragrance users have described l a glow as a delightful and uplifting scent perfect for everyday wear many appreciate its joyful character and the way it captures attention without overwhelming the musky undertones add a warm depth leaving a lingering impression that balances lightness and sophistication with a solid rating from a diverse audience this fragrance is celebrated for its versatility and longlasting wear making it a perfect companion for both casual outings and special occasions</code> | <code>cypriol</code> | <code>0.0</code> | | <code>eternal magic by avon is an enchanting fragrance designed for the modern woman evoking a sense of elegant allure and mystique released in 2010 this captivating scent weaves together a tapestry of soft florals and warm vanilla presenting a beautifully balanced olfactory experience users frequently describe it as delicate yet assertive with powdery nuances that wrap around the senses like a gentle embrace the fragrance exudes a charming freshness making it suitable for both everyday wear and special occasions many appreciate its romantic character often highlighting the sophisticated interplay of floral delicacies intertwined with rich woody undertones despite its lightness it has garnered attention for its longevity with wearers relishing how the scent evolves throughout the day a frequent sentiment among users is the feeling of wearing a personal aura that captivates those around leaving a soft yet unforgettable impression eternal magic is not just a scent its a celebration of feminini...</code> | <code>cranberry</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - 
`log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | 0.0276 | 100 | - | 0.0722 | | 0.0551 | 200 | - | 0.1077 | | 0.0827 | 300 | - | 0.1314 | | 0.1102 | 400 | - | 0.1352 | | 0.1378 | 500 | 0.0285 | 0.1434 | | 0.1653 | 600 | - | 0.1604 | | 0.1929 | 700 | - | 0.1678 | | 0.2204 | 800 | - | 0.1695 | | 0.2480 | 900 | - | 0.1709 | | 0.2756 | 1000 | 0.0253 | 0.1690 | | 0.3031 | 1100 | - | 0.1709 | | 0.3307 | 1200 | - | 0.1786 | | 0.3582 | 1300 | - | 0.1794 | | 0.3858 | 1400 | - | 0.1733 | | 0.4133 | 1500 | 0.0252 | 0.1799 | | 0.4409 | 1600 | - | 0.1795 | | 0.4684 | 1700 | - | 0.1847 | | 0.4960 | 1800 | - | 0.1871 | | 0.5236 | 1900 | - | 0.1876 | | 0.5511 | 2000 | 0.024 | 0.1848 | | 0.5787 | 2100 | - | 0.1897 | | 0.6062 | 2200 | - 
| 0.1929 | | 0.6338 | 2300 | - | 0.1943 | | 0.6613 | 2400 | - | 0.1938 | | 0.6889 | 2500 | 0.023 | 0.1938 | | 0.7165 | 2600 | - | 0.1963 | | 0.7440 | 2700 | - | 0.1969 | | 0.7716 | 2800 | - | 0.1946 | | 0.7991 | 2900 | - | 0.1961 | | 0.8267 | 3000 | 0.0209 | 0.1968 | | 0.8542 | 3100 | - | 0.1971 | | 0.8818 | 3200 | - | 0.1979 | | 0.9093 | 3300 | - | 0.1988 | | 0.9369 | 3400 | - | 0.1996 | | 0.9645 | 3500 | 0.0237 | 0.1999 | | 0.9920 | 3600 | - | 0.2002 | | 1.0 | 3629 | - | 0.2002 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
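The card's Evaluation section reports Pearson/Spearman cosine from an `EmbeddingSimilarityEvaluator` but shows no code. Below is a minimal sketch of running that kind of measurement; the two-pair dev set is invented purely for illustration, so the numbers it yields are meaningless.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("dawn78/minilm6_perfumerecommender_v4")  # id from this record

# Invented two-pair dev set: (description, note) pairs with gold relevance scores.
sentences1 = [
    "a bright citrus opening that settles into warm woody depth",
    "a powdery floral wrapped in soft vanilla",
]
sentences2 = ["petitgrain", "cranberry"]
gold_scores = [1.0, 0.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="perfume-dev")
print(evaluator(model))  # dict containing Pearson/Spearman cosine, as in the table above
```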
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:116121", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "jv x nj silver by john varvatos is a captivating mens fragrance that portrays an invigorating blend of natural elements evoking the essence of a fresh sunkissed oasis expertly crafted by perfumers carlos viñals and nathalie benareau this scent strikes a unique balance between mineral and aromatic qualities drawing inspiration from the rugged coastline and lush greenery upon wearing users describe the fragrance as vibrant yet grounding with an appealing citrus brightness that is punctuated by earthy undertones many reviewers highlight its refreshing quality perfect for both day and evening wear making it an excellent choice for any occasion the scents sophisticated woody depth offers a touch of elegance seamlessly transitioning from a lively burst to a warm embrace with an overall rating of 37 out of 5 from a diverse range of wearers jv x nj silver is considered longlasting and versatile fragrance enthusiasts appreciate its unique character and the balanced interplay of freshness and warmth making it a modern classic for the contemporary man who embraces both adventure and refined charm", "sentences": ["petitgrain", "styrax", "pear blossom"]}, {"source_sentence": "red wood by dsquared2 inspired by the vibrant and adventurous spirit of nature red wood by dsquared2 is a captivating womens fragrance that embodies a bold yet elegant character launched in 2019 this scent effortlessly weaving together fruity and floral elements evokes a sense of freshness balanced with warmth users often describe it as radiant and uplifting with a pleasant interplay of sweet and spicy nuances that create an aura of sophistication the robust foundation of wood and musk lends a comforting depth while whispers of floral delicacy accentuate its femininity many reviews highlight its versatility making it suitable for both day and evening wear and appreciate its moderate longevity with some users noting that it lingers pleasantly without being overpowering overall red wood is celebrated for its ability to evoke a sense of confidence and charisma appealing to those who wish to make an impression with a fragrance that feels both polished and approachable", "sentences": ["cyclamen", "resins", "davana"]}, {"source_sentence": "exalt nuit by navitus parfums invites you into a luxurious evening realm where warmth and mystique entwine this unisex fragrance exudes an opulent charm combining rich cacao and aromatic spices that create a captivating scent profile users describe exalt nuit as a perfect companion for a night out with many noting its alluring depth that evokes feelings of sophistication and allure the warm spicy and woody notes impart a sense of comfort and intimacy making it ideal for cooler weather reviewers rave about its intriguing balance that feels both exotic and familiar with a distinctive smoky nuance adding to its uniqueness while some appreciate its moderate longevity others enjoy how it evolves throughout the evening transitioning from a rich inviting aura to a more subtle refined finish crafted by the talented duo francis kurkdjian and jérôme di marino this fragrance promises to envelop you in an enchanting embrace perfect for those 
seeking an essence that resonates with individual elegance and understated confidence whether at a lavish gala or a cozy gathering exalt nuit is poised to leave an impression that lingers in the air long after youve departed", "sentences": ["oud", "gunflint", "persimmon"]}, {"source_sentence": "leau dissey pour homme wood wood by issey miyake is a captivating fragrance that embodies an adventurous spirit and a connection to nature this scent is a harmonious blend of vibrant citrus and warm spices evoking the essence of a sunlit forest users describe it as fresh yet deeply grounded reflecting an elegant masculinity that is approachable and refined the fragrance is celebrated for its woodsy character interlaced with hints of aromatic complexity providing a unique olfactory experience that feels both invigorating and comforting reviewers often note its versatility making it suitable for both daily wear and special occasions seamlessly transitioning between casual outings and more formal settings with a rating of 384 out of 5 this scent has garnered positive feedback for its longevity and sillage allowing it to leave a lasting impression without overwhelming the senses overall leau dissey pour homme wood wood is a sophisticated choice for the modern man who revels in the beauty of nature and the elegance of simplicity", "sentences": ["carnation", "pink grapefruit", "capsicum"]}, {"source_sentence": "scentini citrus chill by avon invites you into a vibrant sunsoaked escape with its exuberant blend of fruity and floral notes that perfectly capture the essence of a tropical paradise users describe this fragrance as refreshingly lively with a juicy brightness that invigorates the senses and uplifts the spirit its playful heart reveals a delicate floral charm which balances the effervescent citrus opening infusing the scent with a lighthearted and carefree vibe ideal for warm weather and casual outings this fragrance has garnered mixed reviews where many appreciate its refreshing quality and the delightful burst of sweetness it offers while some find its longevity to be moderate others revel in its cheerful presence that brings forth a feeling of joy and celebration overall scentini citrus chill is a delightful choice for those seeking a versatile easygoing fragrance that evokes the blissful feeling of a sunny day", "sentences": ["coriander seed", "marshmallow", "frangipani"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": 0.36641281050343105, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.20018342620535076, "name": "Spearman Cosine"}]}]}]}
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
43,602
HrayrM/distilbert-base-uncased-distilled-clinc
HrayrM
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-06-10T05:50:40Z
2022-06-10T06:31:28+00:00
20
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - type: accuracy value: 0.9429032258064516 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3209 - Accuracy: 0.9429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0228 | 1.0 | 318 | 2.2545 | 0.7548 | | 1.7605 | 2.0 | 636 | 1.2040 | 0.8513 | | 0.959 | 3.0 | 954 | 0.6910 | 0.9123 | | 0.5707 | 4.0 | 1272 | 0.4821 | 0.9294 | | 0.3877 | 5.0 | 1590 | 0.3890 | 0.9394 | | 0.3025 | 6.0 | 1908 | 0.3476 | 0.9410 | | 0.258 | 7.0 | 2226 | 0.3264 | 0.9432 | | 0.2384 | 8.0 | 2544 | 0.3209 | 0.9429 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0 - Datasets 2.2.2 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3209 - Accuracy: 0.9429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0228 | 1.0 | 318 | 2.2545 | 0.7548 | | 1.7605 | 2.0 | 636 | 1.2040 | 0.8513 | | 0.959 | 3.0 | 954 | 0.6910 | 0.9123 | | 0.5707 | 4.0 | 1272 | 0.4821 | 0.9294 | | 0.3877 | 5.0 | 1590 | 0.3890 | 0.9394 | | 0.3025 | 6.0 | 1908 | 0.3476 | 0.9410 | | 0.258 | 7.0 | 2226 | 0.3264 | 0.9432 | | 0.2384 | 8.0 | 2544 | 0.3209 | 0.9429 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0 - Datasets 2.2.2 - Tokenizers 0.10.3
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9429032258064516, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,603
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_sentiment
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:chiragasarpota/scotus-bert", "base_model:adapter:chiragasarpota/scotus-bert", "license:apache-2.0", "model-index", "region:us" ]
2024-02-29T13:07:58Z
2024-02-29T13:08:04+00:00
0
0
--- base_model: chiragasarpota/scotus-bert datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_sentiment results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: sentiment split: validation args: sentiment metrics: - type: accuracy value: 0.504 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_sentiment This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.4095 | None | 0 | | 0.4345 | 1.0178 | 0 | | 0.4345 | 1.0150 | 1 | | 0.504 | 0.9958 | 2 | | 0.504 | 0.9875 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_sentiment This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.4095 | None | 0 | | 0.4345 | 1.0178 | 0 | | 0.4345 | 1.0150 | 1 | | 0.504 | 0.9958 | 2 | | 0.504 | 0.9875 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "chiragasarpota/scotus-bert", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "chiragasarpota_scotus-bert-finetuned-lora-tweet_eval_sentiment", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "sentiment", "split": "validation", "args": "sentiment"}, "metrics": [{"type": "accuracy", "value": 0.504, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,604
MultiBertGunjanPatrick/multiberts-seed-2-900k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-2", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T05:04:18+00:00
123
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-2 --- # MultiBERTs Seed 2 Checkpoint 900k (uncased) Seed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-900k') model = BertModel.from_pretrained("multiberts-seed-2-900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. 
This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
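To make the 80/10/10 masking rule described above concrete, here is a minimal illustrative sketch of BERT-style token masking. The function name and signature are mine, chosen for illustration only; they are not taken from the MultiBERTs codebase.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Illustrative BERT-style masking: select ~15% of tokens; of those,
    80% become [MASK], 10% become a random vocabulary token, and 10% are left unchanged."""
    masked = list(tokens)
    labels = [None] * len(tokens)  # positions the model must predict (original tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:
                masked[i] = mask_token          # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = random.choice(vocab)  # 10%: replace with a random token
            # else: 10%: keep the original token
    return masked, labels
```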
null
Non_BioNLP
# MultiBERTs Seed 2 Checkpoint 900k (uncased) Seed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-900k') model = BertModel.from_pretrained("multiberts-seed-2-900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"]}
task
[ "QUESTION_ANSWERING" ]
43,606
desarrolloasesoreslocales/SetFitPruebaMulti
desarrolloasesoreslocales
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-10-10T11:58:07Z
2023-10-10T11:58:28+00:00
5
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # desarrolloasesoreslocales/SetFitPruebaMulti This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("desarrolloasesoreslocales/SetFitPruebaMulti") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# desarrolloasesoreslocales/SetFitPruebaMulti This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("desarrolloasesoreslocales/SetFitPruebaMulti") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
43,607
gchhablani/fnet-base-finetuned-cola
gchhablani
text-classification
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-09-20T09:07:35+00:00
109
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer - fnet-bert-base-comparison model-index: - name: fnet-base-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: GLUE COLA type: glue args: cola metrics: - type: matthews_correlation value: 0.35940659235571387 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-cola This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.5929 - Matthews Correlation: 0.3594 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \ --model_name_or_path google/fnet-base \ --task_name cola \ --do_train \ --do_eval \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir fnet-base-finetuned-cola \ --push_to_hub \ --hub_strategy all_checkpoints \ --logging_strategy epoch \ --save_strategy epoch \ --evaluation_strategy epoch ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5895 | 1.0 | 535 | 0.6146 | 0.1699 | | 0.4656 | 2.0 | 1070 | 0.5667 | 0.3047 | | 0.3329 | 3.0 | 1605 | 0.5929 | 0.3594 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
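The card above documents training and evaluation but gives no inference snippet. As a hedged sketch, the fine-tuned checkpoint should load through the standard transformers text-classification pipeline; the label names (for example LABEL_0 for unacceptable and LABEL_1 for acceptable) depend on the checkpoint's config and are an assumption here.

```python
from transformers import pipeline

# Load the fine-tuned FNet CoLA checkpoint for acceptability classification.
classifier = pipeline("text-classification", model="gchhablani/fnet-base-finetuned-cola")

# CoLA is a grammatical-acceptability task; the label-to-meaning mapping
# follows whatever is stored in the model config, not this comment.
print(classifier("The book was written by the author."))
```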
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-cola This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.5929 - Matthews Correlation: 0.3594 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \ --model_name_or_path google/fnet-base \ --task_name cola \ --do_train \ --do_eval \ --max_seq_length 512 \ --per_device_train_batch_size 16 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir fnet-base-finetuned-cola \ --push_to_hub \ --hub_strategy all_checkpoints \ --logging_strategy epoch \ --save_strategy epoch \ --evaluation_strategy epoch ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5895 | 1.0 | 535 | 0.6146 | 0.1699 | | 0.4656 | 2.0 | 1070 | 0.5667 | 0.3047 | | 0.3329 | 3.0 | 1605 | 0.5929 | 0.3594 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer", "fnet-bert-base-comparison"], "model-index": [{"name": "fnet-base-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.35940659235571387, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,608
methodya/arabic-summarizer-philosophy-v2
methodya
text2text-generation
[ "safetensors", "mbart", "philosophy", "summarizer", "summarization", "text2text-generation", "ar", "base_model:Jezia/AraBART-finetuned-wiki-ar", "base_model:finetune:Jezia/AraBART-finetuned-wiki-ar", "license:apache-2.0", "region:us" ]
2024-12-18T10:01:22Z
2024-12-18T10:04:22+00:00
18
1
--- base_model: - Jezia/AraBART-finetuned-wiki-ar language: - ar license: apache-2.0 pipeline_tag: text2text-generation tags: - philosophy - summarizer - summarization ---
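This card carries only metadata, so the following usage sketch is an assumption: it presumes the standard transformers summarization pipeline works with this AraBART-based seq2seq checkpoint, and the generation lengths and example text are placeholders of my own.

```python
from transformers import pipeline

# AraBART-based Arabic summarizer; length settings below are illustrative defaults.
summarizer = pipeline("summarization", model="methodya/arabic-summarizer-philosophy-v2")

text = "ضع هنا النص الفلسفي العربي المراد تلخيصه."  # placeholder Arabic input
print(summarizer(text, max_length=128, min_length=20)[0]["summary_text"])
```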
null
Non_BioNLP
{"base_model": ["Jezia/AraBART-finetuned-wiki-ar"], "language": ["ar"], "license": "apache-2.0", "pipeline_tag": "text2text-generation", "tags": ["philosophy", "summarizer", "summarization"]}
task
[ "SUMMARIZATION" ]
43,609
akshay7/phrasebank-sentiment-analysis
akshay7
text-classification
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-04T05:48:35Z
2023-11-04T05:49:15+00:00
93
0
--- base_model: bert-base-uncased datasets: - financial_phrasebank license: apache-2.0 metrics: - f1 - accuracy tags: - generated_from_trainer model-index: - name: phrasebank-sentiment-analysis results: - task: type: text-classification name: Text Classification dataset: name: financial_phrasebank type: financial_phrasebank config: sentences_50agree split: train args: sentences_50agree metrics: - type: f1 value: 0.8419033782047481 name: F1 - type: accuracy value: 0.8541953232462174 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phrasebank-sentiment-analysis This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.5105 - F1: 0.8419 - Accuracy: 0.8542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 0.6046 | 0.94 | 100 | 0.4107 | 0.8173 | 0.8370 | | 0.2873 | 1.89 | 200 | 0.4488 | 0.8266 | 0.8301 | | 0.1469 | 2.83 | 300 | 0.5130 | 0.8420 | 0.8501 | | 0.0762 | 3.77 | 400 | 0.5105 | 0.8419 | 0.8542 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
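The card lists training details but no inference example. A minimal sketch using the transformers pipeline follows; the mapping from returned label ids to negative/neutral/positive sentiment is an assumption and should be checked against the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="akshay7/phrasebank-sentiment-analysis")

# Financial PhraseBank-style sentence; the label returned maps to a sentiment
# class according to the id2label mapping stored in the model config.
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in 2007."))
```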
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phrasebank-sentiment-analysis This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.5105 - F1: 0.8419 - Accuracy: 0.8542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 0.6046 | 0.94 | 100 | 0.4107 | 0.8173 | 0.8370 | | 0.2873 | 1.89 | 200 | 0.4488 | 0.8266 | 0.8301 | | 0.1469 | 2.83 | 300 | 0.5130 | 0.8420 | 0.8501 | | 0.0762 | 3.77 | 400 | 0.5105 | 0.8419 | 0.8542 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
{"base_model": "bert-base-uncased", "datasets": ["financial_phrasebank"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "phrasebank-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "financial_phrasebank", "type": "financial_phrasebank", "config": "sentences_50agree", "split": "train", "args": "sentences_50agree"}, "metrics": [{"type": "f1", "value": 0.8419033782047481, "name": "F1"}, {"type": "accuracy", "value": 0.8541953232462174, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,610
mserloth/distilbert-base-german-cased-v1
mserloth
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain", "dataset:distilbert-base-german-cased-v1/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-03T16:08:33Z
2024-05-03T16:09:51+00:00
6
0
--- datasets: - distilbert-base-german-cased-v1/autotrain-data tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4877149760723114 f1_macro: 0.7832535885167465 f1_micro: 0.780952380952381 f1_weighted: 0.7795579858737753 precision_macro: 0.764884597268017 precision_micro: 0.780952380952381 precision_weighted: 0.779495771931005 recall_macro: 0.8068884256321943 recall_micro: 0.780952380952381 recall_weighted: 0.780952380952381 accuracy: 0.780952380952381
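The AutoTrain card reports validation metrics only. The sketch below assumes the checkpoint can be used through the standard transformers text-classification pipeline; the number and meaning of the output classes are not documented in the card, so the example sentence and label interpretation are placeholders.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mserloth/distilbert-base-german-cased-v1")

# Example German input; the label names returned depend on the AutoTrain label mapping.
print(classifier("Der Kundenservice war ausgezeichnet und sehr hilfsbereit."))
```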
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4877149760723114 f1_macro: 0.7832535885167465 f1_micro: 0.780952380952381 f1_weighted: 0.7795579858737753 precision_macro: 0.764884597268017 precision_micro: 0.780952380952381 precision_weighted: 0.779495771931005 recall_macro: 0.8068884256321943 recall_micro: 0.780952380952381 recall_weighted: 0.780952380952381 accuracy: 0.780952380952381
{"datasets": ["distilbert-base-german-cased-v1/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
43,611
fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Science", "Research", "Academic", "Papers", "Arxiv", "custom_code", "en", "dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-10T12:40:27Z
2024-05-10T12:40:43+00:00
48
1
--- datasets: - fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Science - Research - Academic - Papers - Arxiv --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: academic research papers search engine ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: academic research papers search engine ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-scientific-papers-from-arxiv", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Science", "Research", "Academic", "Papers", "Arxiv"]}
task
[ "TEXT_CLASSIFICATION" ]
43,612
pinzhenchen/sft-lora-de-bloom-3b
pinzhenchen
null
[ "generation", "question answering", "instruction tuning", "de", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us" ]
2024-03-05T23:47:20Z
2024-03-05T23:47:24+00:00
0
0
--- language: - de license: cc-by-nc-4.0 tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) * Instruction tuning language: German * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
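The card defers exact inference instructions to the project's GitHub repository. As a hedged sketch of the pattern it describes (loading the LoRA adapter on top of the base model with transformers and peft), the snippet below should work, but the prompt format is an assumption, since the instruction template used in the repository is not reproduced here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-3b"
adapter_id = "pinzhenchen/sft-lora-de-bloom-3b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

# Illustrative German instruction; the repository may expect a specific prompt template.
prompt = "Erkläre in zwei Sätzen, was Instruction Tuning ist."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```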
null
Non_BioNLP
### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) * Instruction tuning language: German * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
{"language": ["de"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
43,613
gokulsrinivasagan/bert_uncased_L-2_H-512_A-8_qnli
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:google/bert_uncased_L-2_H-512_A-8", "base_model:finetune:google/bert_uncased_L-2_H-512_A-8", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-04T19:41:43Z
2024-12-04T19:48:53+00:00
15
0
--- base_model: google/bert_uncased_L-2_H-512_A-8 datasets: - glue language: - en library_name: transformers license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert_uncased_L-2_H-512_A-8_qnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - type: accuracy value: 0.8405637927878455 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_uncased_L-2_H-512_A-8_qnli This model is a fine-tuned version of [google/bert_uncased_L-2_H-512_A-8](https://huggingface.co/google/bert_uncased_L-2_H-512_A-8) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.8406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4759 | 1.0 | 410 | 0.4065 | 0.8219 | | 0.4 | 2.0 | 820 | 0.3840 | 0.8309 | | 0.3491 | 3.0 | 1230 | 0.3808 | 0.8406 | | 0.3005 | 4.0 | 1640 | 0.4054 | 0.8340 | | 0.256 | 5.0 | 2050 | 0.4341 | 0.8334 | | 0.215 | 6.0 | 2460 | 0.4468 | 0.8321 | | 0.182 | 7.0 | 2870 | 0.5106 | 0.8288 | | 0.1543 | 8.0 | 3280 | 0.5216 | 0.8298 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
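The card gives training results only. The sketch below assumes the checkpoint can be queried through the transformers text-classification pipeline with a question/sentence pair, as QNLI requires; the entailment/not_entailment label names depend on the checkpoint's config and are not guaranteed.

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="gokulsrinivasagan/bert_uncased_L-2_H-512_A-8_qnli")

# QNLI asks whether the sentence answers the question; inputs are passed as a pair.
result = classifier({"text": "What is the capital of France?",
                     "text_pair": "Paris is the capital and largest city of France."})
print(result)
```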
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_uncased_L-2_H-512_A-8_qnli This model is a fine-tuned version of [google/bert_uncased_L-2_H-512_A-8](https://huggingface.co/google/bert_uncased_L-2_H-512_A-8) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.8406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4759 | 1.0 | 410 | 0.4065 | 0.8219 | | 0.4 | 2.0 | 820 | 0.3840 | 0.8309 | | 0.3491 | 3.0 | 1230 | 0.3808 | 0.8406 | | 0.3005 | 4.0 | 1640 | 0.4054 | 0.8340 | | 0.256 | 5.0 | 2050 | 0.4341 | 0.8334 | | 0.215 | 6.0 | 2460 | 0.4468 | 0.8321 | | 0.182 | 7.0 | 2870 | 0.5106 | 0.8288 | | 0.1543 | 8.0 | 3280 | 0.5216 | 0.8298 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
{"base_model": "google/bert_uncased_L-2_H-512_A-8", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-2_H-512_A-8_qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.8405637927878455, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,614
NLPclass/Named_entity_recognition_persian
NLPclass
token-classification
[ "transformers", "safetensors", "bert", "token-classification", "fa", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-16T20:23:16Z
2024-07-17T14:49:43+00:00
122
0
--- language: - fa library_name: transformers license: mit pipeline_tag: token-classification --- Named entity recognition on a Persian dataset. traindataset = 20484 Persian sentences valdataset = 2561 AutoTokenizer=HooshvareLab/bert-fa-base-uncased ner_tags= ['O', 'B-pro', 'I-pro', 'B-pers', 'I-pers', 'B-org', 'I-org', 'B-loc', 'I-loc', 'B-fac', 'I-fac', 'B-event', 'I-event'] training_args= learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=4, weight_decay=0.01 Training Loss=0.001000 sample1: 'entity': 'B-loc', 'score': 0.9998902, 'index': 2, 'word': 'تهران', sample2: 'entity': 'B-pers', 'score': 0.99988234, 'index': 2, 'word': 'عباس', To use this model: from transformers import pipeline pipe = pipeline("token-classification", model="NLPclass/Named_entity_recognition_persian") sentence = "" predicted_ner = pipe(sentence) for entity in predicted_ner: print(f"Entity: {entity['word']}, Label: {entity['entity']}")
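The usage snippet in this card is given without a code fence or an example input. The following is a runnable restatement: the pipeline call and model id are taken from the card, while the Persian example sentence is an illustration of my own.

```python
from transformers import pipeline

pipe = pipeline("token-classification", model="NLPclass/Named_entity_recognition_persian")

sentence = "عباس دیروز به تهران رفت."  # illustrative example containing a person and a location
for entity in pipe(sentence):
    print(f"Entity: {entity['word']}, Label: {entity['entity']}")
```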
null
Non_BioNLP
Named entity recognition on a Persian dataset. traindataset = 20484 Persian sentences valdataset = 2561 AutoTokenizer=HooshvareLab/bert-fa-base-uncased ner_tags= ['O', 'B-pro', 'I-pro', 'B-pers', 'I-pers', 'B-org', 'I-org', 'B-loc', 'I-loc', 'B-fac', 'I-fac', 'B-event', 'I-event'] training_args= learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=4, weight_decay=0.01 Training Loss=0.001000 sample1: 'entity': 'B-loc', 'score': 0.9998902, 'index': 2, 'word': 'تهران', sample2: 'entity': 'B-pers', 'score': 0.99988234, 'index': 2, 'word': 'عباس', To use this model: from transformers import pipeline pipe = pipeline("token-classification", model="NLPclass/Named_entity_recognition_persian") sentence = "" predicted_ner = pipe(sentence) for entity in predicted_ner: print(f"Entity: {entity['word']}, Label: {entity['entity']}")
{"language": ["fa"], "library_name": "transformers", "license": "mit", "pipeline_tag": "token-classification"}
task
[ "NAMED_ENTITY_RECOGNITION" ]
43,615
RichardErkhov/shesshan_-_dumi-llama3.2-3b-instruct-8bits
RichardErkhov
null
[ "safetensors", "llama", "arxiv:2204.05149", "arxiv:2405.16406", "8-bit", "bitsandbytes", "region:us" ]
2025-02-28T06:43:19Z
2025-02-28T06:46:16+00:00
7
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dumi-llama3.2-3b-instruct - bnb 8bits - Model creator: https://huggingface.co/shesshan/ - Original model: https://huggingface.co/shesshan/dumi-llama3.2-3b-instruct/ Original model description: --- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. 
Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. 
Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. 
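To make the three-part scheme above concrete, here is a small, self-contained PyTorch sketch of symmetric 4-bit group-wise weight quantization (group size 32) paired with 8-bit per-token dynamic activation quantization. It is an illustration only: the group-wise and per-token scaling mirror the description above, but the rounding, scale handling, and kernel details of the released ExecuTorch checkpoints are not reproduced here, and the helper names are hypothetical.

```python
# Minimal sketch of 4-bit group-wise weight quantization (group size 32) plus
# 8-bit per-token dynamic activation quantization, in plain PyTorch.
# Illustration only; the released models use ExecuTorch's own quantization kernels.
import torch

def quantize_weights_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit group-wise quantization of a 2-D weight matrix.

    Each row is split into groups of `group_size` input features, and every
    group gets its own scale, mirroring the per-group scheme used for the
    linear layers in the transformer blocks.
    """
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per (row, group); the signed 4-bit range is [-8, 7].
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize_weights(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    out_features, n_groups, group_size = q.shape
    return (q.float() * scales).reshape(out_features, n_groups * group_size)

def quantize_activations_8bit_per_token(x: torch.Tensor):
    """Dynamic 8-bit quantization with one scale per token (one row of x)."""
    scales = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scales), -128, 127).to(torch.int8)
    return q, scales

if __name__ == "__main__":
    torch.manual_seed(0)
    weight = torch.randn(64, 128)      # toy linear layer: 128 -> 64
    activations = torch.randn(4, 128)  # 4 "tokens"

    qw, w_scales = quantize_weights_4bit_groupwise(weight, group_size=32)
    qx, x_scales = quantize_activations_8bit_per_token(activations)

    # Fake-quantized matmul: dequantize both sides and compare with fp32.
    y_ref = activations @ weight.t()
    y_q = (qx.float() * x_scales) @ dequantize_weights(qw, w_scales).t()
    print("max abs error:", (y_ref - y_q).abs().max().item())
```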
### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 
31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. 
Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. 
For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. 
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
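To make the "safety as a system" guidance from the Responsibility & Safety section concrete, below is a minimal sketch of screening user prompts with a Llama Guard classifier before they reach a Llama 3.2 chat model. The checkpoint name (`meta-llama/Llama-Guard-3-1B`), the use of its default chat template, and the "safe"/"unsafe" output convention are assumptions based on the Purple Llama releases rather than details specified in this card; treat it as an outline, not a reference implementation.

```python
# Sketch: screen a user prompt with a Llama Guard model before forwarding it to
# the chat model. Checkpoint name, chat-template behaviour, and the "safe"/"unsafe"
# output convention are assumptions, not specified by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-1B"  # assumed mobile-friendly guard checkpoint

tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def prompt_is_safe(user_message: str) -> bool:
    conversation = [{"role": "user", "content": user_message}]
    input_ids = tokenizer.apply_chat_template(
        conversation, return_tensors="pt"
    ).to(guard.device)
    output = guard.generate(input_ids, max_new_tokens=16, do_sample=False)
    verdict = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    # Llama Guard style classifiers typically answer "safe" or "unsafe" followed by
    # a hazard category such as "S1"; treat anything that is not "safe" as blocked.
    return verdict.strip().lower().startswith("safe")

if __name__ == "__main__":
    if prompt_is_safe("What is the capital of France?"):
        print("Forward the prompt to the Llama 3.2 chat model.")
    else:
        print("Refuse or route the request for review.")
```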
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dumi-llama3.2-3b-instruct - bnb 8bits - Model creator: https://huggingface.co/shesshan/ - Original model: https://huggingface.co/shesshan/dumi-llama3.2-3b-instruct/ Original model description: --- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. 
Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. 
Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. 
### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 
31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. ### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. 
Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons, and we have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
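As a rough, hypothetical sketch of the safety-as-a-system pattern described above — assuming access to the gated `meta-llama/Llama-Guard-3-1B` checkpoint and noting that the exact chat-template format and category labels depend on the Llama Guard version — screening user input before it reaches the model could look like this:

```python
# Hypothetical input-screening step with Llama Guard before calling the main model.
# Assumes access to the gated checkpoint; verify the exact chat-template format and
# output convention ("safe" / "unsafe" plus category codes) for your Guard version.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

GUARD_ID = "meta-llama/Llama-Guard-3-1B"  # the mobile-friendly safeguard mentioned above

tokenizer = AutoTokenizer.from_pretrained(GUARD_ID)
guard = AutoModelForCausalLM.from_pretrained(
    GUARD_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def prompt_is_safe(user_prompt: str) -> bool:
    """Return True if Llama Guard labels the user turn as safe."""
    conversation = [{"role": "user", "content": user_prompt}]
    input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt").to(guard.device)
    output = guard.generate(input_ids, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
    verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return verdict.strip().lower().startswith("safe")

user_prompt = "How do I bake sourdough bread at home?"
if prompt_is_safe(user_prompt):
    print("Prompt passed the safeguard; forward it to the Llama 3.2 model.")
else:
    print("Prompt flagged; return a refusal instead of calling the model.")
```

In a full Llama system the same check would typically also run on the model's responses, with Prompt Guard screening retrieved or tool-provided content for injection attempts.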
{}
task
[ "SUMMARIZATION" ]
43,616
RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-05-12T04:10:40Z
2024-05-12T04:31:15+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2b-translation-v0.103 - bnb 4bits - Model creator: https://huggingface.co/lemon-mint/ - Original model: https://huggingface.co/lemon-mint/gemma-2b-translation-v0.103/ Original model description: --- library_name: transformers language: - ko license: gemma tags: - gemma - pytorch - instruct - finetune - translation widget: - messages: - role: user content: "Hamsters don't eat cats." inference: parameters: max_new_tokens: 2048 base_model: beomi/gemma-ko-2b datasets: - traintogpb/aihub-flores-koen-integrated-sparta-30k pipeline_tag: text-generation --- # Gemma 2B Translation v0.103 - Eval Loss: `1.34507` - Train Loss: `1.40326` - lr: `3e-05` - optimizer: adamw - lr_scheduler_type: cosine ## Prompt Template ``` <bos>### English Hamsters don't eat cats. ### Korean 햄스터는 고양이를 먹지 않습니다.<eos> ``` ## Model Description - **Developed by:** `lemon-mint` - **Model type:** Gemma - **Language(s) (NLP):** English - **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms) - **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)
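A minimal inference sketch, assuming a CUDA GPU with `bitsandbytes` installed and that prompts follow the template shown above; greedy decoding and the token budget are assumptions rather than documented settings:

```python
# Run the pre-quantized bnb-4bit checkpoint with the card's English -> Korean template.
# Requires a CUDA GPU with the bitsandbytes package; the 4-bit config is stored in the
# repository, so from_pretrained loads it without an explicit quantization_config.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-4bits"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Prompt template from the card (the <bos> token is added by the tokenizer).
prompt = "### English\nHamsters don't eat cats.\n\n### Korean\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```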
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2b-translation-v0.103 - bnb 4bits - Model creator: https://huggingface.co/lemon-mint/ - Original model: https://huggingface.co/lemon-mint/gemma-2b-translation-v0.103/ Original model description: --- library_name: transformers language: - ko license: gemma tags: - gemma - pytorch - instruct - finetune - translation widget: - messages: - role: user content: "Hamsters don't eat cats." inference: parameters: max_new_tokens: 2048 base_model: beomi/gemma-ko-2b datasets: - traintogpb/aihub-flores-koen-integrated-sparta-30k pipeline_tag: text-generation --- # Gemma 2B Translation v0.103 - Eval Loss: `1.34507` - Train Loss: `1.40326` - lr: `3e-05` - optimizer: adamw - lr_scheduler_type: cosine ## Prompt Template ``` <bos>### English Hamsters don't eat cats. ### Korean 햄스터는 고양이를 먹지 않습니다.<eos> ``` ## Model Description - **Developed by:** `lemon-mint` - **Model type:** Gemma - **Language(s) (NLP):** English - **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms) - **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)
{}
task
[ "TRANSLATION" ]
43,618
google/pegasus-newsroom
google
summarization
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "en", "arxiv:1912.08777", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2023-01-24T16:42:38+00:00
212
16
--- language: en tags: - summarization --- ### Pegasus Models See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html) Original TF 1 code [here](https://github.com/google-research/pegasus) Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019 Maintained by: [@sshleifer](https://twitter.com/sam_shleifer) Task: Summarization The following is copied from the authors' README. # Mixed & Stochastic Checkpoints We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table. | dataset | C4 | HugeNews | Mixed & Stochastic| | ---- | ---- | ---- | ----| | xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64| | cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30| | newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18| | multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95| | gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76| | wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *| | reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94| | big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *| | arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67| | pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25| | aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51| | billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59| The "Mixed & Stochastic" model has the following changes: - trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). - trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity). - the model uniformly samples a gap sentence ratio between 15% and 45%. - important sentences are sampled using a 20% uniform noise on importance scores. - the sentencepiece tokenizer is updated to be able to encode the newline character. (*) the numbers of the wikihow and big_patent datasets are not comparable because of changes in tokenization and data: - the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information. - we update the BigPatent dataset to preserve casing; some format cleanings are also changed, please refer to the change in TFDS. The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper): trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples). trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity). the model uniformly samples a gap sentence ratio between 15% and 45%. important sentences are sampled using a 20% uniform noise on importance scores. the sentencepiece tokenizer is updated to be able to encode the newline character. Citation ``` @misc{zhang2019pegasus, title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu}, year={2019}, eprint={1912.08777}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
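A minimal usage sketch with the standard `transformers` seq2seq API; the example article and beam-search settings are illustrative, not the evaluation settings from the paper:

```python
# Abstractive summarization with google/pegasus-newsroom (requires sentencepiece).
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL_ID = "google/pegasus-newsroom"
tokenizer = PegasusTokenizer.from_pretrained(MODEL_ID)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_ID)

article = (
    "The city council approved a new budget on Tuesday that increases funding "
    "for public transit and road repairs while keeping property taxes flat."
)
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```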
null
Non_BioNLP
{"language": "en", "tags": ["summarization"]}
task
[ "SUMMARIZATION" ]
43,619
RichardErkhov/M4-ai_-_tau-1.8B-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-10-27T20:28:41Z
2024-10-27T22:26:35+00:00
38
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) tau-1.8B - GGUF - Model creator: https://huggingface.co/M4-ai/ - Original model: https://huggingface.co/M4-ai/tau-1.8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [tau-1.8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q2_K.gguf) | Q2_K | 0.79GB | | [tau-1.8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q3_K_S.gguf) | Q3_K_S | 0.89GB | | [tau-1.8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q3_K.gguf) | Q3_K | 0.95GB | | [tau-1.8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q3_K_M.gguf) | Q3_K_M | 0.95GB | | [tau-1.8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q3_K_L.gguf) | Q3_K_L | 0.98GB | | [tau-1.8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.IQ4_XS.gguf) | IQ4_XS | 1.01GB | | [tau-1.8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q4_0.gguf) | Q4_0 | 1.04GB | | [tau-1.8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.IQ4_NL.gguf) | IQ4_NL | 1.05GB | | [tau-1.8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q4_K_S.gguf) | Q4_K_S | 1.08GB | | [tau-1.8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q4_K.gguf) | Q4_K | 1.13GB | | [tau-1.8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q4_K_M.gguf) | Q4_K_M | 1.13GB | | [tau-1.8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q4_1.gguf) | Q4_1 | 1.13GB | | [tau-1.8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q5_0.gguf) | Q5_0 | 1.22GB | | [tau-1.8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q5_K_S.gguf) | Q5_K_S | 1.24GB | | [tau-1.8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q5_K.gguf) | Q5_K | 1.28GB | | [tau-1.8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q5_K_M.gguf) | Q5_K_M | 1.28GB | | [tau-1.8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q5_1.gguf) | Q5_1 | 1.31GB | | [tau-1.8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q6_K.gguf) | Q6_K | 1.47GB | | [tau-1.8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-1.8B-gguf/blob/main/tau-1.8B.Q8_0.gguf) | Q8_0 | 1.82GB | Original model description: --- language: - en - zh license: other datasets: - Locutusque/UltraTextbooks-2.0 license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE inference: parameters: do_sample: true temperature: 0.8 top_p: 0.95 top_k: 40 max_new_tokens: 250 repetition_penalty: 1.1 model-index: - name: tau-1.8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 37.2 name: normalized accuracy source: url: 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 60.26 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 45.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 39.72 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 61.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.17 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/tau-1.8B name: Open LLM Leaderboard --- # tau-1.8B ## Model Details - **Model Name:** tau-1.8B - **Base Model:** Qwen1.5-1.8B - **Dataset:** UltraTextbooks-2.0 - **Model Size:** 1.8B parameters - **Model Type:** Language Model - **Training Procedure:** Further pre-training of Qwen1.5-1.8B on UltraTextbooks-2.0. ## Model Use tau-1.8B is designed to be a general-purpose language model with enhanced capabilities in the domains of machine learning, mathematics, and coding. It can be used for a wide range of natural language processing tasks, such as: - Educational question answering - Text summarization - Content generation for educational purposes - Code understanding and generation - Mathematical problem solving The model's exposure to the diverse content in the UltraTextbooks-2.0 dataset makes it particularly well-suited for applications in educational technology and research. ## Training Data tau-1.8B was further pre-trained on the UltraTextbooks-2.0 dataset, which is an expanded version of the original UltraTextbooks dataset. UltraTextbooks-2.0 incorporates additional high-quality synthetic and human-written textbooks from various sources on the Hugging Face platform, with a focus on increasing the diversity of content in the domains of machine learning, mathematics, and coding. For more details on the dataset, please refer to the [UltraTextbooks-2.0 Dataset Card](https://huggingface.co/datasets/Locutusque/UltraTextbooks-2.0). ## Performance and Limitations Refer to [Evaluation](##Evaluation) for evaluations. It is essential to note that the model may still exhibit biases or inaccuracies present in the training data. Users are encouraged to critically evaluate the model's outputs and report any issues to facilitate continuous improvement. 
## Environmental Impact The training of tau-1.8B required computational resources that contribute to the model's overall environmental impact. However, efforts were made to optimize the training process and minimize the carbon footprint. ## Ethical Considerations tau-1.8B was trained on a diverse dataset that may contain biases and inaccuracies. Users should be aware of these potential limitations and use the model responsibly. The model should not be used for tasks that could cause harm or discriminate against individuals or groups. ## Evaluation | Metric |Value| |---------------------------------|----:| |Avg. |45.73| |AI2 Reasoning Challenge (25-Shot)|37.20| |HellaSwag (10-Shot) |60.26| |MMLU (5-Shot) |45.96| |TruthfulQA (0-shot) |39.72| |Winogrande (5-shot) |61.09| |GSM8k (5-shot) |30.17|
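One possible way to consume the quants listed above from Python, assuming `llama-cpp-python` and `huggingface_hub` are installed; the sampling values mirror the inference block in the original card's metadata, and the prompt is only an illustration:

```python
# Download one of the quants listed above and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/M4-ai_-_tau-1.8B-gguf",
    filename="tau-1.8B.Q4_K_M.gguf",  # "fast, recommended" per the quant table
)
llm = Llama(model_path=gguf_path, n_ctx=2048)

out = llm(
    "Explain the difference between supervised and unsupervised learning in two sentences.",
    max_tokens=250, temperature=0.8, top_p=0.95, top_k=40, repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```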
null
Non_BioNLP
{}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,620
mradermacher/Qwen-2.5-3b-Quran-GGUF
mradermacher
null
[ "transformers", "gguf", "id", "dataset:emhaihsan/quran-indonesia-tafseer-translation", "base_model:Ellbendls/Qwen-2.5-3b-Quran", "base_model:quantized:Ellbendls/Qwen-2.5-3b-Quran", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
2025-03-14T18:05:53Z
2025-03-14T18:35:39+00:00
211
0
--- base_model: Ellbendls/Qwen-2.5-3b-Quran datasets: - emhaihsan/quran-indonesia-tafseer-translation language: - id library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Ellbendls/Qwen-2.5-3b-Quran <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-3b-Quran-GGUF/resolve/main/Qwen-2.5-3b-Quran.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
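A minimal sketch using `llama-cpp-python`'s Hub helper, assuming a reasonably recent version with `Llama.from_pretrained` and that the GGUF carries the base model's chat template; the Indonesian prompt is only an illustration:

```python
# Pull a quant straight from the Hub and use the chat-completion helper.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen-2.5-3b-Quran-GGUF",
    filename="Qwen-2.5-3b-Quran.Q4_K_M.gguf",  # "fast, recommended" in the table
    n_ctx=4096,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Jelaskan makna Surah Al-Fatihah ayat 1."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```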
null
Non_BioNLP
{"base_model": "Ellbendls/Qwen-2.5-3b-Quran", "datasets": ["emhaihsan/quran-indonesia-tafseer-translation"], "language": ["id"], "library_name": "transformers", "license": "mit", "quantized_by": "mradermacher"}
task
[ "TRANSLATION" ]
43,621
mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF
mradermacher
translation
[ "transformers", "gguf", "translation", "enko", "ko", "en", "dataset:nayohan/aihub-en-ko-translation-12m", "dataset:nayohan/instruction_en_ko_translation_1.4m", "dataset:Translation-EnKo/trc_uniform_313k_eval_45_filtered", "base_model:Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b", "base_model:quantized:Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b", "endpoints_compatible", "region:us", "conversational" ]
2025-01-14T23:56:38Z
2025-01-15T02:24:42+00:00
783
0
--- base_model: Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b datasets: - nayohan/aihub-en-ko-translation-12m - nayohan/instruction_en_ko_translation_1.4m - Translation-EnKo/trc_uniform_313k_eval_45_filtered language: - en - ko library_name: transformers tags: - translation - enko - ko quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q5_K_M.gguf) | Q5_K_M | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q6_K.gguf) | Q6_K | 6.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.Q8_0.gguf) | Q8_0 | 8.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF/resolve/main/exaone3-instrucTrans-v2-enko-7.8b.f16.gguf) | f16 | 15.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
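A hedged English-to-Korean usage sketch, assuming a llama.cpp build recent enough to support the EXAONE architecture and that the GGUF embeds the base model's chat template; the system instruction below is a guess at the instrucTrans-style prompt and should be checked against the base model card:

```python
# English -> Korean translation with one of the quants above via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/exaone3-instrucTrans-v2-enko-7.8b-GGUF",
    filename="exaone3-instrucTrans-v2-enko-7.8b.Q4_K_M.gguf",
    n_ctx=4096,
)
resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "주어진 영어 문장을 한국어로 번역하세요."},  # assumed instruction
        {"role": "user", "content": "The weather is expected to improve over the weekend."},
    ],
    max_tokens=256, temperature=0.0,
)
print(resp["choices"][0]["message"]["content"])
```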
null
Non_BioNLP
{"base_model": "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b", "datasets": ["nayohan/aihub-en-ko-translation-12m", "nayohan/instruction_en_ko_translation_1.4m", "Translation-EnKo/trc_uniform_313k_eval_45_filtered"], "language": ["en", "ko"], "library_name": "transformers", "tags": ["translation", "enko", "ko"], "quantized_by": "mradermacher"}
task
[ "TRANSLATION" ]
43,622
cbpuschmann/klimacoder-v0-2-1
cbpuschmann
text-classification
[ "tensorboard", "safetensors", "mpnet", "autotrain", "text-classification", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "region:us" ]
2024-12-02T20:07:42Z
2024-12-02T20:08:27+00:00
7
0
--- base_model: sentence-transformers/all-mpnet-base-v2 tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.05203713849186897 f1: 0.9890664423885618 precision: 0.9865771812080537 recall: 0.9915682967959528 auc: 0.9994258079801541 accuracy: 0.9891485809682805
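A minimal inference sketch; the label names come from whatever `id2label` mapping the AutoTrain run stored in the config, which this card does not document, and the German example sentence is only an illustration:

```python
# Text-classification inference; label names come from the model's id2label config.
from transformers import pipeline

clf = pipeline("text-classification", model="cbpuschmann/klimacoder-v0-2-1")
print(clf("Die Regierung plant ein Tempolimit auf Autobahnen, um Emissionen zu senken."))
```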
null
Non_BioNLP
{"base_model": "sentence-transformers/all-mpnet-base-v2", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
43,623
gokulsrinivasagan/bert_tiny_lda_20_v1_book_qqp
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/bert_tiny_lda_20_v1_book", "base_model:finetune:gokulsrinivasagan/bert_tiny_lda_20_v1_book", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-10T14:33:46Z
2024-12-10T15:02:44+00:00
12
0
--- base_model: gokulsrinivasagan/bert_tiny_lda_20_v1_book datasets: - glue language: - en library_name: transformers metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: bert_tiny_lda_20_v1_book_qqp results: - task: type: text-classification name: Text Classification dataset: name: GLUE QQP type: glue args: qqp metrics: - type: accuracy value: 0.8735345040811279 name: Accuracy - type: f1 value: 0.8332844240112166 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_tiny_lda_20_v1_book_qqp This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda_20_v1_book](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda_20_v1_book) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.2950 - Accuracy: 0.8735 - F1: 0.8333 - Combined Score: 0.8534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.4144 | 1.0 | 1422 | 0.3439 | 0.8441 | 0.7835 | 0.8138 | | 0.3055 | 2.0 | 2844 | 0.3083 | 0.8620 | 0.8218 | 0.8419 | | 0.2498 | 3.0 | 4266 | 0.2950 | 0.8735 | 0.8333 | 0.8534 | | 0.2046 | 4.0 | 5688 | 0.3069 | 0.8750 | 0.8280 | 0.8515 | | 0.1669 | 5.0 | 7110 | 0.3275 | 0.8777 | 0.8379 | 0.8578 | | 0.1361 | 6.0 | 8532 | 0.3683 | 0.8778 | 0.8399 | 0.8589 | | 0.1117 | 7.0 | 9954 | 0.3594 | 0.8801 | 0.8414 | 0.8608 | | 0.0936 | 8.0 | 11376 | 0.4124 | 0.8779 | 0.8415 | 0.8597 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
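QQP is a sentence-pair task, so inference passes both questions as a pair; the sketch below assumes the usual GLUE convention that label 1 means "duplicate", which should be confirmed against the model config:

```python
# Duplicate-question detection with the GLUE QQP fine-tune described above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "gokulsrinivasagan/bert_tiny_lda_20_v1_book_qqp"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({"not_duplicate": probs[0].item(), "duplicate": probs[1].item()})  # assumes GLUE label order
```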
null
Non_BioNLP
{"base_model": "gokulsrinivasagan/bert_tiny_lda_20_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_tiny_lda_20_v1_book_qqp", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "args": "qqp"}, "metrics": [{"type": "accuracy", "value": 0.8735345040811279, "name": "Accuracy"}, {"type": "f1", "value": 0.8332844240112166, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,624
Gaphilly/gpt2_shakespeare_cp4350
Gaphilly
text-generation
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-29T21:37:23Z
2024-04-30T19:37:49+00:00
61
0
--- language: - en library_name: transformers license: mit pipeline_tag: text-generation widget: - text: 'Is this review positive or negative? Review: Best cast iron skillet you will ever buy.' example_title: Sentiment analysis - text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ... example_title: Coreference resolution - text: 'On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ...' example_title: Logic puzzles - text: The two men running to become New York City's next mayor will face off in their first debate Wednesday night ... example_title: Reading comprehension ---
null
Non_BioNLP
{"language": ["en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "text-generation", "widget": [{"text": "Is this review positive or negative? Review: Best cast iron skillet you will ever buy.", "example_title": "Sentiment analysis"}, {"text": "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ...", "example_title": "Coreference resolution"}, {"text": "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ...", "example_title": "Logic puzzles"}, {"text": "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ...", "example_title": "Reading comprehension"}]}
task
[ "COREFERENCE_RESOLUTION" ]
43,625
AlGe/deberta-v3-large_seq_ratio
AlGe
text-classification
[ "transformers", "pytorch", "deberta-v2", "text-classification", "de", "en", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-12T13:45:07Z
2023-08-12T16:15:15+00:00
0
0
--- language: - de - en library_name: transformers license: cc-by-sa-4.0 metrics: - rmse - mse - r2 pipeline_tag: text-classification widget: - text: Das ist mir sehr schwer gefallen – warum auch immer - mir das vorzustellen. Klar ist es eigentlich eine negative Geschichte aber so richtig … Gut, es ist halt nur ein Koffer, also auch nur Sachen drin.. das ist auch nur Geld und mir gehts gut. So richtig wusste ich nicht, wohin ich gerade geflogen bin oder überhaupt, ob allein o der mit irgend jemandem oder irgendetwas. Ich hatte da so ein bisschen eine Blockade. Ist mir schwierig gefallen, es mir vorzustellen. Es war jetzt nicht sonderlich dramatisch. example_title: Beispiel 1 - text: Bevor ich meinen Hund kaufte bin ich immer alleine durch den Park gelaufen. Gestern war ich aber mit dem Hund losgelaufen. Das Wetter war sehr schön, nicht wie sonst im Winter. Ich weiß nicht genau. Mir fällt sonst nichts dazu ein. Wir trafen auf mehrere Spaziergänger. Ein Mann mit seinem Kind. Das Kind hat ein Eis gegessen. example_title: Beispiel 2 --- # DebertaV3ForAIS Revised !!!! ## Model Description The model is based on the DeBERTa-v3 architecture, a transformer-based model that performs text classification tasks. It has been fine-tuned on a specific dataset to perform text classification with high accuracy. ## Model Configuration - Model Name: AlGe AIS - Model Type: DeBERTa-v3 - Transformers Version: 4.21.3 ### Model Parameters - Hidden Size: 1024 - Intermediate Size: 4096 - Number of Hidden Layers: 24 - Number of Attention Heads: 16 - Attention Dropout Probability: 0.1 - Hidden Dropout Probability: 0.1 - Hidden Activation Function: GELU - Pooler Hidden Size: 1024 - Pooler Dropout Probability: 0 - Layer Normalization Epsilon: 1e-07 - Position Biased Input: False - Maximum Position Embeddings: 512 - Maximum Relative Positions: -1 - Position Attention Types: p2c, c2p - Relative Attention: True - Share Attention Key: True - Normalization of Relative Embeddings: Layer Normalization - Vocabulary Size: 128100 - Padding Token ID: 0 - Type Vocabulary Size: 0 - Torch Data Type: float32 - Transformers Version: 4.21.3 ## Training Details The model was trained on a specific dataset with the following settings: - Sequence Length: 512 - Score: True - Extended: True - Label: True - Extended: True ## Evaluation Results | Metric | Score | |---------------|----------| | MSE | 0.0111 | | RMSE | 0.1055 | | MAE | 0.0776 | | R2 | 0.6485 | | Cronbach's Alpha | 0.8937 | ## Acknowledgments This model was pretraine by the authors of DeBERTa-v3 and adapted for text classification tasks. We thank the authors for their contributions to the field of NLP and the Hugging Face team for providing the base DeBERTa-v3 model. ## Disclaimer The model card provides information about the specific configuration and training of the model. However, please note that the performance of the model may vary depending on the specific use case and input data. It is advisable to evaluate the model's performance in your specific context before deploying it in production.
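A hedged inference sketch: because the card reports regression metrics (MSE/RMSE/R2), it assumes a single-logit regression head, which should be verified via `num_labels` in the model config before relying on the score:

```python
# Score a German text with the AIS model; assumes a single-logit regression head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "AlGe/deberta-v3-large_seq_ratio"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

text = "Gestern war ich mit dem Hund im Park spazieren. Das Wetter war sehr schön."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score: {score:.3f}")
```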
null
Non_BioNLP
{"language": ["de", "en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["rmse", "mse", "r2"], "pipeline_tag": "text-classification", "widget": [{"text": "Das ist mir sehr schwer gefallen – warum auch immer - mir das vorzustellen. Klar ist es eigentlich eine negative Geschichte aber so richtig … Gut, es ist halt nur ein Koffer, also auch nur Sachen drin.. das ist auch nur Geld und mir gehts gut. So richtig wusste ich nicht, wohin ich gerade geflogen bin oder überhaupt, ob allein o der mit irgend jemandem oder irgendetwas. Ich hatte da so ein bisschen eine Blockade. Ist mir schwierig gefallen, es mir vorzustellen. Es war jetzt nicht sonderlich dramatisch.", "example_title": "Beispiel 1"}, {"text": "Bevor ich meinen Hund kaufte bin ich immer alleine durch den Park gelaufen. Gestern war ich aber mit dem Hund losgelaufen. Das Wetter war sehr schön, nicht wie sonst im Winter. Ich weiß nicht genau. Mir fällt sonst nichts dazu ein. Wir trafen auf mehrere Spaziergänger. Ein Mann mit seinem Kind. Das Kind hat ein Eis gegessen.", "example_title": "Beispiel 2"}]}
task
[ "TEXT_CLASSIFICATION" ]
43,626
aubreyrhodes/test_trainer
aubreyrhodes
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:yelp_review_full", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-25T15:10:12Z
2023-10-25T15:17:34+00:00
8
0
--- base_model: bert-base-cased datasets: - yelp_review_full license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: test_trainer results: - task: type: text-classification name: Text Classification dataset: name: yelp_review_full type: yelp_review_full config: yelp_review_full split: test args: yelp_review_full metrics: - type: accuracy value: 0.604 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.3431 - Accuracy: 0.604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.0683 | 0.547 | | No log | 2.0 | 250 | 1.1967 | 0.593 | | No log | 3.0 | 375 | 1.3431 | 0.604 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.2.0.dev20231024 - Datasets 2.14.6 - Tokenizers 0.14.1
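The hyperparameters listed above map directly onto the `transformers` Trainer API. The following is a rough sketch of an equivalent run; the tokenisation step and the 1,000-example subsets (implied by the 125 training steps per epoch at batch size 8) are assumptions, not part of the original card.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Assumed preprocessing: pad/truncate reviews to the model's maximum length.
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# 125 steps per epoch at batch size 8 suggests a ~1,000-example training subset.
train_ds = tokenized["train"].shuffle(seed=42).select(range(1000))
eval_ds = tokenized["test"].shuffle(seed=42).select(range(1000))

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)

args = TrainingArguments(
    output_dir="test_trainer",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds).train()
```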
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. It achieves the following results on the evaluation set: - Loss: 1.3431 - Accuracy: 0.604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.0683 | 0.547 | | No log | 2.0 | 250 | 1.1967 | 0.593 | | No log | 3.0 | 375 | 1.3431 | 0.604 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.2.0.dev20231024 - Datasets 2.14.6 - Tokenizers 0.14.1
{"base_model": "bert-base-cased", "datasets": ["yelp_review_full"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test_trainer", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_review_full", "type": "yelp_review_full", "config": "yelp_review_full", "split": "test", "args": "yelp_review_full"}, "metrics": [{"type": "accuracy", "value": 0.604, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,627
jkhan447/sentiment-model-sample-5-emotion
jkhan447
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-25T05:26:34Z
2022-03-25T08:12:13+00:00
110
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: sentiment-model-sample-5-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.925 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-5-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.4360 - Accuracy: 0.925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
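For completeness, a minimal inference sketch with the `pipeline` API; the checkpoint id comes from this repository, while the example sentence is illustrative and the label mapping depends on the checkpoint's config.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="jkhan447/sentiment-model-sample-5-emotion",
)

print(classifier("I can't wait to see everyone again this weekend!"))
# -> [{'label': ..., 'score': ...}] according to the checkpoint's label mapping
```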
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model-sample-5-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.4360 - Accuracy: 0.925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "sentiment-model-sample-5-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,628
TheBloke/Airoboros-L2-70B-3.1-GGUF
TheBloke
null
[ "transformers", "gguf", "llama", "dataset:jondurbin/airoboros-3.1", "base_model:jondurbin/airoboros-l2-70b-3.1", "base_model:quantized:jondurbin/airoboros-l2-70b-3.1", "license:llama2", "region:us" ]
2023-10-16T14:34:51Z
2023-10-16T14:59:59+00:00
267
3
--- base_model: jondurbin/airoboros-l2-70b-3.1 datasets: - jondurbin/airoboros-3.1 license: llama2 model_name: Airoboros L2 70B 3.1 inference: false model_creator: Jon Durbin model_type: llama prompt_template: "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n\ <</SYS>>\n\n{prompt} [/INST] \n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Airoboros L2 70B 3.1 - GGUF - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Airoboros L2 70B 3.1](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 70B 3.1](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Airoboros-Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [airoboros-l2-70b-3.1.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [airoboros-l2-70b-3.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [airoboros-l2-70b-3.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [airoboros-l2-70b-3.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [airoboros-l2-70b-3.1.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [airoboros-l2-70b-3.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [airoboros-l2-70b-3.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [airoboros-l2-70b-3.1.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [airoboros-l2-70b-3.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [airoboros-l2-70b-3.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | airoboros-l2-70b-3.1.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | airoboros-l2-70b-3.1.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `airoboros-l2-70b-3.1.Q6_K.gguf-split-a` * `airoboros-l2-70b-3.1.Q6_K.gguf-split-b` ### q8_0 Please download: * `airoboros-l2-70b-3.1.Q8_0.gguf-split-a` * `airoboros-l2-70b-3.1.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat airoboros-l2-70b-3.1.Q6_K.gguf-split-* > airoboros-l2-70b-3.1.Q6_K.gguf && rm airoboros-l2-70b-3.1.Q6_K.gguf-split-* cat airoboros-l2-70b-3.1.Q8_0.gguf-split-* > airoboros-l2-70b-3.1.Q8_0.gguf && rm airoboros-l2-70b-3.1.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B airoboros-l2-70b-3.1.Q6_K.gguf-split-a + airoboros-l2-70b-3.1.Q6_K.gguf-split-b airoboros-l2-70b-3.1.Q6_K.gguf del airoboros-l2-70b-3.1.Q6_K.gguf-split-a airoboros-l2-70b-3.1.Q6_K.gguf-split-b COPY /B airoboros-l2-70b-3.1.Q8_0.gguf-split-a + airoboros-l2-70b-3.1.Q8_0.gguf-split-b airoboros-l2-70b-3.1.Q8_0.gguf del airoboros-l2-70b-3.1.Q8_0.gguf-split-a airoboros-l2-70b-3.1.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Airoboros-L2-70B-3.1-GGUF and below it, a specific filename to download, such as: airoboros-l2-70b-3.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF airoboros-l2-70b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF airoboros-l2-70b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 32 -m airoboros-l2-70b-3.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-L2-70B-3.1-GGUF", model_file="airoboros-l2-70b-3.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jon Durbin's Airoboros L2 70B 3.1 ### Overview Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros) #### IMPORTANT NOTE - llama-2 chat format!!! The airoboros 3.1 models use llama-2 chat format, rather than vicuna style user/assistant! This is a breaking change, although most inference systems support llama-2 chat templating. See prompt format below for all of the details. __*There's also a slight bug, you must add a single space after the last `[/INST]`*__ #### Highlights A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras: * More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations. * Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs) * Anonymization, e.g. removing names, IP addresses, and/or dates from text. * Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below) * De-censorship data (not published) This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay. Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format is llama-2 chat. ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` For multi-turn, the prompt format is as follows: ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST] ``` The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1') chat = [ {"role": "system", "content": "You are Bob, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ``` ### Helpful usage tips #### MathJSON Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/ I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py) __Use a very low temperature!__ Example: ``` [INST] <<SYS>> You are an assistant with vast knowledge in all things math. <</SYS>> Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST] ``` Output: ``` The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1. Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr². Solution as MathJSON: <mathjson> [ "Multiply", "Pi", [ "Power", 17.2456, 2 ] ] </mathjson> ``` You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response. #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know."
to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are blueberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities.
Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. 
This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. 
Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. 
This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context): prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ## Multi-step instructions with acknowledgement I don't know how useful this is, really, but I thought I'd add it just in case. Example: ``` { "conversations": [ { "value": "You are a helpful, unbiased, uncensored assistant.", "from": "system" }, { "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "You can only respond using animal-related emojis.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Precede all responses with \"ZOOZOO:\"", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Include at least one bird emoji in your responses.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Describe a trip to a safari park.", "from": "human" }, { "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29", "from": "gpt" } ] } ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions. The 30b model is built on the original llama, which has a strict non-commercial usage restriction. The models with `-l2` in the name have a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The models with `-m-` are mistral-7b (apache 2.0). The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros). The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me. <!-- original-model-card end -->
null
Non_BioNLP
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Airoboros L2 70B 3.1 - GGUF - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Airoboros L2 70B 3.1](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 70B 3.1](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Airoboros-Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [airoboros-l2-70b-3.1.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [airoboros-l2-70b-3.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [airoboros-l2-70b-3.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [airoboros-l2-70b-3.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [airoboros-l2-70b-3.1.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [airoboros-l2-70b-3.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [airoboros-l2-70b-3.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [airoboros-l2-70b-3.1.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [airoboros-l2-70b-3.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [airoboros-l2-70b-3.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1-GGUF/blob/main/airoboros-l2-70b-3.1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | airoboros-l2-70b-3.1.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | airoboros-l2-70b-3.1.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `airoboros-l2-70b-3.1.Q6_K.gguf-split-a` * `airoboros-l2-70b-3.1.Q6_K.gguf-split-b` ### q8_0 Please download: * `airoboros-l2-70b-3.1.Q8_0.gguf-split-a` * `airoboros-l2-70b-3.1.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat airoboros-l2-70b-3.1.Q6_K.gguf-split-* > airoboros-l2-70b-3.1.Q6_K.gguf && rm airoboros-l2-70b-3.1.Q6_K.gguf-split-* cat airoboros-l2-70b-3.1.Q8_0.gguf-split-* > airoboros-l2-70b-3.1.Q8_0.gguf && rm airoboros-l2-70b-3.1.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B airoboros-l2-70b-3.1.Q6_K.gguf-split-a + airoboros-l2-70b-3.1.Q6_K.gguf-split-b airoboros-l2-70b-3.1.Q6_K.gguf del airoboros-l2-70b-3.1.Q6_K.gguf-split-a airoboros-l2-70b-3.1.Q6_K.gguf-split-b COPY /B airoboros-l2-70b-3.1.Q8_0.gguf-split-a + airoboros-l2-70b-3.1.Q8_0.gguf-split-b airoboros-l2-70b-3.1.Q8_0.gguf del airoboros-l2-70b-3.1.Q8_0.gguf-split-a airoboros-l2-70b-3.1.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Airoboros-L2-70B-3.1-GGUF and below it, a specific filename to download, such as: airoboros-l2-70b-3.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF airoboros-l2-70b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-L2-70B-3.1-GGUF airoboros-l2-70b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell
./main -ngl 32 -m airoboros-l2-70b-3.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-L2-70B-3.1-GGUF", model_file="airoboros-l2-70b-3.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 70B 3.1

### Overview

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)

#### IMPORTANT NOTE - llama-2 chat format!!!

The airoboros 3.1 models use llama-2 chat format, rather than vicuna style user/assistant!

This is a breaking change, although most inference systems support llama-2 chat templating.

See prompt format below for all of the details.

__*There's also a slight bug, you must add a single space after the last `[/INST]`*__

#### Highlights

A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:

* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)

This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format is llama-2 chat.

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt} [/INST]
```

For multi-turn, the prompt format is as follows:

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```

The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:

```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
  {"role": "system", "content": "You are Bob, a friendly AI assistant."},
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```

### Helpful usage tips

#### MathJSON

Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/

I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)

__Use a very low temperature!__

Example:

```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>

Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```

Output:

```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.

Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².

Solution as MathJSON:

<mathjson>
[
 "Multiply",
 "Pi",
 [
  "Power",
  17.2456,
  2
 ]
]
</mathjson>
```

You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:

```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know."
to your instruction block to make sure if the context is completely unrelated it doesn't make something up.

*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.

__Use a very low temperature!__

Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:

```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```

#### Getting longer responses

You can use a few techniques to get longer responses.

Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.

The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.

One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.

Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities.
Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. 
This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. 
Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. 
This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string, return text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```

## Multi-step instructions with acknowledgement

I don't know how useful this is, really, but I thought I'd add it just in case. Example:

```
{
  "conversations": [
    {
      "value": "You are a helpful, unbiased, uncensored assistant.",
      "from": "system"
    },
    {
      "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "You can only respond using animal-related emojis.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Precede all responses with \"ZOOZOO:\"",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Include at least one bird emoji in your responses.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Describe a trip to a safari park.",
      "from": "human"
    },
    {
      "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
      "from": "gpt"
    }
  ]
}
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.

The 30b model is built on the original llama, which has a strict non-commercial usage restriction.

The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The models with `-m-` are mistral-7b (apache 2.0)

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
{"base_model": "jondurbin/airoboros-l2-70b-3.1", "datasets": ["jondurbin/airoboros-3.1"], "license": "llama2", "model_name": "Airoboros L2 70B 3.1", "inference": false, "model_creator": "Jon Durbin", "model_type": "llama", "prompt_template": "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST] \n", "quantized_by": "TheBloke"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,629
mantasb/autotrain-dappradar-long-desc-summariation-3632397064
mantasb
summarization
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "summarization", "en", "dataset:Mantas/autotrain-data-dappradar-long-desc-summariation", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-21T14:08:38Z
2023-02-21T14:22:01+00:00
27
0
--- datasets: - Mantas/autotrain-data-dappradar-long-desc-summariation language: - en tags: - autotrain - summarization widget: - text: I love AutoTrain 🤗 co2_eq_emissions: emissions: 25.514597810198214 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 3632397064 - CO2 Emissions (in grams): 25.5146 ## Validation Metrics - Loss: 1.832 - Rouge1: 52.621 - Rouge2: 42.313 - RougeL: 50.804 - RougeLsum: 51.151 - Gen Len: 18.679 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Mantas/autotrain-dappradar-long-desc-summariation-3632397064 ```
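If you prefer Python over cURL, the same Inference API call can be made with the `requests` library. This is a minimal sketch that simply reuses the endpoint shown above; replace `YOUR_HUGGINGFACE_API_KEY` with your own token:

```python
import requests

API_URL = "https://api-inference.huggingface.co/Mantas/autotrain-dappradar-long-desc-summariation-3632397064"
HEADERS = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

def summarize(text: str):
    # Send the input text to the hosted endpoint and return the parsed JSON response
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    return response.json()

print(summarize("I love AutoTrain"))
```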
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 3632397064 - CO2 Emissions (in grams): 25.5146 ## Validation Metrics - Loss: 1.832 - Rouge1: 52.621 - Rouge2: 42.313 - RougeL: 50.804 - RougeLsum: 51.151 - Gen Len: 18.679 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Mantas/autotrain-dappradar-long-desc-summariation-3632397064 ```
{"datasets": ["Mantas/autotrain-data-dappradar-long-desc-summariation"], "language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 25.514597810198214}}
task
[ "SUMMARIZATION" ]
43,630
professorf/SmolLM2-1.7B-Instruct-gguf
professorf
text-generation
[ "transformers", "gguf", "safetensors", "onnx", "transformers.js", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-12-09T02:09:14Z
2024-12-09T02:22:26+00:00
8
0
--- language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - safetensors - onnx - transformers.js --- <hr> <center>GGUF Quantized SmolLM2-1.7B Models<br> by Professor Nick V. Flor<br> For research reproducibility purposes</center> <hr> # SmolLM2 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/oWWfzW4RbWkVIo7f-5444.png) ## Table of Contents 1. [Model Summary](##model-summary) 2. [Limitations](##limitations) 3. [Training](##training) 4. [License](##license) 5. [Citation](##citation) ## Model Summary SmolLM2 is a family of compact language models available in three size: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). The instruct model additionally supports tasks such as text rewriting, summarization and function calling (for the 1.7B) thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1). You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and finetuning code in the [ali] https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2 For more details refer to: https://github.com/huggingface/smollm. You will find pre-training, post-training, evaluation and local inference code. ### How to use ### Transformers ```bash pip install transformers ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) messages = [{"role": "user", "content": "What is the capital of France."}] input_text=tokenizer.apply_chat_template(messages, tokenize=False) print(input_text) inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ### Chat in TRL You can also use the TRL CLI to chat with the model from the terminal: ```bash pip install trl trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu ``` ## Evaluation In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them. 
## Base Pre-Trained Model | Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M | |:-------------------|:------------:|:------------:|:------------:| | HellaSwag | **54.5** | 51.2 | 51.8 | | ARC (Average) | **53.0** | 45.4 | 50.1 | | PIQA | **71.7** | 69.9 | 71.6 | | MMLU (cloze) | **35.8** | 33.7 | 34.4 | | CommonsenseQA | **38.0** | 31.6 | 35.3 | | TriviaQA | **16.9** | 4.3 | 9.1 | | Winogrande | 52.5 | **54.1** | 52.8 | | OpenBookQA | **37.4** | **37.4** | 37.2 | | GSM8K (5-shot) | 3.2 | **33.4** | 1.6 | ## Instruction Model | Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct | |:-----------------------------|:---------------------:|:---------------------:|:---------------------:| | IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 | | MT-Bench | 3.66 | **4.16** | 3.37 | | HellaSwag | **52.1** | 48.0 | 47.9 | | ARC (Average) | **43.7** | 37.3 | 38.8 | | PIQA | **70.8** | 67.2 | 69.4 | | MMLU (cloze) | **32.8** | 31.7 | 30.6 | | BBH (3-shot) | 27.3 | **30.7** | 24.4 | | GSM8K (5-shot) | 7.43 | **26.8** | 1.36 | ## Limitations SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content. ## Training ### Model - **Architecture:** Transformer decoder - **Pretraining tokens:** 4T - **Precision:** bfloat16 ### Hardware - **GPUs:** 64 H100 ### Software - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main) ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Citation ```bash @misc{allal2024SmolLM2, title={SmolLM2 - with great data, comes great performance}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf}, year={2024}, } ```
null
Non_BioNLP
<hr> <center>GGUF Quantized SmolLM2-1.7B Models<br> by Professor Nick V. Flor<br> For research reproducibility purposes</center> <hr> # SmolLM2 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/oWWfzW4RbWkVIo7f-5444.png) ## Table of Contents 1. [Model Summary](##model-summary) 2. [Limitations](##limitations) 3. [Training](##training) 4. [License](##license) 5. [Citation](##citation) ## Model Summary SmolLM2 is a family of compact language models available in three size: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). The instruct model additionally supports tasks such as text rewriting, summarization and function calling (for the 1.7B) thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1). You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and finetuning code in the [ali] https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2 For more details refer to: https://github.com/huggingface/smollm. You will find pre-training, post-training, evaluation and local inference code. ### How to use ### Transformers ```bash pip install transformers ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")` model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) messages = [{"role": "user", "content": "What is the capital of France."}] input_text=tokenizer.apply_chat_template(messages, tokenize=False) print(input_text) inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ### Chat in TRL You can also use the TRL CLI to chat with the model from the terminal: ```bash pip install trl trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu ``` ## Evaluation In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them. 
## Base Pre-Trained Model | Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M | |:-------------------|:------------:|:------------:|:------------:| | HellaSwag | **54.5** | 51.2 | 51.8 | | ARC (Average) | **53.0** | 45.4 | 50.1 | | PIQA | **71.7** | 69.9 | 71.6 | | MMLU (cloze) | **35.8** | 33.7 | 34.4 | | CommonsenseQA | **38.0** | 31.6 | 35.3 | | TriviaQA | **16.9** | 4.3 | 9.1 | | Winogrande | 52.5 | **54.1** | 52.8 | | OpenBookQA | **37.4** | **37.4** | 37.2 | | GSM8K (5-shot) | 3.2 | **33.4** | 1.6 | ## Instruction Model | Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct | |:-----------------------------|:---------------------:|:---------------------:|:---------------------:| | IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 | | MT-Bench | 3.66 | **4.16** | 3.37 | | HellaSwag | **52.1** | 48.0 | 47.9 | | ARC (Average) | **43.7** | 37.3 | 38.8 | | PIQA | **70.8** | 67.2 | 69.4 | | MMLU (cloze) | **32.8** | 31.7 | 30.6 | | BBH (3-shot) | 27.3 | **30.7** | 24.4 | | GSM8K (5-shot) | 7.43 | **26.8** | 1.36 | ## Limitations SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content. ## Training ### Model - **Architecture:** Transformer decoder - **Pretraining tokens:** 4T - **Precision:** bfloat16 ### Hardware - **GPUs:** 64 H100 ### Software - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main) ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Citation ```bash @misc{allal2024SmolLM2, title={SmolLM2 - with great data, comes great performance}, author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf}, year={2024}, } ```
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["safetensors", "onnx", "transformers.js"]}
task
[ "SUMMARIZATION" ]
43,631
YakovElm/Apache10SetFitModel
YakovElm
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-05-19T10:49:47Z
2023-05-19T12:50:27+00:00
8
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # YakovElm/Apache10SetFitModel This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("YakovElm/Apache10SetFitModel") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# YakovElm/Apache10SetFitModel This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("YakovElm/Apache10SetFitModel") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
43,633
mirfan899/llama2-7b-usum
mirfan899
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "ur", "dataset:mirfan899/ur_news_sum", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:quantized:NousResearch/Llama-2-7b-chat-hf", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2025-01-07T07:08:58Z
2025-01-07T07:25:51+00:00
30
1
--- base_model: - NousResearch/Llama-2-7b-chat-hf datasets: - mirfan899/ur_news_sum language: - ur library_name: transformers license: mit --- # Model Card for Model ID llama 2 model for news summarization ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Muhammad Irfan - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LLM - **Language(s) (NLP):** Urdu - **License:** MIT - **Finetuned from model [optional]:** NousResearch/Llama-2-7b-chat-hf ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Dataset:** (mirfan899/ur_news_sum)[mirfan899/ur_news_sum] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ### How to Get Started with the Model Use this model to summarize the news article. ```python from datasets import load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging, ) dataset = load_dataset("mirfan899/ur_news_sum") DEFAULT_SYSTEM_PROMPT = """ Below is a news article written by a human. Write a summary of the news. """.strip() conversation = dataset["test"][0]["text"] input = f"""### Instruction: {DEFAULT_SYSTEM_PROMPT} ### Input: {conversation.strip()} ### Response: """.strip() # Run text generation pipeline with our next model pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200) result = pipe(f"{input}") print(result[0]['generated_text']) ``` ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use Its based on News dataset and maybe not work well for different domains. <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations Use it for news summarization. <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Model ID llama 2 model for news summarization ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Muhammad Irfan - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LLM - **Language(s) (NLP):** Urdu - **License:** MIT - **Finetuned from model [optional]:** NousResearch/Llama-2-7b-chat-hf ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Dataset:** (mirfan899/ur_news_sum)[mirfan899/ur_news_sum] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ### How to Get Started with the Model Use this model to summarize the news article. ```python from datasets import load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging, ) dataset = load_dataset("mirfan899/ur_news_sum") DEFAULT_SYSTEM_PROMPT = """ Below is a news article written by a human. Write a summary of the news. """.strip() conversation = dataset["test"][0]["text"] input = f"""### Instruction: {DEFAULT_SYSTEM_PROMPT} ### Input: {conversation.strip()} ### Response: """.strip() # Run text generation pipeline with our next model pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200) result = pipe(f"{input}") print(result[0]['generated_text']) ``` ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use Its based on News dataset and maybe not work well for different domains. <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations Use it for news summarization. <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"base_model": ["NousResearch/Llama-2-7b-chat-hf"], "datasets": ["mirfan899/ur_news_sum"], "language": ["ur"], "library_name": "transformers", "license": "mit"}
task
[ "SUMMARIZATION" ]
43,634
gokuls/bert-tiny-Massive-intent-KD-BERT_and_distilBERT
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:massive", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-09-24T18:43:27Z
2022-09-24T19:21:26+00:00
12
0
--- datasets: - massive license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert-tiny-Massive-intent-KD-BERT_and_distilBERT results: - task: type: text-classification name: Text Classification dataset: name: massive type: massive config: en-US split: train args: en-US metrics: - type: accuracy value: 0.8470241023118544 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-tiny-Massive-intent-KD-BERT_and_distilBERT This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 2.3729 - Accuracy: 0.8470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 15.1159 | 1.0 | 720 | 12.8257 | 0.2253 | | 12.9949 | 2.0 | 1440 | 10.9891 | 0.4304 | | 11.3865 | 3.0 | 2160 | 9.5622 | 0.5032 | | 10.0553 | 4.0 | 2880 | 8.3700 | 0.5539 | | 8.9431 | 5.0 | 3600 | 7.4127 | 0.6104 | | 8.0135 | 6.0 | 4320 | 6.6185 | 0.6286 | | 7.1987 | 7.0 | 5040 | 5.9517 | 0.6818 | | 6.5168 | 8.0 | 5760 | 5.3879 | 0.7118 | | 5.9352 | 9.0 | 6480 | 4.9426 | 0.7275 | | 5.4299 | 10.0 | 7200 | 4.5637 | 0.7413 | | 5.0017 | 11.0 | 7920 | 4.2379 | 0.7585 | | 4.5951 | 12.0 | 8640 | 3.9699 | 0.7678 | | 4.2849 | 13.0 | 9360 | 3.7416 | 0.7737 | | 3.991 | 14.0 | 10080 | 3.5502 | 0.7865 | | 3.7455 | 15.0 | 10800 | 3.4090 | 0.7900 | | 3.5315 | 16.0 | 11520 | 3.3053 | 0.7914 | | 3.345 | 17.0 | 12240 | 3.1670 | 0.8003 | | 3.1767 | 18.0 | 12960 | 3.0739 | 0.8013 | | 3.0322 | 19.0 | 13680 | 2.9927 | 0.8047 | | 2.8864 | 20.0 | 14400 | 2.9366 | 0.8037 | | 2.7728 | 21.0 | 15120 | 2.8666 | 0.8091 | | 2.6732 | 22.0 | 15840 | 2.8146 | 0.8126 | | 2.5726 | 23.0 | 16560 | 2.7588 | 0.8195 | | 2.493 | 24.0 | 17280 | 2.7319 | 0.8273 | | 2.4183 | 25.0 | 18000 | 2.6847 | 0.8249 | | 2.3526 | 26.0 | 18720 | 2.6317 | 0.8323 | | 2.2709 | 27.0 | 19440 | 2.6071 | 0.8288 | | 2.2125 | 28.0 | 20160 | 2.5982 | 0.8323 | | 2.1556 | 29.0 | 20880 | 2.5546 | 0.8337 | | 2.1042 | 30.0 | 21600 | 2.5278 | 0.8318 | | 2.054 | 31.0 | 22320 | 2.5005 | 0.8411 | | 2.0154 | 32.0 | 23040 | 2.4891 | 0.8347 | | 1.9785 | 33.0 | 23760 | 2.4633 | 0.8367 | | 1.9521 | 34.0 | 24480 | 2.4451 | 0.8421 | | 1.9247 | 35.0 | 25200 | 2.4370 | 0.8416 | | 1.8741 | 36.0 | 25920 | 2.4197 | 0.8446 | | 1.8659 | 37.0 | 26640 | 2.4081 | 0.8406 | | 1.8367 | 38.0 | 27360 | 2.3979 | 0.8426 | | 1.8153 | 39.0 | 28080 | 2.3758 | 0.8451 | | 1.7641 | 40.0 | 28800 | 2.3729 | 0.8470 | | 1.7608 | 41.0 | 29520 | 2.3683 | 0.8460 | | 1.7647 | 42.0 | 30240 | 2.3628 | 0.8446 | | 1.7656 | 43.0 | 30960 | 2.3492 | 0.8470 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
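## Example usage

A minimal sketch of how the fine-tuned intent classifier can be queried with the standard 🤗 `pipeline` API; the utterance below is only an illustrative MASSIVE-style input, not taken from the evaluation data.

```python
from transformers import pipeline

# Load the fine-tuned intent classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="gokuls/bert-tiny-Massive-intent-KD-BERT_and_distilBERT",
)

# Predict the intent label for a single utterance
print(classifier("wake me up at nine am on friday"))
```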
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-tiny-Massive-intent-KD-BERT_and_distilBERT This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 2.3729 - Accuracy: 0.8470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 15.1159 | 1.0 | 720 | 12.8257 | 0.2253 | | 12.9949 | 2.0 | 1440 | 10.9891 | 0.4304 | | 11.3865 | 3.0 | 2160 | 9.5622 | 0.5032 | | 10.0553 | 4.0 | 2880 | 8.3700 | 0.5539 | | 8.9431 | 5.0 | 3600 | 7.4127 | 0.6104 | | 8.0135 | 6.0 | 4320 | 6.6185 | 0.6286 | | 7.1987 | 7.0 | 5040 | 5.9517 | 0.6818 | | 6.5168 | 8.0 | 5760 | 5.3879 | 0.7118 | | 5.9352 | 9.0 | 6480 | 4.9426 | 0.7275 | | 5.4299 | 10.0 | 7200 | 4.5637 | 0.7413 | | 5.0017 | 11.0 | 7920 | 4.2379 | 0.7585 | | 4.5951 | 12.0 | 8640 | 3.9699 | 0.7678 | | 4.2849 | 13.0 | 9360 | 3.7416 | 0.7737 | | 3.991 | 14.0 | 10080 | 3.5502 | 0.7865 | | 3.7455 | 15.0 | 10800 | 3.4090 | 0.7900 | | 3.5315 | 16.0 | 11520 | 3.3053 | 0.7914 | | 3.345 | 17.0 | 12240 | 3.1670 | 0.8003 | | 3.1767 | 18.0 | 12960 | 3.0739 | 0.8013 | | 3.0322 | 19.0 | 13680 | 2.9927 | 0.8047 | | 2.8864 | 20.0 | 14400 | 2.9366 | 0.8037 | | 2.7728 | 21.0 | 15120 | 2.8666 | 0.8091 | | 2.6732 | 22.0 | 15840 | 2.8146 | 0.8126 | | 2.5726 | 23.0 | 16560 | 2.7588 | 0.8195 | | 2.493 | 24.0 | 17280 | 2.7319 | 0.8273 | | 2.4183 | 25.0 | 18000 | 2.6847 | 0.8249 | | 2.3526 | 26.0 | 18720 | 2.6317 | 0.8323 | | 2.2709 | 27.0 | 19440 | 2.6071 | 0.8288 | | 2.2125 | 28.0 | 20160 | 2.5982 | 0.8323 | | 2.1556 | 29.0 | 20880 | 2.5546 | 0.8337 | | 2.1042 | 30.0 | 21600 | 2.5278 | 0.8318 | | 2.054 | 31.0 | 22320 | 2.5005 | 0.8411 | | 2.0154 | 32.0 | 23040 | 2.4891 | 0.8347 | | 1.9785 | 33.0 | 23760 | 2.4633 | 0.8367 | | 1.9521 | 34.0 | 24480 | 2.4451 | 0.8421 | | 1.9247 | 35.0 | 25200 | 2.4370 | 0.8416 | | 1.8741 | 36.0 | 25920 | 2.4197 | 0.8446 | | 1.8659 | 37.0 | 26640 | 2.4081 | 0.8406 | | 1.8367 | 38.0 | 27360 | 2.3979 | 0.8426 | | 1.8153 | 39.0 | 28080 | 2.3758 | 0.8451 | | 1.7641 | 40.0 | 28800 | 2.3729 | 0.8470 | | 1.7608 | 41.0 | 29520 | 2.3683 | 0.8460 | | 1.7647 | 42.0 | 30240 | 2.3628 | 0.8446 | | 1.7656 | 43.0 | 30960 | 2.3492 | 0.8470 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
{"datasets": ["massive"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-tiny-Massive-intent-KD-BERT_and_distilBERT", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.8470241023118544, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,635
RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-11-07T06:04:00Z
2024-11-07T10:21:07+00:00
39
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Zion_Alpha_Instruction_Tuned_SLERP - GGUF - Model creator: https://huggingface.co/SicariusSicariiStuff/ - Original model: https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Zion_Alpha_Instruction_Tuned_SLERP.Q2_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q2_K.gguf) | Q2_K | 2.53GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K.gguf) | Q3_K | 3.28GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Zion_Alpha_Instruction_Tuned_SLERP.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_0.gguf) | Q4_0 | 3.83GB | | [Zion_Alpha_Instruction_Tuned_SLERP.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K.gguf) | Q4_K | 4.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_1.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_1.gguf) | Q4_1 | 4.24GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K.gguf) | Q5_K | 4.78GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_1.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_1.gguf) | Q5_1 | 5.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q6_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q6_K.gguf) | Q6_K | 5.53GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q8_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- language: - en license: apache-2.0 --- <div align="center"> <b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b> </div> <img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;"> # Model Details Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook. # Another world record broken by Zion_Alpha! On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size! <div align="center"> <img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> </div> # Future Plans My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue. # Looking for Sponsors Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster. # Papers? Maybe. We'll see. No promises here 🤓 # Contact Details I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone. 
# Versions and QUANTS - Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha) - Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF) # Model architecture Based on Mistral 7B. I didn't even bother to alter the tokenizer. # The recommended prompt setting is Debug-deterministic: ``` temperature: 1 top_p: 1 top_k: 1 typical_p: 1 min_p: 1 repetition_penalty: 1 ``` # The recommended instruction template is Mistral: ``` {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- message['content'] -}} {%- else -%} {%- if message['role'] == 'user' -%} {{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}} {%- else -%} {{-'' + message['content'] + '</s>' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-''-}} {%- endif -%} ``` # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b> </div> <img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b> </div> <img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> <div align="center"> <b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8) </div> ### History The model was originally trained about 2 month after Mistral (v0.1) was released. As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score) <img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> ### Support <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻 - [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
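### Example usage (GGUF)

Since this repository hosts the GGUF quantizations, a minimal local-inference sketch with `llama-cpp-python` is shown below, using one of the files listed in the table above, the Mistral `[INST]` format, and the debug-deterministic settings recommended in the card; the file path and Hebrew prompt are illustrative.

```python
from llama_cpp import Llama

# Load one of the GGUF quantizations listed above (local path is illustrative)
llm = Llama(model_path="Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_M.gguf", n_ctx=4096)

# Mistral instruction format, as recommended in the card
prompt = "[INST] כתוב סיפור קצר על חתול [/INST]"

# Debug-deterministic sampling settings from the card
out = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,
    top_p=1.0,
    top_k=1,
    repeat_penalty=1.0,
)
print(out["choices"][0]["text"])
```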
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Zion_Alpha_Instruction_Tuned_SLERP - GGUF - Model creator: https://huggingface.co/SicariusSicariiStuff/ - Original model: https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Zion_Alpha_Instruction_Tuned_SLERP.Q2_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q2_K.gguf) | Q2_K | 2.53GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K.gguf) | Q3_K | 3.28GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Zion_Alpha_Instruction_Tuned_SLERP.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_0.gguf) | Q4_0 | 3.83GB | | [Zion_Alpha_Instruction_Tuned_SLERP.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K.gguf) | Q4_K | 4.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q4_1.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q4_1.gguf) | Q4_1 | 4.24GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K.gguf) | Q5_K | 4.78GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q5_1.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q5_1.gguf) | Q5_1 | 5.07GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q6_K.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q6_K.gguf) | Q6_K | 5.53GB | | [Zion_Alpha_Instruction_Tuned_SLERP.Q8_0.gguf](https://huggingface.co/RichardErkhov/SicariusSicariiStuff_-_Zion_Alpha_Instruction_Tuned_SLERP-gguf/blob/main/Zion_Alpha_Instruction_Tuned_SLERP.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- language: - en license: apache-2.0 --- <div align="center"> <b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b> </div> <img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;"> # Model Details Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook. # Another world record broken by Zion_Alpha! On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size! <div align="center"> <img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> </div> # Future Plans My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue. # Looking for Sponsors Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster. # Papers? Maybe. We'll see. No promises here 🤓 # Contact Details I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone. 
# Versions and QUANTS - Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha) - Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF) # Model architecture Based on Mistral 7B. I didn't even bother to alter the tokenizer. # The recommended prompt setting is Debug-deterministic: ``` temperature: 1 top_p: 1 top_k: 1 typical_p: 1 min_p: 1 repetition_penalty: 1 ``` # The recommended instruction template is Mistral: ``` {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- message['content'] -}} {%- else -%} {%- if message['role'] == 'user' -%} {{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}} {%- else -%} {{-'' + message['content'] + '</s>' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-''-}} {%- endif -%} ``` # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b> </div> <img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b> </div> <img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> <div align="center"> <b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8) </div> ### History The model was originally trained about 2 month after Mistral (v0.1) was released. As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score) <img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> ### Support <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻 - [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
{}
task
[ "TRANSLATION" ]
43,636
RichardErkhov/M4-ai_-_Hercules-phi-2-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us" ]
2024-10-23T04:55:54Z
2024-10-23T06:27:28+00:00
50
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hercules-phi-2 - GGUF - Model creator: https://huggingface.co/M4-ai/ - Original model: https://huggingface.co/M4-ai/Hercules-phi-2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Hercules-phi-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q2_K.gguf) | Q2_K | 1.03GB | | [Hercules-phi-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_XS.gguf) | IQ3_XS | 1.12GB | | [Hercules-phi-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_S.gguf) | IQ3_S | 1.16GB | | [Hercules-phi-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_S.gguf) | Q3_K_S | 1.16GB | | [Hercules-phi-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_M.gguf) | IQ3_M | 1.23GB | | [Hercules-phi-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K.gguf) | Q3_K | 1.33GB | | [Hercules-phi-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_M.gguf) | Q3_K_M | 1.33GB | | [Hercules-phi-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_L.gguf) | Q3_K_L | 1.47GB | | [Hercules-phi-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ4_XS.gguf) | IQ4_XS | 1.43GB | | [Hercules-phi-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_0.gguf) | Q4_0 | 1.49GB | | [Hercules-phi-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ4_NL.gguf) | IQ4_NL | 1.5GB | | [Hercules-phi-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K_S.gguf) | Q4_K_S | 1.51GB | | [Hercules-phi-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K.gguf) | Q4_K | 1.62GB | | [Hercules-phi-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K_M.gguf) | Q4_K_M | 1.62GB | | [Hercules-phi-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_1.gguf) | Q4_1 | 1.65GB | | [Hercules-phi-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_0.gguf) | Q5_0 | 1.8GB | | [Hercules-phi-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K_S.gguf) | Q5_K_S | 1.8GB | | [Hercules-phi-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K.gguf) | Q5_K | 1.87GB | | [Hercules-phi-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K_M.gguf) | Q5_K_M | 1.87GB | | [Hercules-phi-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_1.gguf) | Q5_1 | 1.95GB | | [Hercules-phi-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q6_K.gguf) | Q6_K | 
2.13GB | | [Hercules-phi-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q8_0.gguf) | Q8_0 | 2.75GB | Original model description: --- library_name: transformers license: apache-2.0 datasets: - Locutusque/hercules-v4.5 language: - en inference: parameters: do_sample: true temperature: 1 top_p: 0.7 top_k: 4 max_new_tokens: 250 repetition_penalty: 1.1 --- # Hercules-phi-2 <!-- Provide a quick summary of what the model is/does. --> We fine-tuned phi2 on Locutusque's Hercules-v4.5. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it using all examples of Hercules-v4.5. - **Developed by:** M4-ai - **Language(s) (NLP):** English - **License:** apache-2.0 ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> General purpose assistant, question answering, chain-of-thought, etc.. ## Evaluation Coming soon ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Locutusque/hercules-v4.5 #### Training Hyperparameters - **Training regime:** bf16 non-mixed precision ## Technical Specifications #### Hardware We used 8 Kaggle TPUs, and we trained at a global batch size of 1152.
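## Example Usage

A minimal usage sketch with 🤗 Transformers, assuming the original (non-GGUF) `M4-ai/Hercules-phi-2` checkpoint and the sampling parameters listed in the card's inference settings; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Hercules-phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters taken from the card's inference settings
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    top_p=0.7,
    top_k=4,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```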
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hercules-phi-2 - GGUF - Model creator: https://huggingface.co/M4-ai/ - Original model: https://huggingface.co/M4-ai/Hercules-phi-2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Hercules-phi-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q2_K.gguf) | Q2_K | 1.03GB | | [Hercules-phi-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_XS.gguf) | IQ3_XS | 1.12GB | | [Hercules-phi-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_S.gguf) | IQ3_S | 1.16GB | | [Hercules-phi-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_S.gguf) | Q3_K_S | 1.16GB | | [Hercules-phi-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ3_M.gguf) | IQ3_M | 1.23GB | | [Hercules-phi-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K.gguf) | Q3_K | 1.33GB | | [Hercules-phi-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_M.gguf) | Q3_K_M | 1.33GB | | [Hercules-phi-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q3_K_L.gguf) | Q3_K_L | 1.47GB | | [Hercules-phi-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ4_XS.gguf) | IQ4_XS | 1.43GB | | [Hercules-phi-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_0.gguf) | Q4_0 | 1.49GB | | [Hercules-phi-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.IQ4_NL.gguf) | IQ4_NL | 1.5GB | | [Hercules-phi-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K_S.gguf) | Q4_K_S | 1.51GB | | [Hercules-phi-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K.gguf) | Q4_K | 1.62GB | | [Hercules-phi-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_K_M.gguf) | Q4_K_M | 1.62GB | | [Hercules-phi-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q4_1.gguf) | Q4_1 | 1.65GB | | [Hercules-phi-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_0.gguf) | Q5_0 | 1.8GB | | [Hercules-phi-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K_S.gguf) | Q5_K_S | 1.8GB | | [Hercules-phi-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K.gguf) | Q5_K | 1.87GB | | [Hercules-phi-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_K_M.gguf) | Q5_K_M | 1.87GB | | [Hercules-phi-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q5_1.gguf) | Q5_1 | 1.95GB | | [Hercules-phi-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q6_K.gguf) | Q6_K | 2.13GB | | 
[Hercules-phi-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-phi-2-gguf/blob/main/Hercules-phi-2.Q8_0.gguf) | Q8_0 | 2.75GB | Original model description: --- library_name: transformers license: apache-2.0 datasets: - Locutusque/hercules-v4.5 language: - en inference: parameters: do_sample: true temperature: 1 top_p: 0.7 top_k: 4 max_new_tokens: 250 repetition_penalty: 1.1 --- # Hercules-phi-2 <!-- Provide a quick summary of what the model is/does. --> We fine-tuned phi2 on Locutusque's Hercules-v4.5. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it using all examples of Hercules-v4.5. - **Developed by:** M4-ai - **Language(s) (NLP):** English - **License:** apache-2.0 ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> General purpose assistant, question answering, chain-of-thought, etc.. ## Evaluation Coming soon ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Locutusque/hercules-v4.5 #### Training Hyperparameters - **Training regime:** bf16 non-mixed precision ## Technical Specifications #### Hardware We used 8 Kaggle TPUs, and we trained at a global batch size of 1152.
{}
task
[ "QUESTION_ANSWERING" ]
43,637
yjlee1011/ncodeR_data_setfit_multilabel_64_samples
yjlee1011
text-classification
[ "sentence-transformers", "safetensors", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-11-04T22:49:13Z
2023-11-04T22:49:47+00:00
46
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # yjlee1011/ncodeR_data_setfit_multilabel_64_samples This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_64_samples") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# yjlee1011/ncodeR_data_setfit_multilabel_64_samples This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("yjlee1011/ncodeR_data_setfit_multilabel_64_samples") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
43,638
unsloth/aya-vision-8b
unsloth
image-text-to-text
[ "transformers", "safetensors", "aya_vision", "image-text-to-text", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "arxiv:2412.04261", "base_model:CohereForAI/aya-vision-8b", "base_model:finetune:CohereForAI/aya-vision-8b", "license:cc-by-nc-4.0", "region:us" ]
2025-03-14T03:50:40Z
2025-03-14T09:41:30+00:00
24
0
--- base_model: CohereForAI/aya-vision-8b language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi library_name: transformers license: cc-by-nc-4.0 pipeline_tag: image-text-to-text inference: false extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time. extra_gated_fields: Name: text Affiliation: text Country: country I agree to use this model for non-commercial use ONLY: checkbox --- # Model Card for Aya Vision 8B <img src="aya-vision-8B.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> **C4AI Aya Vision 8B** is an open weights research release of an 8-billion parameter model with advanced capabilities optimized for a variety of vision-language use cases, including OCR, captioning, visual reasoning, summarization, question answering, code, and more. It is a multilingual model trained to excel in 23 languages in vision and language. This model card corresponds to the 8-billion version of the Aya Vision model. We also released a 32-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-vision-32B). - Developed by: [Cohere For AI](https://cohere.for.ai/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-aya-vision-8b - Model Size: 8 billion parameters - Context length: 16K ## Try it: Aya Vision in Action Before downloading the weights, you can try Aya Vision chat in the [Cohere playground](https://dashboard.cohere.com/playground/chat) or our dedicated [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/aya_expanse) for interactive exploration. ## WhatsApp Integration You can also talk to Aya Vision through the popular messaging service WhatsApp. Use this [link](https://wa.me/14313028498) to open a WhatsApp chatbox with Aya Vision. If you don’t have WhatsApp downloaded on your machine you might need to do that, or, if you have it on your phone, you can follow the on-screen instructions to link your phone and WhatsApp Web. By the end, you should see a text window which you can use to chat with the model. More details about our WhatsApp integration are available [here](https://docs.cohere.com/v2/docs/aya#aya-expanse-integration-with-whatsapp). ## Example Notebook You can also check out the following [notebook](https://colab.research.google.com/github/cohere-ai/cohere-developer-experience/blob/main/notebooks/guides/aya_vision_intro.ipynb) to understand how to use Aya Vision for different use cases. 
## How to Use Aya Vision Please install `transformers` from the source repository that includes the necessary changes for this model: ```python # pip install 'git+https://github.com/huggingface/[email protected]' from transformers import AutoProcessor, AutoModelForImageTextToText import torch model_id = "CohereForAI/aya-vision-8b" processor = AutoProcessor.from_pretrained(model_id) model = AutoModelForImageTextToText.from_pretrained( model_id, device_map="auto", torch_dtype=torch.float16 ) # Format message with the aya-vision chat template messages = [ {"role": "user", "content": [ {"type": "image", "url": "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"}, {"type": "text", "text": "चित्र में लिखा पाठ क्या कहता है?"}, ]}, ] inputs = processor.apply_chat_template( messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt" ).to(model.device) gen_tokens = model.generate( **inputs, max_new_tokens=300, do_sample=True, temperature=0.3, ) print(processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)) ``` You can also use the model directly using transformers `pipeline` abstraction: ```python from transformers import pipeline pipe = pipeline(model="CohereForAI/aya-vision-8b", task="image-text-to-text", device_map="auto") # Format message with the aya-vision chat template messages = [ {"role": "user", "content": [ {"type": "image", "url": "https://media.istockphoto.com/id/458012057/photo/istanbul-turkey.jpg?s=612x612&w=0&k=20&c=qogAOVvkpfUyqLUMr_XJQyq-HkACXyYUSZbKhBlPrxo="}, {"type": "text", "text": "Bu resimde hangi anıt gösterilmektedir?"}, ]}, ] outputs = pipe(text=messages, max_new_tokens=300, return_full_text=False) print(outputs) ``` ## Model Details **Input:** Model accepts input text and images. **Output:** Model generates text. **Model Architecture:** This is a vision-language model that uses a multilingual language model based on [C4AI Command R7B](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024) and further post-trained with the [Aya Expanse recipe](https://arxiv.org/abs/2412.04261), paired with [SigLIP2-patch14-384](https://huggingface.co/google/siglip2-so400m-patch14-384) vision encoder through a multimodal adapter for vision-language understanding. **Image Processing:** We use **169 visual tokens** to encode an image tile with a resolution of **364x364 pixels**. Input images of arbitrary sizes are mapped to the nearest supported resolution based on the aspect ratio. Aya Vision uses up to 12 input tiles and a thumbnail (resized to 364x364) (2197 image tokens). **Languages covered:** The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese (Simplified and Traditional), Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian. **Context length**: Aya Vision 8B supports a context length of 16K. For more details about how the model was trained, check out [our blogpost](https://huggingface.co/blog/aya-vision). 
## Evaluation We evaluated Aya Vision 8B against [Pangea 7B](https://huggingface.co/neulab/Pangea-7B), [Llama-3.2 11B Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision), [Molmo-D 7B](https://huggingface.co/allenai/Molmo-7B-D-0924), [Qwen2.5-VL 7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), [Pixtral 12B](https://huggingface.co/mistralai/Pixtral-12B-2409), and [Gemini Flash 1.5 8B](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/) using [Aya Vision Benchmark](https://huggingface.co/datasets/CohereForAI/AyaVisionBench) and [m-WildVision](https://huggingface.co/datasets/CohereForAI/m-WildVision). Win-rates were determined using claude-3-7-sonnet-20250219 as a judge, based on the superior judge performance compared to other models. We also evaluated Aya Vision 8B’s performance for text-only input against the same models using [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard), a challenging open-ended generation evaluation, measured using win-rates using gpt-4o-2024-11-20 as a judge. <!-- <img src="Aya_Vision_8B_Combined_Win_Rates.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> --> <img src="AyaVision8BWinRates(AyaVisionBench).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> <img src="AyaVision8BWinRates(m-WildVision).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> <img src="Aya_Vision_8BvsPangea(AyaVisionBench).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> <img src="EfficiencyvsPerformance.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected]. ### Terms of Use We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant 8 billion parameter Vision-Language Model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
null
Non_BioNLP
# Model Card for Aya Vision 8B <img src="aya-vision-8B.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> **C4AI Aya Vision 8B** is an open weights research release of an 8-billion parameter model with advanced capabilities optimized for a variety of vision-language use cases, including OCR, captioning, visual reasoning, summarization, question answering, code, and more. It is a multilingual model trained to excel in 23 languages in vision and language. This model card corresponds to the 8-billion version of the Aya Vision model. We also released a 32-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-vision-32B). - Developed by: [Cohere For AI](https://cohere.for.ai/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-aya-vision-8b - Model Size: 8 billion parameters - Context length: 16K ## Try it: Aya Vision in Action Before downloading the weights, you can try Aya Vision chat in the [Cohere playground](https://dashboard.cohere.com/playground/chat) or our dedicated [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/aya_expanse) for interactive exploration. ## WhatsApp Integration You can also talk to Aya Vision through the popular messaging service WhatsApp. Use this [link](https://wa.me/14313028498) to open a WhatsApp chatbox with Aya Vision. If you don’t have WhatsApp downloaded on your machine you might need to do that, or, if you have it on your phone, you can follow the on-screen instructions to link your phone and WhatsApp Web. By the end, you should see a text window which you can use to chat with the model. More details about our WhatsApp integration are available [here](https://docs.cohere.com/v2/docs/aya#aya-expanse-integration-with-whatsapp). ## Example Notebook You can also check out the following [notebook](https://colab.research.google.com/github/cohere-ai/cohere-developer-experience/blob/main/notebooks/guides/aya_vision_intro.ipynb) to understand how to use Aya Vision for different use cases. 
## How to Use Aya Vision Please install `transformers` from the source repository that includes the necessary changes for this model: ```python # pip install 'git+https://github.com/huggingface/[email protected]' from transformers import AutoProcessor, AutoModelForImageTextToText import torch model_id = "CohereForAI/aya-vision-8b" processor = AutoProcessor.from_pretrained(model_id) model = AutoModelForImageTextToText.from_pretrained( model_id, device_map="auto", torch_dtype=torch.float16 ) # Format message with the aya-vision chat template messages = [ {"role": "user", "content": [ {"type": "image", "url": "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"}, {"type": "text", "text": "चित्र में लिखा पाठ क्या कहता है?"}, ]}, ] inputs = processor.apply_chat_template( messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt" ).to(model.device) gen_tokens = model.generate( **inputs, max_new_tokens=300, do_sample=True, temperature=0.3, ) print(processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)) ``` You can also use the model directly using transformers `pipeline` abstraction: ```python from transformers import pipeline pipe = pipeline(model="CohereForAI/aya-vision-8b", task="image-text-to-text", device_map="auto") # Format message with the aya-vision chat template messages = [ {"role": "user", "content": [ {"type": "image", "url": "https://media.istockphoto.com/id/458012057/photo/istanbul-turkey.jpg?s=612x612&w=0&k=20&c=qogAOVvkpfUyqLUMr_XJQyq-HkACXyYUSZbKhBlPrxo="}, {"type": "text", "text": "Bu resimde hangi anıt gösterilmektedir?"}, ]}, ] outputs = pipe(text=messages, max_new_tokens=300, return_full_text=False) print(outputs) ``` ## Model Details **Input:** Model accepts input text and images. **Output:** Model generates text. **Model Architecture:** This is a vision-language model that uses a multilingual language model based on [C4AI Command R7B](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024) and further post-trained with the [Aya Expanse recipe](https://arxiv.org/abs/2412.04261), paired with [SigLIP2-patch14-384](https://huggingface.co/google/siglip2-so400m-patch14-384) vision encoder through a multimodal adapter for vision-language understanding. **Image Processing:** We use **169 visual tokens** to encode an image tile with a resolution of **364x364 pixels**. Input images of arbitrary sizes are mapped to the nearest supported resolution based on the aspect ratio. Aya Vision uses up to 12 input tiles and a thumbnail (resized to 364x364) (2197 image tokens). **Languages covered:** The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese (Simplified and Traditional), Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian. **Context length**: Aya Vision 8B supports a context length of 16K. For more details about how the model was trained, check out [our blogpost](https://huggingface.co/blog/aya-vision). 
## Evaluation

We evaluated Aya Vision 8B against [Pangea 7B](https://huggingface.co/neulab/Pangea-7B), [Llama-3.2 11B Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision), [Molmo-D 7B](https://huggingface.co/allenai/Molmo-7B-D-0924), [Qwen2.5-VL 7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), [Pixtral 12B](https://huggingface.co/mistralai/Pixtral-12B-2409), and [Gemini Flash 1.5 8B](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/) using the [Aya Vision Benchmark](https://huggingface.co/datasets/CohereForAI/AyaVisionBench) and [m-WildVision](https://huggingface.co/datasets/CohereForAI/m-WildVision). Win-rates were determined using claude-3-7-sonnet-20250219 as a judge, chosen for its superior judging performance compared to other models.

We also evaluated Aya Vision 8B's performance on text-only input against the same models using [m-ArenaHard](https://huggingface.co/datasets/CohereForAI/m-ArenaHard), a challenging open-ended generation evaluation, with win-rates measured using gpt-4o-2024-11-20 as a judge.

<!-- <img src="Aya_Vision_8B_Combined_Win_Rates.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/> -->

<img src="AyaVision8BWinRates(AyaVisionBench).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<img src="AyaVision8BWinRates(m-WildVision).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<img src="Aya_Vision_8BvsPangea(AyaVisionBench).png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<img src="EfficiencyvsPerformance.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

### Model Card Contact

For errors or additional questions about details in this model card, contact [email protected].

### Terms of Use

We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant 8 billion parameter Vision-Language Model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
{"base_model": "CohereForAI/aya-vision-8b", "language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "image-text-to-text", "inference": false, "extra_gated_prompt": "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time.", "extra_gated_fields": {"Name": "text", "Affiliation": "text", "Country": "country", "I agree to use this model for non-commercial use ONLY": "checkbox"}}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,639
RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-11-02T04:34:59Z
2024-11-02T10:38:41+00:00
41
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Linkbricks-Horizon-AI-Korean-Pro-27B - GGUF - Model creator: https://huggingface.co/Saxo/ - Original model: https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Pro-27B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q2_K.gguf) | Q2_K | 9.73GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_S.gguf) | Q3_K_S | 11.33GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K.gguf) | Q3_K | 12.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_M.gguf) | Q3_K_M | 12.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_L.gguf) | Q3_K_L | 13.52GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_XS.gguf) | IQ4_XS | 13.92GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_0.gguf) | Q4_0 | 14.56GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_NL.gguf) | IQ4_NL | 14.65GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_S.gguf) | Q4_K_S | 14.66GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K.gguf) | Q4_K | 15.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_M.gguf) | Q4_K_M | 15.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_1.gguf) | Q4_1 | 16.07GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_0.gguf) | Q5_0 | 17.59GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_S.gguf) | Q5_K_S | 17.59GB | | 
[Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K.gguf) | Q5_K | 18.08GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_M.gguf) | Q5_K_M | 18.08GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_1.gguf) | Q5_1 | 19.1GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q6_K.gguf) | Q6_K | 20.81GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q8_0.gguf) | Q8_0 | 26.95GB | Original model description: --- library_name: transformers license: apache-2.0 base_model: google/gemma-2-27b-it datasets: - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled - Saxo/ko-news-corpus-1 - Saxo/ko-news-corpus-2 - Saxo/ko-news-corpus-3 - Saxo/ko-news-corpus-4 - Saxo/ko-news-corpus-5 - Saxo/ko-news-corpus-6 - Saxo/ko-news-corpus-7 - Saxo/ko-news-corpus-8 - Saxo/ko-news-corpus-9 - maywell/ko_Ultrafeedback_binarized - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo - lilacai/glaive-function-calling-v2-sharegpt - kuotient/gsm8k-ko language: - ko - en - jp - cn pipeline_tag: text-generation --- # Model Card for Model ID <div align="center"> <img src="https://www.linkbricks.com/wp-content/uploads/2022/03/%E1%84%85%E1%85%B5%E1%86%BC%E1%84%8F%E1%85%B3%E1%84%87%E1%85%B3%E1%84%85%E1%85%B5%E1%86%A8%E1%84%89%E1%85%B3%E1%84%85%E1%85%A9%E1%84%80%E1%85%A9-2-1024x804.png" /> </div> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 박사가 <br> gemma-2-27b-it 베이스모델을 사용해서 H100-80G 8개를 통해 약 50%정도의 파라미터를 한국어 CPT(Continued-Pretraining)->SFT->DPO 한 한글 언어 모델<br> 9천만건의 한글 뉴스 코퍼스를 기준으로 다양한 테스크별 한국어-중국어-영어-일본어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q4 <br> -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q8 <br><br> Finetuned by Mr. 
Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics. <br> A Korean language model based on gemma-2-27b-it, with roughly 50% of its parameters updated through Korean CPT (Continued-Pretraining) -> SFT -> DPO on 8 H100-80G GPUs. <br> It was trained on a 90M-document Korean news corpus together with Korean-Chinese-English-Japanese cross-lingual training data, math data, and logic-judgment data for various tasks, so that it supports cross-lingual augmentation and can handle complex Korean logic and math problems. <br> -The tokenizer is the base model's, without vocabulary expansion<br> -Strengthened for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -128k context window<br> -Deepspeed Stage=3, using rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
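For reference, the quantized files listed in the table at the top of this card can also be fetched individually from Python. The sketch below is only an illustration using the `huggingface_hub` client; the repo id and file name are taken from the table above, and the choice of the Q4_K_M file is arbitrary.

```python
# Illustrative sketch: fetch one of the GGUF quants listed in the table above.
# The choice of Q4_K_M is arbitrary; any file name from the table works the same way.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf",
    filename="Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_M.gguf",
)
print(local_path)  # path to the downloaded file in the local Hugging Face cache
```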
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Linkbricks-Horizon-AI-Korean-Pro-27B - GGUF - Model creator: https://huggingface.co/Saxo/ - Original model: https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Pro-27B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q2_K.gguf) | Q2_K | 9.73GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_S.gguf) | Q3_K_S | 11.33GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K.gguf) | Q3_K | 12.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_M.gguf) | Q3_K_M | 12.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q3_K_L.gguf) | Q3_K_L | 13.52GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_XS.gguf) | IQ4_XS | 13.92GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_0.gguf) | Q4_0 | 14.56GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.IQ4_NL.gguf) | IQ4_NL | 14.65GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_S.gguf) | Q4_K_S | 14.66GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K.gguf) | Q4_K | 15.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_K_M.gguf) | Q4_K_M | 15.5GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q4_1.gguf) | Q4_1 | 16.07GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_0.gguf) | Q5_0 | 17.59GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_S.gguf) | Q5_K_S | 17.59GB | | 
[Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K.gguf) | Q5_K | 18.08GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_K_M.gguf) | Q5_K_M | 18.08GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q5_1.gguf) | Q5_1 | 19.1GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q6_K.gguf) | Q6_K | 20.81GB | | [Linkbricks-Horizon-AI-Korean-Pro-27B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Saxo_-_Linkbricks-Horizon-AI-Korean-Pro-27B-gguf/blob/main/Linkbricks-Horizon-AI-Korean-Pro-27B.Q8_0.gguf) | Q8_0 | 26.95GB | Original model description: --- library_name: transformers license: apache-2.0 base_model: google/gemma-2-27b-it datasets: - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled - Saxo/ko-news-corpus-1 - Saxo/ko-news-corpus-2 - Saxo/ko-news-corpus-3 - Saxo/ko-news-corpus-4 - Saxo/ko-news-corpus-5 - Saxo/ko-news-corpus-6 - Saxo/ko-news-corpus-7 - Saxo/ko-news-corpus-8 - Saxo/ko-news-corpus-9 - maywell/ko_Ultrafeedback_binarized - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo - lilacai/glaive-function-calling-v2-sharegpt - kuotient/gsm8k-ko language: - ko - en - jp - cn pipeline_tag: text-generation --- # Model Card for Model ID <div align="center"> <img src="https://www.linkbricks.com/wp-content/uploads/2022/03/%E1%84%85%E1%85%B5%E1%86%BC%E1%84%8F%E1%85%B3%E1%84%87%E1%85%B3%E1%84%85%E1%85%B5%E1%86%A8%E1%84%89%E1%85%B3%E1%84%85%E1%85%A9%E1%84%80%E1%85%A9-2-1024x804.png" /> </div> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 박사가 <br> gemma-2-27b-it 베이스모델을 사용해서 H100-80G 8개를 통해 약 50%정도의 파라미터를 한국어 CPT(Continued-Pretraining)->SFT->DPO 한 한글 언어 모델<br> 9천만건의 한글 뉴스 코퍼스를 기준으로 다양한 테스크별 한국어-중국어-영어-일본어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q4 <br> -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q8 <br><br> Finetuned by Mr. 
Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br> about 50% of total parameters Korean CPT(Continued-Pretraining)->SFT->DPO training model based on gemma-2-27b-it through 8 H100-80Gs as a Korean language model <br> It is a model that has been trained to handle Korean-Chinese-English-Japanese cross-training data and 90M korean news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br> -Tokenizer uses the base model without word expansion<br> -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -128k-Context Window<br> -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
{}
task
[ "TRANSLATION", "SUMMARIZATION" ]
43,640
Helsinki-NLP/opus-mt-ja-ms
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "ms", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:59:16+00:00
64
0
--- language: - ja - ms license: apache-2.0 tags: - translation --- ### jpn-msa * source group: Japanese * target group: Malay (macrolanguage) * OPUS readme: [jpn-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana * target language(s): ind zlm_Latn zsm_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.msa | 21.5 | 0.469 | ### System Info: - hf_name: jpn-msa - source_languages: jpn - target_languages: msa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'ms'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: msa - short_pair: ja-ms - chrF2_score: 0.469 - bleu: 21.5 - brevity_penalty: 0.9259999999999999 - ref_len: 17028.0 - src_name: Japanese - tgt_name: Malay (macrolanguage) - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: ms - prefer_old: False - long_pair: jpn-msa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
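As noted above, the model expects a sentence-initial target-language token of the form `>>id<<`. The minimal usage sketch below is not part of the original OPUS-MT card; the choice of `>>zsm_Latn<<` as the target token (one of the valid IDs listed above) and the example sentence are for illustration only.

```python
# Minimal usage sketch (not from the original card): Japanese -> Malay translation.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target language (ind, zlm_Latn or zsm_Latn).
text = ">>zsm_Latn<< 私は学生です。"
batch = tokenizer([text], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```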
null
Non_BioNLP
### jpn-msa * source group: Japanese * target group: Malay (macrolanguage) * OPUS readme: [jpn-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana * target language(s): ind zlm_Latn zsm_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.msa | 21.5 | 0.469 | ### System Info: - hf_name: jpn-msa - source_languages: jpn - target_languages: msa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'ms'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: msa - short_pair: ja-ms - chrF2_score: 0.469 - bleu: 21.5 - brevity_penalty: 0.9259999999999999 - ref_len: 17028.0 - src_name: Japanese - tgt_name: Malay (macrolanguage) - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: ms - prefer_old: False - long_pair: jpn-msa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "ms"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
43,641
tensorblock/SmolMaestra-1.7b-Translation-GGUF
tensorblock
translation
[ "transformers", "gguf", "translation", "TensorBlock", "GGUF", "en", "it", "base_model:puettmann/SmolMaestra-1.7b-Translation", "base_model:quantized:puettmann/SmolMaestra-1.7b-Translation", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-01-01T15:02:44Z
2025-03-06T02:55:13+00:00
404
1
--- base_model: puettmann/SmolMaestra-1.7b-Translation language: - en - it library_name: transformers license: apache-2.0 tags: - translation - TensorBlock - GGUF --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## puettmann/SmolMaestra-1.7b-Translation - GGUF This repo contains GGUF format model files for [puettmann/SmolMaestra-1.7b-Translation](https://huggingface.co/puettmann/SmolMaestra-1.7b-Translation). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [SmolMaestra-1.7b-Translation-Q2_K.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q2_K.gguf) | Q2_K | 0.675 GB | smallest, significant quality loss - not recommended for most purposes | | [SmolMaestra-1.7b-Translation-Q3_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_S.gguf) | Q3_K_S | 0.777 GB | very small, high quality loss | | [SmolMaestra-1.7b-Translation-Q3_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_M.gguf) | Q3_K_M | 0.860 GB | very small, high quality loss | | [SmolMaestra-1.7b-Translation-Q3_K_L.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_L.gguf) | Q3_K_L | 0.933 GB | small, substantial quality loss | | [SmolMaestra-1.7b-Translation-Q4_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_0.gguf) | Q4_0 | 0.991 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [SmolMaestra-1.7b-Translation-Q4_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_K_S.gguf) | Q4_K_S | 0.999 GB | small, greater quality loss | | [SmolMaestra-1.7b-Translation-Q4_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_K_M.gguf) | Q4_K_M | 1.056 GB | medium, balanced quality - recommended | | 
[SmolMaestra-1.7b-Translation-Q5_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_0.gguf) | Q5_0 | 1.192 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [SmolMaestra-1.7b-Translation-Q5_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_K_S.gguf) | Q5_K_S | 1.192 GB | large, low quality loss - recommended | | [SmolMaestra-1.7b-Translation-Q5_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_K_M.gguf) | Q5_K_M | 1.225 GB | large, very low quality loss - recommended | | [SmolMaestra-1.7b-Translation-Q6_K.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q6_K.gguf) | Q6_K | 1.406 GB | very large, extremely low quality loss | | [SmolMaestra-1.7b-Translation-Q8_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q8_0.gguf) | Q8_0 | 1.820 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line First, install the Hugging Face Hub client ```shell pip install -U "huggingface_hub[cli]" ``` Then, download the individual model file to a local directory ```shell huggingface-cli download tensorblock/SmolMaestra-1.7b-Translation-GGUF --include "SmolMaestra-1.7b-Translation-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/SmolMaestra-1.7b-Translation-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
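Once a file has been downloaded, it can be loaded with any llama.cpp-compatible runtime. The sketch below uses the `llama-cpp-python` bindings and is only an illustration: the local file name, context size and system prompt are assumptions, not part of this repository.

```python
# Illustrative sketch: run a downloaded GGUF quant with the llama-cpp-python bindings.
# File name, context size and system prompt are assumptions, not part of this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/SmolMaestra-1.7b-Translation-Q4_K_M.gguf",
    n_ctx=2048,  # assumed context window; adjust as needed
)

# Wrap the request in the prompt template shown above.
prompt = (
    "<|im_start|>system\nTranslate the user's text from English to Italian.<|im_end|>\n"
    "<|im_start|>user\nGood morning!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```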
null
Non_BioNLP
<div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## puettmann/SmolMaestra-1.7b-Translation - GGUF This repo contains GGUF format model files for [puettmann/SmolMaestra-1.7b-Translation](https://huggingface.co/puettmann/SmolMaestra-1.7b-Translation). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [SmolMaestra-1.7b-Translation-Q2_K.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q2_K.gguf) | Q2_K | 0.675 GB | smallest, significant quality loss - not recommended for most purposes | | [SmolMaestra-1.7b-Translation-Q3_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_S.gguf) | Q3_K_S | 0.777 GB | very small, high quality loss | | [SmolMaestra-1.7b-Translation-Q3_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_M.gguf) | Q3_K_M | 0.860 GB | very small, high quality loss | | [SmolMaestra-1.7b-Translation-Q3_K_L.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q3_K_L.gguf) | Q3_K_L | 0.933 GB | small, substantial quality loss | | [SmolMaestra-1.7b-Translation-Q4_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_0.gguf) | Q4_0 | 0.991 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [SmolMaestra-1.7b-Translation-Q4_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_K_S.gguf) | Q4_K_S | 0.999 GB | small, greater quality loss | | [SmolMaestra-1.7b-Translation-Q4_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q4_K_M.gguf) | Q4_K_M | 1.056 GB | medium, balanced quality - recommended | | [SmolMaestra-1.7b-Translation-Q5_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_0.gguf) | Q5_0 | 1.192 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[SmolMaestra-1.7b-Translation-Q5_K_S.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_K_S.gguf) | Q5_K_S | 1.192 GB | large, low quality loss - recommended | | [SmolMaestra-1.7b-Translation-Q5_K_M.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q5_K_M.gguf) | Q5_K_M | 1.225 GB | large, very low quality loss - recommended | | [SmolMaestra-1.7b-Translation-Q6_K.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q6_K.gguf) | Q6_K | 1.406 GB | very large, extremely low quality loss | | [SmolMaestra-1.7b-Translation-Q8_0.gguf](https://huggingface.co/tensorblock/SmolMaestra-1.7b-Translation-GGUF/blob/main/SmolMaestra-1.7b-Translation-Q8_0.gguf) | Q8_0 | 1.820 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line Firstly, install Huggingface Client ```shell pip install -U "huggingface_hub[cli]" ``` Then, downoad the individual model file the a local directory ```shell huggingface-cli download tensorblock/SmolMaestra-1.7b-Translation-GGUF --include "SmolMaestra-1.7b-Translation-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/SmolMaestra-1.7b-Translation-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
{"base_model": "puettmann/SmolMaestra-1.7b-Translation", "language": ["en", "it"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "TensorBlock", "GGUF"]}
task
[ "TRANSLATION" ]
43,642
ymoslem/ModernBERT-base-qe-v1
ymoslem
text-classification
[ "transformers", "tensorboard", "safetensors", "modernbert", "text-classification", "quality-estimation", "regression", "generated_from_trainer", "multilingual", "bn", "cs", "de", "en", "et", "fi", "fr", "gu", "ha", "hi", "is", "ja", "kk", "km", "lt", "lv", "pl", "ps", "ru", "ta", "tr", "uk", "xh", "zh", "zu", "dataset:ymoslem/tokenized-wmt-da-human-evaluation", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-28T16:51:54Z
2025-01-29T06:35:38+00:00
13
0
--- base_model: answerdotai/ModernBERT-base datasets: - ymoslem/tokenized-wmt-da-human-evaluation language: - multilingual - bn - cs - de - en - et - fi - fr - gu - ha - hi - is - ja - kk - km - lt - lv - pl - ps - ru - ta - tr - uk - xh - zh - zu library_name: transformers license: apache-2.0 metrics: - pearsonr - mae - r_squared tags: - quality-estimation - regression - generated_from_trainer model-index: - name: Quality Estimation for Machine Translation results: - task: type: regression dataset: name: ymoslem/wmt-da-human-evaluation-long-context type: QE metrics: - type: Pearson Correlation value: 0.4465 name: Pearson - type: Mean Absolute Error value: 0.126 name: MAE - type: Root Mean Squared Error value: 0.1623 name: RMSE - type: R-Squared value: 0.0801 name: R-R2 - type: Pearson Correlation name: Pearson - type: Mean Absolute Error name: MAE - type: Root Mean Squared Error name: RMSE - type: R-Squared name: R-R2 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Quality Estimation for Machine Translation This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the ymoslem/tokenized-wmt-da-human-evaluation dataset. It achieves the following results on the evaluation set: - Loss: 0.0571 ## Model description This model is for reference-free, sentence level quality estimation (QE) of machine translation (MT) systems. The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1), which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context) ## Training and evaluation data This model is trained on the sentence-level quality estimation dataset: [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation) ## Training procedure This version of the model uses the full lengthtokenizer.model_max_length=8192, but it is still trained on a sentence-level QE dataset [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation) The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1), which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 0.0686 | 0.1004 | 1000 | 0.0712 | | 0.0652 | 0.2007 | 2000 | 0.0687 | | 0.0648 | 0.3011 | 3000 | 0.0623 | | 0.0609 | 0.4015 | 4000 | 0.0600 | | 0.0585 | 0.5019 | 5000 | 0.0603 | | 0.0588 | 0.6022 | 6000 | 0.0589 | | 0.0592 | 0.7026 | 7000 | 0.0581 | | 0.0585 | 0.8030 | 8000 | 0.0574 | | 0.0588 | 0.9033 | 9000 | 0.0572 | | 0.0563 | 1.0037 | 10000 | 
0.0571 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.4.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
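A minimal inference sketch for scoring a single source/translation pair is given below. It is not from the original card: passing the source and the machine translation as a tokenizer sentence pair and reading the raw regression logit as the quality score are assumptions that may not match the exact training setup.

```python
# Hedged sketch (not from the original card): score one source/translation pair.
# The pairing of source and MT as a sentence pair is an assumption about the input format.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ymoslem/ModernBERT-base-qe-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

source = "Dieses Modell schätzt die Qualität einer maschinellen Übersetzung."
translation = "This model estimates the quality of a machine translation."

inputs = tokenizer(source, translation, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted quality score: {score:.3f}")
```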
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Quality Estimation for Machine Translation This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the ymoslem/tokenized-wmt-da-human-evaluation dataset. It achieves the following results on the evaluation set: - Loss: 0.0571 ## Model description This model is for reference-free, sentence level quality estimation (QE) of machine translation (MT) systems. The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1), which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context) ## Training and evaluation data This model is trained on the sentence-level quality estimation dataset: [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation) ## Training procedure This version of the model uses the full lengthtokenizer.model_max_length=8192, but it is still trained on a sentence-level QE dataset [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation) The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1), which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 0.0686 | 0.1004 | 1000 | 0.0712 | | 0.0652 | 0.2007 | 2000 | 0.0687 | | 0.0648 | 0.3011 | 3000 | 0.0623 | | 0.0609 | 0.4015 | 4000 | 0.0600 | | 0.0585 | 0.5019 | 5000 | 0.0603 | | 0.0588 | 0.6022 | 6000 | 0.0589 | | 0.0592 | 0.7026 | 7000 | 0.0581 | | 0.0585 | 0.8030 | 8000 | 0.0574 | | 0.0588 | 0.9033 | 9000 | 0.0572 | | 0.0563 | 1.0037 | 10000 | 0.0571 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.4.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
{"base_model": "answerdotai/ModernBERT-base", "datasets": ["ymoslem/tokenized-wmt-da-human-evaluation"], "language": ["multilingual", "bn", "cs", "de", "en", "et", "fi", "fr", "gu", "ha", "hi", "is", "ja", "kk", "km", "lt", "lv", "pl", "ps", "ru", "ta", "tr", "uk", "xh", "zh", "zu"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["pearsonr", "mae", "r_squared"], "tags": ["quality-estimation", "regression", "generated_from_trainer"], "model-index": [{"name": "Quality Estimation for Machine Translation", "results": [{"task": {"type": "regression"}, "dataset": {"name": "ymoslem/wmt-da-human-evaluation-long-context", "type": "QE"}, "metrics": [{"type": "Pearson Correlation", "value": 0.4465, "name": "Pearson"}, {"type": "Mean Absolute Error", "value": 0.126, "name": "MAE"}, {"type": "Root Mean Squared Error", "value": 0.1623, "name": "RMSE"}, {"type": "R-Squared", "value": 0.0801, "name": "R-R2"}, {"type": "Pearson Correlation", "name": "Pearson"}, {"type": "Mean Absolute Error", "name": "MAE"}, {"type": "Root Mean Squared Error", "name": "RMSE"}, {"type": "R-Squared", "name": "R-R2"}]}]}]}
task
[ "TRANSLATION" ]
43,643
LoreWeaver-Company/Mistral-7B-LoreWeaver
LoreWeaver-Company
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
2023-12-29T19:18:24Z
2023-12-29T19:38:25+00:00
0
1
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft license: apache-2.0 --- # Model Card for Model ID Our finetuned Mistral LLM is a large language model specialized for natural language processing tasks, delivering enhanced performance for a wide array of applications, including text classification, question-answering, chatbot services, and more. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Basel Anaya, Osama Awad, Yazeed Mshayekh - **Funded by [optional]:** Basel Anaya, Osama Awad, Yazeed Mshayekh - **Model type:** Autoregressive Language Model - **Language(s) (NLP):** English - **License:** MIT License - **Finetuned from model:** MistralAI's Mistral-7B ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. ### Direct Use Users can leverage the finetuned Mistral LLM for various NLP tasks right out-of-the-box. Simply interact with the API or load the model locally to experience superior language understanding and generation capabilities. Ideal for developers seeking rapid prototyping and deployment of conversational AI applications. ### Downstream Use [optional] Integrate the finetuned Mistral LLM effortlessly into custom applications and pipelines. Utilize the model as a starting point for further refinement, targeting industry-specific lingo, niches, or particular use cases. Seamless compatibility ensures smooth collaboration with adjacent technologies and services. ### Out-of-Scope Use Limitations exist concerning controversial topics, sensitive data, and scenarios demanding real-time responses. Users should exercise caution when deploying the model in safety-critical situations or regions with strict compliance regulations. Avoid sharing confidential or personally identifiable information with the model. ## Bias, Risks, and Limitations Address both technical and sociotechnical limitations. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Further recommendations include cautious assessment of ethical implications, ongoing maintenance, periodic evaluations, and responsible reporting practices. ## How to Get Started with the Model Use the code below to get started with the model. 
```python
import torch
from transformers import pipeline, AutoTokenizer

# Load the finetuned Mistral LLM
model_name = "Reverb/Mistral-7B-LoreWeaver"
tokenizer = AutoTokenizer.from_pretrained(model_name)
generator = pipeline("text-generation", model=model_name, tokenizer=tokenizer)

# Example usage
input_text = "Once upon a time,"
num_generated_tokens = 50
response = generator(input_text, max_length=num_generated_tokens, num_return_sequences=1)
print(f"Generated text:\n{response[0]['generated_text']}")

# Alternatively, for fine-grained control over the generation process, call generate()
# on the underlying model (the pipeline object itself has no generate method)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = generator.model.generate(
    inputs["input_ids"].to(generator.model.device),
    max_length=num_generated_tokens,
    num_beams=5,
    early_stopping=True,
    temperature=1.2,
)
generated_sentence = tokenizer.decode(outputs[0])
print(f"\nGenerated text with beam search and custom params:\n{generated_sentence}")
```
## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
null
Non_BioNLP
# Model Card for Model ID Our finetuned Mistral LLM is a large language model specialized for natural language processing tasks, delivering enhanced performance for a wide array of applications, including text classification, question-answering, chatbot services, and more. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Basel Anaya, Osama Awad, Yazeed Mshayekh - **Funded by [optional]:** Basel Anaya, Osama Awad, Yazeed Mshayekh - **Model type:** Autoregressive Language Model - **Language(s) (NLP):** English - **License:** MIT License - **Finetuned from model:** MistralAI's Mistral-7B ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. ### Direct Use Users can leverage the finetuned Mistral LLM for various NLP tasks right out-of-the-box. Simply interact with the API or load the model locally to experience superior language understanding and generation capabilities. Ideal for developers seeking rapid prototyping and deployment of conversational AI applications. ### Downstream Use [optional] Integrate the finetuned Mistral LLM effortlessly into custom applications and pipelines. Utilize the model as a starting point for further refinement, targeting industry-specific lingo, niches, or particular use cases. Seamless compatibility ensures smooth collaboration with adjacent technologies and services. ### Out-of-Scope Use Limitations exist concerning controversial topics, sensitive data, and scenarios demanding real-time responses. Users should exercise caution when deploying the model in safety-critical situations or regions with strict compliance regulations. Avoid sharing confidential or personally identifiable information with the model. ## Bias, Risks, and Limitations Address both technical and sociotechnical limitations. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Further recommendations include cautious assessment of ethical implications, ongoing maintenance, periodic evaluations, and responsible reporting practices. ## How to Get Started with the Model Use the code below to get started with the model. 
```python import torch from transformers import pipeline, AutoTokenizer # Load the finetuned Mistral LLM model_name = "Reverb/Mistral-7B-LoreWeaver" tokenizer = AutoTokenizer.from_pretrained(model_name) generator = pipeline("text-generation", model=model_name, tokenizer=tokenizer) # Example usage input_text = "Once upon a time," num_generated_tokens = 50 response = generator(input_text, max_length=num_generated_tokens, num_return_sequences=1) print(f"Generated text:\n{response[0]['generated_text']}") # Alternatively, for fine-grained control over the generation process inputs = tokenizer(input_text, return_tensors="pt") outputs = generator.generate( inputs["input_ids"].to("cuda"), max_length=num_generated_tokens, num_beams=5, early_stopping=True, temperature=1.2, ) generated_sentence = tokenizer.decode(outputs[0]) print(f"\nGenerated text with beam search and custom params:\n{generated_sentence}") ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"base_model": "mistralai/Mistral-7B-v0.1", "library_name": "peft", "license": "apache-2.0"}
task
[ "TEXT_CLASSIFICATION" ]
43,644
PORTULAN/albertina-100m-portuguese-ptpt-encoder
PORTULAN
fill-mask
[ "transformers", "pytorch", "safetensors", "deberta", "fill-mask", "albertina-pt*", "albertina-ptpt", "albertina-ptbr", "albertina-ptpt-base", "albertina-ptbr-base", "bert", "portuguese", "encoder", "foundation model", "pt", "dataset:dlb/plue", "dataset:oscar-corpus/OSCAR-2301", "dataset:PORTULAN/glue-ptpt", "arxiv:2403.01897", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-25T09:05:12Z
2024-03-05T10:00:41+00:00
2,318
4
--- datasets: - dlb/plue - oscar-corpus/OSCAR-2301 - PORTULAN/glue-ptpt language: - pt license: mit tags: - albertina-pt* - albertina-ptpt - albertina-ptbr - albertina-ptpt-base - albertina-ptbr-base - fill-mask - bert - deberta - portuguese - encoder - foundation model widget: - text: A culinária portuguesa é rica em sabores e [MASK], tornando-se um dos maiores tesouros do país. --- --- <img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png"> <p style="text-align: center;">&nbsp;&nbsp;&nbsp;&nbsp;This is the model card for Albertina PT-PT base. You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>. </p> --- # Albertina 100M PTPT **Albertina 100M PTPT** is a foundation, large language model for European **Portuguese** from **Portugal**. It is an **encoder** of the BERT family, based on the neural architecture Transformer and developed over the DeBERTa model, with most competitive performance for this language. It is distributed free of charge and under a most permissible license. | Albertina's Family of Models | |----------------------------------------------------------------------------------------------------------| | [**Albertina 1.5B PTPT**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptpt-encoder) | | [**Albertina 1.5B PTBR**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptbr-encoder) | | [**Albertina 1.5B PTPT 256**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptpt-encoder-256)| | [**Albertina 1.5B PTBR 256**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptbr-encoder-256)| | [**Albertina 900M PTPT**](https://huggingface.co/PORTULAN/albertina-900m-portuguese-ptpt-encoder) | | [**Albertina 900M PTBR**](https://huggingface.co/PORTULAN/albertina-900m-portuguese-ptbr-encoder) | | [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) | | [**Albertina 100M PTBR**](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptbr-encoder) | **Albertina 100M PTPT** is developed by a joint team from the University of Lisbon and the University of Porto, Portugal. For further details, check the respective [publication](https://arxiv.org/abs/2403.01897): ``` latex @misc{albertina-pt-fostering, title={Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT-* family}, author={Rodrigo Santos and João Rodrigues and Luís Gomes and João Silva and António Branco and Henrique Lopes Cardoso and Tomás Freitas Osório and Bernardo Leite}, year={2024}, eprint={2403.01897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Please use the above cannonical reference when using or citing this model. <br> # Model Description **This model card is for Albertina 100M PTPT base**, with 100M parameters, 12 layers and a hidden size of 768. Albertina-PT-PT base is distributed under an [MIT license](https://huggingface.co/PORTULAN/albertina-ptpt/blob/main/LICENSE). DeBERTa is distributed under an [MIT license](https://github.com/microsoft/DeBERTa/blob/master/LICENSE). 
<br> # Training Data [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-ptpt-base) was trained over a 2.2 billion token data set that resulted from gathering some openly available corpora of European Portuguese from the following sources: - [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301): the OSCAR data set includes documents in more than one hundred languages, including Portuguese, and it is widely used in the literature. It is the result of a selection performed over the [Common Crawl](https://commoncrawl.org/) data set, crawled from the Web, that retains only pages whose metadata indicates permission to be crawled, that performs deduplication, and that removes some boilerplate, among other filters. Given that it does not discriminate between the Portuguese variants, we performed extra filtering by retaining only documents whose meta-data indicate the Internet country code top-level domain of Portugal. We used the January 2023 version of OSCAR, which is based on the November/December 2022 version of Common Crawl. - [DCEP](https://joint-research-centre.ec.europa.eu/language-technology-resources/dcep-digital-corpus-european-parliament_en): the Digital Corpus of the European Parliament is a multilingual corpus including documents in all official EU languages published on the European Parliament&#39;s official website. We retained its European Portuguese portion. - [Europarl](https://www.statmt.org/europarl/): the European Parliament Proceedings Parallel Corpus is extracted from the proceedings of the European Parliament from 1996 to 2011. We retained its European Portuguese portion. - [ParlamentoPT](https://huggingface.co/datasets/PORTULAN/parlamento-pt): the ParlamentoPT is a data set we obtained by gathering the publicly available documents with the transcription of the debates in the Portuguese Parliament. ## Preprocessing We filtered the PTPT corpora using the [BLOOM pre-processing](https://github.com/bigscience-workshop/data-preparation) pipeline. We skipped the default filtering of stopwords since it would disrupt the syntactic structure, and also the filtering for language identification given the corpus was pre-selected as Portuguese. ## Training As codebase, we resorted to the [DeBERTa V1 base](https://huggingface.co/microsoft/deberta-base), for English. To train [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-ptpt-base), the data set was tokenized with the original DeBERTa tokenizer with a 128 token sequence truncation and dynamic padding. The model was trained using the maximum available memory capacity resulting in a batch size of 3072 samples (192 samples per GPU). We opted for a learning rate of 1e-5 with linear decay and 10k warm-up steps. A total of 200 training epochs were performed resulting in approximately 180k steps. The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1.360 GB of RAM. <br> # Evaluation The base model version was evaluated on downstream tasks, namely the translations into PT-PT of the English data sets used for a few of the tasks in the widely-used [GLUE benchmark](https://huggingface.co/datasets/glue). ## GLUE tasks translated We resorted to [GLUE-PT](https://huggingface.co/datasets/PORTULAN/glue-ptpt), a **PTPT version of the GLUE** benchmark. We automatically translated the same four tasks from GLUE using [DeepL Translate](https://www.deepl.com/), which specifically provides translation from English to PT-PT as an option. 
| Model | RTE (Accuracy) | WNLI (Accuracy)| MRPC (F1) | STS-B (Pearson) | |--------------------------|----------------|----------------|-----------|-----------------| | **Albertina 900m PTPT** | **0.8339** | 0.4225 | **0.9171**| **0.8801** | | **Albertina 100m PTPT** | 0.6787 | **0.4507** | 0.8829 | 0.8581 | <br> # How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='PORTULAN/albertina-ptpt-base') >>> unmasker("A culinária portuguesa é rica em sabores e [MASK], tornando-se um dos maiores tesouros do país.") [{'score': 0.8332648277282715, 'token': 14690, 'token_str': ' costumes', 'sequence': 'A culinária portuguesa é rica em sabores e costumes, tornando-se um dos maiores tesouros do país.'}, {'score': 0.07860890030860901, 'token': 29829, 'token_str': ' cores', 'sequence': 'A culinária portuguesa é rica em sabores e cores, tornando-se um dos maiores tesouros do país.'}, {'score': 0.03278181701898575, 'token': 35277, 'token_str': ' arte', 'sequence': 'A culinária portuguesa é rica em sabores e arte, tornando-se um dos maiores tesouros do país.'}, {'score': 0.009515956044197083, 'token': 9240, 'token_str': ' cor', 'sequence': 'A culinária portuguesa é rica em sabores e cor, tornando-se um dos maiores tesouros do país.'}, {'score': 0.009381960146129131, 'token': 33455, 'token_str': ' nuances', 'sequence': 'A culinária portuguesa é rica em sabores e nuances, tornando-se um dos maiores tesouros do país.'}] ``` The model can be used by fine-tuning it for a specific task: ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer >>> from datasets import load_dataset >>> model = AutoModelForSequenceClassification.from_pretrained("PORTULAN/albertina-ptpt-base", num_labels=2) >>> tokenizer = AutoTokenizer.from_pretrained("PORTULAN/albertina-ptpt-base") >>> dataset = load_dataset("PORTULAN/glue-ptpt", "rte") >>> def tokenize_function(examples): ... return tokenizer(examples["sentence1"], examples["sentence2"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) >>> training_args = TrainingArguments(output_dir="albertina-ptpt-rte", evaluation_strategy="epoch") >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_datasets["train"], ... eval_dataset=tokenized_datasets["validation"], ... 
) >>> trainer.train() ``` <br> # Citation When using or citing this model, kindly cite the following [publication](https://arxiv.org/abs/2403.01897): ``` latex @misc{albertina-pt-fostering, title={Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT-* family}, author={Rodrigo Santos and João Rodrigues and Luís Gomes and João Silva and António Branco and Henrique Lopes Cardoso and Tomás Freitas Osório and Bernardo Leite}, year={2024}, eprint={2403.01897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <br> # Acknowledgments The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.
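As a complement to the RTE fine-tuning example in the "How to use" section above, the following is a minimal, hypothetical sketch of an accuracy callback (it assumes the `evaluate` package is installed) that could be passed to the `Trainer` as `compute_metrics`:

```python
import numpy as np
import evaluate

# Load the accuracy metric once and reuse it on every evaluation pass
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) tuple for the evaluation split
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Hypothetical usage: Trainer(..., compute_metrics=compute_metrics)
```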
null
Non_BioNLP
--- <img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png"> <p style="text-align: center;">&nbsp;&nbsp;&nbsp;&nbsp;This is the model card for Albertina PT-PT base. You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>. </p> --- # Albertina 100M PTPT **Albertina 100M PTPT** is a foundation large language model for European **Portuguese** from **Portugal**. It is an **encoder** of the BERT family, based on the Transformer neural architecture and developed over the DeBERTa model, with highly competitive performance for this language. It is distributed free of charge and under a most permissive license. | Albertina's Family of Models | |----------------------------------------------------------------------------------------------------------| | [**Albertina 1.5B PTPT**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptpt-encoder) | | [**Albertina 1.5B PTBR**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptbr-encoder) | | [**Albertina 1.5B PTPT 256**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptpt-encoder-256)| | [**Albertina 1.5B PTBR 256**](https://huggingface.co/PORTULAN/albertina-1b5-portuguese-ptbr-encoder-256)| | [**Albertina 900M PTPT**](https://huggingface.co/PORTULAN/albertina-900m-portuguese-ptpt-encoder) | | [**Albertina 900M PTBR**](https://huggingface.co/PORTULAN/albertina-900m-portuguese-ptbr-encoder) | | [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) | | [**Albertina 100M PTBR**](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptbr-encoder) | **Albertina 100M PTPT** is developed by a joint team from the University of Lisbon and the University of Porto, Portugal. For further details, check the respective [publication](https://arxiv.org/abs/2403.01897): ``` latex @misc{albertina-pt-fostering, title={Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT-* family}, author={Rodrigo Santos and João Rodrigues and Luís Gomes and João Silva and António Branco and Henrique Lopes Cardoso and Tomás Freitas Osório and Bernardo Leite}, year={2024}, eprint={2403.01897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Please use the above canonical reference when using or citing this model. <br> # Model Description **This model card is for Albertina 100M PTPT base**, with 100M parameters, 12 layers and a hidden size of 768. Albertina-PT-PT base is distributed under an [MIT license](https://huggingface.co/PORTULAN/albertina-ptpt/blob/main/LICENSE). DeBERTa is distributed under an [MIT license](https://github.com/microsoft/DeBERTa/blob/master/LICENSE). <br> # Training Data [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-ptpt-base) was trained over a 2.2 billion token data set that resulted from gathering some openly available corpora of European Portuguese from the following sources: - [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301): the OSCAR data set includes documents in more than one hundred languages, including Portuguese, and it is widely used in the literature. It is the result of a selection performed over the [Common Crawl](https://commoncrawl.org/) data set, crawled from the Web, that retains only pages whose metadata indicates permission to be crawled, that performs deduplication, and that removes some boilerplate, among other filters. 
Given that it does not discriminate between the Portuguese variants, we performed extra filtering by retaining only documents whose meta-data indicate the Internet country code top-level domain of Portugal. We used the January 2023 version of OSCAR, which is based on the November/December 2022 version of Common Crawl. - [DCEP](https://joint-research-centre.ec.europa.eu/language-technology-resources/dcep-digital-corpus-european-parliament_en): the Digital Corpus of the European Parliament is a multilingual corpus including documents in all official EU languages published on the European Parliament&#39;s official website. We retained its European Portuguese portion. - [Europarl](https://www.statmt.org/europarl/): the European Parliament Proceedings Parallel Corpus is extracted from the proceedings of the European Parliament from 1996 to 2011. We retained its European Portuguese portion. - [ParlamentoPT](https://huggingface.co/datasets/PORTULAN/parlamento-pt): the ParlamentoPT is a data set we obtained by gathering the publicly available documents with the transcription of the debates in the Portuguese Parliament. ## Preprocessing We filtered the PTPT corpora using the [BLOOM pre-processing](https://github.com/bigscience-workshop/data-preparation) pipeline. We skipped the default filtering of stopwords since it would disrupt the syntactic structure, and also the filtering for language identification given the corpus was pre-selected as Portuguese. ## Training As codebase, we resorted to the [DeBERTa V1 base](https://huggingface.co/microsoft/deberta-base), for English. To train [**Albertina 100M PTPT**](https://huggingface.co/PORTULAN/albertina-ptpt-base), the data set was tokenized with the original DeBERTa tokenizer with a 128 token sequence truncation and dynamic padding. The model was trained using the maximum available memory capacity resulting in a batch size of 3072 samples (192 samples per GPU). We opted for a learning rate of 1e-5 with linear decay and 10k warm-up steps. A total of 200 training epochs were performed resulting in approximately 180k steps. The model was trained for one day on a2-megagpu-16gb Google Cloud A2 VMs with 16 GPUs, 96 vCPUs and 1.360 GB of RAM. <br> # Evaluation The base model version was evaluated on downstream tasks, namely the translations into PT-PT of the English data sets used for a few of the tasks in the widely-used [GLUE benchmark](https://huggingface.co/datasets/glue). ## GLUE tasks translated We resorted to [GLUE-PT](https://huggingface.co/datasets/PORTULAN/glue-ptpt), a **PTPT version of the GLUE** benchmark. We automatically translated the same four tasks from GLUE using [DeepL Translate](https://www.deepl.com/), which specifically provides translation from English to PT-PT as an option. 
| Model | RTE (Accuracy) | WNLI (Accuracy)| MRPC (F1) | STS-B (Pearson) | |--------------------------|----------------|----------------|-----------|-----------------| | **Albertina 900m PTPT** | **0.8339** | 0.4225 | **0.9171**| **0.8801** | | **Albertina 100m PTPT** | 0.6787 | **0.4507** | 0.8829 | 0.8581 | <br> # How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='PORTULAN/albertina-ptpt-base') >>> unmasker("A culinária portuguesa é rica em sabores e [MASK], tornando-se um dos maiores tesouros do país.") [{'score': 0.8332648277282715, 'token': 14690, 'token_str': ' costumes', 'sequence': 'A culinária portuguesa é rica em sabores e costumes, tornando-se um dos maiores tesouros do país.'}, {'score': 0.07860890030860901, 'token': 29829, 'token_str': ' cores', 'sequence': 'A culinária portuguesa é rica em sabores e cores, tornando-se um dos maiores tesouros do país.'}, {'score': 0.03278181701898575, 'token': 35277, 'token_str': ' arte', 'sequence': 'A culinária portuguesa é rica em sabores e arte, tornando-se um dos maiores tesouros do país.'}, {'score': 0.009515956044197083, 'token': 9240, 'token_str': ' cor', 'sequence': 'A culinária portuguesa é rica em sabores e cor, tornando-se um dos maiores tesouros do país.'}, {'score': 0.009381960146129131, 'token': 33455, 'token_str': ' nuances', 'sequence': 'A culinária portuguesa é rica em sabores e nuances, tornando-se um dos maiores tesouros do país.'}] ``` The model can be used by fine-tuning it for a specific task: ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer >>> from datasets import load_dataset >>> model = AutoModelForSequenceClassification.from_pretrained("PORTULAN/albertina-ptpt-base", num_labels=2) >>> tokenizer = AutoTokenizer.from_pretrained("PORTULAN/albertina-ptpt-base") >>> dataset = load_dataset("PORTULAN/glue-ptpt", "rte") >>> def tokenize_function(examples): ... return tokenizer(examples["sentence1"], examples["sentence2"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) >>> training_args = TrainingArguments(output_dir="albertina-ptpt-rte", evaluation_strategy="epoch") >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_datasets["train"], ... eval_dataset=tokenized_datasets["validation"], ... 
) >>> trainer.train() ``` <br> # Citation When using or citing this model, kindly cite the following [publication](https://arxiv.org/abs/2403.01897): ``` latex @misc{albertina-pt-fostering, title={Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT-* family}, author={Rodrigo Santos and João Rodrigues and Luís Gomes and João Silva and António Branco and Henrique Lopes Cardoso and Tomás Freitas Osório and Bernardo Leite}, year={2024}, eprint={2403.01897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <br> # Acknowledgments The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.
{"datasets": ["dlb/plue", "oscar-corpus/OSCAR-2301", "PORTULAN/glue-ptpt"], "language": ["pt"], "license": "mit", "tags": ["albertina-pt*", "albertina-ptpt", "albertina-ptbr", "albertina-ptpt-base", "albertina-ptbr-base", "fill-mask", "bert", "deberta", "portuguese", "encoder", "foundation model"], "widget": [{"text": "A culinária portuguesa é rica em sabores e [MASK], tornando-se um dos maiores tesouros do país."}]}
task
[ "TRANSLATION" ]
43,645
prithivMLmods/Phi-4-QwQ
prithivMLmods
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "phi3", "phi", "conversational", "en", "base_model:microsoft/phi-4", "base_model:finetune:microsoft/phi-4", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-10T22:38:22Z
2025-01-11T16:23:32+00:00
124
8
--- base_model: - microsoft/phi-4 language: - en library_name: transformers license: mit pipeline_tag: text-generation tags: - text-generation-inference - llama - phi3 - phi --- ![3.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/UV-0cgT9xB7-l0GZIiZv7.png) # **Phi-4-QwQ [ Responsible Problem Solving & Advanced Reasoning ]** `[Phi-4-QwQ finetuned]` from Microsoft's Phi-4 is a state-of-the-art open model developed with a focus on **responsible problem solving** and **advanced reasoning capabilities**. Built upon a diverse blend of synthetic datasets, carefully filtered public domain websites, and high-quality academic books and Q&A datasets, Phi-4-QwQ ensures that small, capable models are trained with datasets of exceptional depth and precision. Phi-4-QwQ adopts a robust **safety post-training approach** using open-source and in-house synthetic datasets. This involves a combination of **SFT (Supervised Fine-Tuning)** and iterative **DPO (Direct Preference Optimization)** techniques, ensuring helpful and harmless outputs across various safety categories. --- # **Dataset Info** Phi-4-QwQ is fine-tuned on a carefully curated synthetic dataset generated using an advanced pipeline optimized for **Chain of Thought (CoT)** reasoning and **Responsible Problem Breakdown (RPB)** methodologies. This ensures that the model excels at: - **Logical reasoning** - **Step-by-step problem-solving** - **Breaking down complex tasks into manageable parts** The dataset also emphasizes responsible decision-making and fairness in generating solutions. --- # **Run with Transformers** ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Phi-4-QwQ") model = AutoModelForCausalLM.from_pretrained( "prithivMLmods/Phi-4-QwQ", device_map="auto", torch_dtype=torch.bfloat16, ) input_text = "Explain the concept of black holes." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=64) print(tokenizer.decode(outputs[0])) ``` For chat-style interactions, use `tokenizer.apply_chat_template`: ```python messages = [ {"role": "user", "content": "Explain the concept of black holes."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` # **Intended Use** Phi-4-QwQ is tailored for a wide range of applications, especially those involving **advanced reasoning**, **multilingual capabilities**, and **responsible problem-solving**. Its primary use cases include: 1. **Responsible Problem Solving** - Breaking down complex problems into logical, actionable steps. - Offering ethical, well-rounded solutions in academic and professional contexts. 2. **Advanced Reasoning Tasks** - Excelling in mathematics, logic, and scientific reasoning. - Providing detailed explanations and systematic answers. 3. **Content Generation** - Assisting in generating high-quality content for various domains, including creative writing and technical documentation. - Supporting marketers, writers, and educators with detailed and well-structured outputs. 4. **Educational Support** - Acting as a virtual tutor for students by generating practice questions, answers, and detailed explanations. - Helping educators design learning material that promotes critical thinking and step-by-step problem-solving. 5. 
**Customer Support & Dialogue Systems** - Enabling chatbots and virtual assistants to provide accurate, helpful, and responsible responses. - Enhancing customer service with reasoning-driven automation. 6. **Multilingual Capabilities** - Supporting multilingual communication and content generation while maintaining contextual accuracy. - Assisting in translations with a focus on retaining meaning and nuance. 7. **Safety-Critical Applications** - Ensuring safe and harmless outputs, making it suitable for sensitive domains. - Providing aligned interactions with human oversight for critical systems. --- # **Limitations** Despite its strengths, Phi-4-QwQ has some limitations that users should be aware of: 1. **Bias and Fairness** - While great effort has been made to minimize biases, users should critically assess the model’s output in sensitive scenarios to avoid unintended bias. 2. **Contextual Interpretation** - The model may occasionally misinterpret highly nuanced prompts or ambiguous contexts, leading to suboptimal responses. 3. **Knowledge Cutoff** - Phi-4-QwQ’s knowledge is static and based on the data available at the time of training. It does not include real-time updates or information on recent developments. 4. **Safety and Harmlessness** - Despite post-training safety alignment, inappropriate or harmful outputs may still occur. Continuous monitoring and human oversight are advised when using the model in critical contexts. 5. **Computational Requirements** - Deploying Phi-4-QwQ efficiently may require substantial computational resources, particularly for large-scale deployments or real-time applications. 6. **Ethical Considerations** - Users are responsible for ensuring that the model is not employed for malicious purposes, such as spreading misinformation, generating harmful content, or facilitating unethical behavior. 7. **Domain-Specific Expertise** - While the model is versatile, it may not perform optimally in highly specialized domains (e.g., law, medicine, finance) without further domain-specific fine-tuning.
null
TBD
![3.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/UV-0cgT9xB7-l0GZIiZv7.png) # **Phi-4-QwQ [ Responsible Problem Solving & Advanced Reasoning ]** `[Phi-4-QwQ finetuned]` from Microsoft's Phi-4 is a state-of-the-art open model developed with a focus on **responsible problem solving** and **advanced reasoning capabilities**. Built upon a diverse blend of synthetic datasets, carefully filtered public domain websites, and high-quality academic books and Q&A datasets, Phi-4-QwQ ensures that small, capable models are trained with datasets of exceptional depth and precision. Phi-4-QwQ adopts a robust **safety post-training approach** using open-source and in-house synthetic datasets. This involves a combination of **SFT (Supervised Fine-Tuning)** and iterative **DPO (Direct Preference Optimization)** techniques, ensuring helpful and harmless outputs across various safety categories. --- # **Dataset Info** Phi-4-QwQ is fine-tuned on a carefully curated synthetic dataset generated using an advanced pipeline optimized for **Chain of Thought (CoT)** reasoning and **Responsible Problem Breakdown (RPB)** methodologies. This ensures that the model excels at: - **Logical reasoning** - **Step-by-step problem-solving** - **Breaking down complex tasks into manageable parts** The dataset also emphasizes responsible decision-making and fairness in generating solutions. --- # **Run with Transformers** ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Phi-4-QwQ") model = AutoModelForCausalLM.from_pretrained( "prithivMLmods/Phi-4-QwQ", device_map="auto", torch_dtype=torch.bfloat16, ) input_text = "Explain the concept of black holes." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=64) print(tokenizer.decode(outputs[0])) ``` For chat-style interactions, use `tokenizer.apply_chat_template`: ```python messages = [ {"role": "user", "content": "Explain the concept of black holes."}, ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**input_ids, max_new_tokens=256) print(tokenizer.decode(outputs[0])) ``` # **Intended Use** Phi-4-QwQ is tailored for a wide range of applications, especially those involving **advanced reasoning**, **multilingual capabilities**, and **responsible problem-solving**. Its primary use cases include: 1. **Responsible Problem Solving** - Breaking down complex problems into logical, actionable steps. - Offering ethical, well-rounded solutions in academic and professional contexts. 2. **Advanced Reasoning Tasks** - Excelling in mathematics, logic, and scientific reasoning. - Providing detailed explanations and systematic answers. 3. **Content Generation** - Assisting in generating high-quality content for various domains, including creative writing and technical documentation. - Supporting marketers, writers, and educators with detailed and well-structured outputs. 4. **Educational Support** - Acting as a virtual tutor for students by generating practice questions, answers, and detailed explanations. - Helping educators design learning material that promotes critical thinking and step-by-step problem-solving. 5. **Customer Support & Dialogue Systems** - Enabling chatbots and virtual assistants to provide accurate, helpful, and responsible responses. 
- Enhancing customer service with reasoning-driven automation. 6. **Multilingual Capabilities** - Supporting multilingual communication and content generation while maintaining contextual accuracy. - Assisting in translations with a focus on retaining meaning and nuance. 7. **Safety-Critical Applications** - Ensuring safe and harmless outputs, making it suitable for sensitive domains. - Providing aligned interactions with human oversight for critical systems. --- # **Limitations** Despite its strengths, Phi-4-QwQ has some limitations that users should be aware of: 1. **Bias and Fairness** - While great effort has been made to minimize biases, users should critically assess the model’s output in sensitive scenarios to avoid unintended bias. 2. **Contextual Interpretation** - The model may occasionally misinterpret highly nuanced prompts or ambiguous contexts, leading to suboptimal responses. 3. **Knowledge Cutoff** - Phi-4-QwQ’s knowledge is static and based on the data available at the time of training. It does not include real-time updates or information on recent developments. 4. **Safety and Harmlessness** - Despite post-training safety alignment, inappropriate or harmful outputs may still occur. Continuous monitoring and human oversight are advised when using the model in critical contexts. 5. **Computational Requirements** - Deploying Phi-4-QwQ efficiently may require substantial computational resources, particularly for large-scale deployments or real-time applications. 6. **Ethical Considerations** - Users are responsible for ensuring that the model is not employed for malicious purposes, such as spreading misinformation, generating harmful content, or facilitating unethical behavior. 7. **Domain-Specific Expertise** - While the model is versatile, it may not perform optimally in highly specialized domains (e.g., law, medicine, finance) without further domain-specific fine-tuning.
{"base_model": ["microsoft/phi-4"], "language": ["en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "llama", "phi3", "phi"]}
task
[ "TRANSLATION" ]
43,646
tadeous/distilroberta-base-mrpc-glue-tadeous
tadeous
text-classification
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-28T23:25:07Z
2023-01-28T23:51:46+00:00
135
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy - f1 tags: - text-classification - generated_from_trainer model-index: - name: distilroberta-base-mrpc-glue-tadeous results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - type: accuracy value: 0.821078431372549 name: Accuracy - type: f1 value: 0.8726003490401396 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-tadeous This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.6243 - Accuracy: 0.8211 - F1: 0.8726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3219 | 1.09 | 500 | 0.6243 | 0.8211 | 0.8726 | | 0.3173 | 2.18 | 1000 | 0.6243 | 0.8211 | 0.8726 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
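For reference, a minimal, hypothetical inference sketch for the MRPC paraphrase task is shown below; it assumes this repository's checkpoint name and the standard GLUE MRPC label order (0 = not a paraphrase, 1 = paraphrase).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tadeous/distilroberta-base-mrpc-glue-tadeous"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: the tokenizer receives both sentences at once
sentence1 = "The company reported strong quarterly earnings."
sentence2 = "Quarterly profits at the firm were robust."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities
probs = logits.softmax(dim=-1).squeeze().tolist()
print({"not_paraphrase": probs[0], "paraphrase": probs[1]})
```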
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-tadeous This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.6243 - Accuracy: 0.8211 - F1: 0.8726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3219 | 1.09 | 500 | 0.6243 | 0.8211 | 0.8726 | | 0.3173 | 2.18 | 1000 | 0.6243 | 0.8211 | 0.8726 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["text-classification", "generated_from_trainer"], "model-index": [{"name": "distilroberta-base-mrpc-glue-tadeous", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.821078431372549, "name": "Accuracy"}, {"type": "f1", "value": 0.8726003490401396, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,647
uaritm/multilingual_en_ru_uk
uaritm
sentence-similarity
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "multilingual", "en", "ru", "uk", "pl", "dataset:ted_multi", "dataset:Helsinki-NLP/tatoeba_mt", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2022-09-22T06:33:04Z
2023-06-04T16:27:03+00:00
20
4
--- datasets: - ted_multi - Helsinki-NLP/tatoeba_mt language: - uk - en - pl - ru library_name: sentence-transformers license: apache-2.0 metrics: - mse pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - multilingual - en - ru - uk --- # uaritm/multilingual_en_ru_uk This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. A newer version of this model that adds Polish is available here: [uaritm/multilingual_en_uk_pl_ru](https://huggingface.co/uaritm/multilingual_en_uk_pl_ru) <!--- Describe your model here --> ## Usage (Sentence-Transformers) The model is used on the resource of multilingual analysis of patient complaints to determine the specialty of the doctor that is needed in this case: [Virtual General Practice](https://aihealth.site) You can test the quality and speed of the model Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('uaritm/multilingual_en_ru_uk') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_ru_uk') model = AutoModel.from_pretrained('uaritm/multilingual_en_ru_uk') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_ru_uk) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 17482 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors ``` @misc{Uaritm, title={sentence-transformers: Semantic similarity of medical texts}, author={Vitaliy Ostashko}, year={2022}, url={https://aihealth.site}, } ``` <!--- Describe where people can find more information -->
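As an illustration of the complaint-routing use case mentioned above, the following hypothetical sketch scores a patient complaint against a few invented specialty descriptions using cosine similarity; the complaint text and specialty list are purely illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('uaritm/multilingual_en_ru_uk')

# Example complaint in Ukrainian: "My throat hurts and it is hard to swallow, for the third day now."
complaint = "Болить горло і важко ковтати вже третій день."

# Hypothetical candidate specialties (in English, since the model is multilingual)
specialties = [
    "Otolaryngologist: diseases of the ear, nose and throat",
    "Cardiologist: diseases of the heart and blood vessels",
    "Dermatologist: diseases of the skin",
]

# Cosine similarity between the complaint embedding and each specialty embedding
scores = util.cos_sim(model.encode(complaint), model.encode(specialties))[0]
best = scores.argmax().item()
print(specialties[best], float(scores[best]))
```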
null
BioNLP
# uaritm/multilingual_en_ru_uk This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. A newer version of this model that adds Polish is available here: [uaritm/multilingual_en_uk_pl_ru](https://huggingface.co/uaritm/multilingual_en_uk_pl_ru) <!--- Describe your model here --> ## Usage (Sentence-Transformers) The model is used on the resource of multilingual analysis of patient complaints to determine the specialty of the doctor that is needed in this case: [Virtual General Practice](https://aihealth.site) You can test the quality and speed of the model Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('uaritm/multilingual_en_ru_uk') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_ru_uk') model = AutoModel.from_pretrained('uaritm/multilingual_en_ru_uk') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_ru_uk) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 17482 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors ``` @misc{Uaritm, title={sentence-transformers: Semantic similarity of medical texts}, author={Vitaliy Ostashko}, year={2022}, url={https://aihealth.site}, } ``` <!--- Describe where people can find more information -->
{"datasets": ["ted_multi", "Helsinki-NLP/tatoeba_mt"], "language": ["uk", "en", "pl", "ru"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["mse"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "multilingual", "en", "ru", "uk"]}
task
[ "SEMANTIC_SIMILARITY" ]
43,650
smalinin/Llama-3.1-Storm-8B-q4f32_1-MLC
smalinin
text-generation
[ "mlc-llm", "web-llm", "llama-3.1", "conversational", "instruction following", "reasoning", "function calling", "mergekit", "finetuning", "axolotl", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2406.06623", "arxiv:2311.07911", "arxiv:2311.12022", "arxiv:2406.01574", "arxiv:1803.05457", "arxiv:2310.16049", "arxiv:2210.09261", "arxiv:2109.07958", "base_model:gorilla-llm/gorilla-openfunctions-v2", "base_model:quantized:gorilla-llm/gorilla-openfunctions-v2", "license:llama3.1", "model-index", "region:us" ]
2025-03-04T15:20:51Z
2025-03-04T16:28:28+00:00
6
0
--- base_model: gorilla-llm/gorilla-openfunctions-v2 language: - en - de - fr - it - pt - hi - es - th library_name: mlc-llm license: llama3.1 pipeline_tag: text-generation tags: - mlc-llm - web-llm - llama-3.1 - conversational - instruction following - reasoning - function calling - mergekit - finetuning - axolotl model-index: - name: Llama-3.1-Storm-8B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 80.51 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 31.49 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 16.62 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 10.18 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 9.12 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.15 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg) Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/) **🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b **🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b` ## TL;DR ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png) We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. 
Our approach consists of three key steps: 1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).** 2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen. 3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling. ## 🏆 Introducing Llama-3.1-Storm-8B [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class. As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources. We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark. ## Llama-3.1-Storm-8B Model Strengths Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications. 
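To make the SLERP merging step described above more concrete, here is a small, self-contained sketch of spherical linear interpolation between two weight tensors. It only illustrates the underlying idea (it is not the mergekit code used to build Llama-3.1-Storm-8B), and the tensors below are random placeholders.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    # The angle between the two weight vectors is computed on normalized copies
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation
        merged = (1 - t) * a + t * b
    else:
        merged = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(w_a.shape).to(w_a.dtype)

# Hypothetical example: blend one layer's weights from two parent checkpoints
merged_layer = slerp(torch.randn(1024, 1024), torch.randn(1024, 1024), t=0.5)
```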
<table> <tr> <td><strong>Model Strength</strong> </td> <td><strong>Relevant Benchmarks</strong> </td> <tr> <tr> <td>🎯 Improved Instruction Following </td> <td>IFEval Strict (+3.93%) </td> <tr> <tr> <td>🌐 Enhanced Knowledge Driven Question Answering </td> <td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) </td> <tr> <tr> <td>🧠 Better Reasoning </td> <td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) </td> <tr> <tr> <td>🤖 Superior Agentic Capabilities </td> <td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) </td> <tr> <tr> <td>🚫 Reduced Hallucinations </td> <td>TruthfulQA (+9%) </td> <tr> </table> **Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct. ## Llama-3.1-Storm-8B Models 1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) 2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic) 3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF) 4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b` ## 💻 How to Use the Model The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the type used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so it’s the recommended way to run to ensure the best results. ### Installation ```bash pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1 ``` Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples: ### Conversational Use-case #### Use with [🤗 Transformers](https://github.com/huggingface/transformers) ##### Using `transformers.pipeline()` API ```python import transformers import torch model_id = "akjindal53244/Llama-3.1-Storm-8B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2+2?"} ] outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95) print(outputs[0]["generated_text"][-1]) # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'} ``` ##### Using `model.generate()` API ```bash pip install flash_attn==2.6.3 ``` ```python import torch from transformers import AutoTokenizer, LlamaForCausalLM # Apply Llama3.1 chat-template def format_prompt(user_query): template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n""" return template.format(user_query) model_id = 'akjindal53244/Llama-3.1-Storm-8B' tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_8bit=False, load_in_4bit=False, use_flash_attention_2=True ) # Build final input prompt after applying chat-template prompt = format_prompt("What is 2+2?") input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], 
skip_special_tokens=True) print(response) # Expected Output: '2 + 2 = 4' ``` #### Use with [vLLM](https://github.com/vllm-project/vllm) ```python from vllm import LLM, SamplingParams from transformers import AutoTokenizer model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic" num_gpus = 1 tokenizer = AutoTokenizer.from_pretrained(model_id) llm = LLM(model=model_id, tensor_parallel_size=num_gpus) sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is 2+2?"} ] prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False) print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: 2 + 2 = 4 ``` #### Use with [LitGPT](https://github.com/Lightning-AI/litgpt) ```bash pip install 'litgpt[all]' litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B ``` ```python from litgpt import LLM llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B") llm.generate("What do Llamas eat?") ``` ### Function Calling Use-case [**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct as demonstrated by the BFCL benchmark. #### Prompt Format for Function Calling Llama-3.1-Storm-8B is trained with specific system prompt for Function Calling: ``` You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably. Here are the available functions: <tools>LIST_OF_TOOLS</tools> For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format: <tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call> ``` Above system prompt should be used with passing `LIST_OF_TOOLS` as input. #### Use with [vLLM](https://github.com/vllm-project/vllm) ```python import json from vllm import LLM, SamplingParams from transformers import AutoTokenizer model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic" num_gpus = 1 tokenizer = AutoTokenizer.from_pretrained(model_id) llm = LLM(model=model_id, tensor_parallel_size=num_gpus) sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95) def create_system_prompt(tools_list): system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably. 
Here are the available functions: <tools>{}</tools> For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format: <tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""

    # Convert the tools list to a string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list.
    # Note: the literal example braces in the template are escaped as {{ }} so that
    # str.format() only fills the <tools>{}</tools> placeholder.
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "python",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())
# Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```

A minimal sketch for turning this `<tool_call>` output into an actual Python function call is included at the end of this card.

#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
{
    'type': 'function',
    'function': {
        'name': 'get_places_to_visit',
        'description': 'Get places to visit in a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)

print(response['message'])
# Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```

## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.

## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
    author       = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
    title        = { Llama-3.1-Storm-8B },
    year         = 2024,
    url          = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
    doi          = { 10.57967/hf/2902 },
    publisher    = { Hugging Face }
}
```

## Support Our Work
With 3 team members spread across 3 different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and 4 other competitions in the Finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).

**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/akjindal53244__Llama-3.1-Storm-8B-details)

| Metric             |Value|
|-------------------|----:|
|Avg.               |29.84|
|IFEval (0-Shot)    |80.51|
|BBH (3-Shot)       |31.49|
|MATH Lvl 5 (4-Shot)|16.62|
|GPQA (0-shot)      |10.18|
|MuSR (0-shot)      | 9.12|
|MMLU-PRO (5-shot)  |31.15|
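## Parsing the Function-Calling Output

The `<tool_call>` blocks shown in the function-calling examples above can be post-processed into actual Python calls. The helper below is an illustrative sketch, not part of the official release: it assumes the Python-style single-quoted dict format shown in the expected outputs (switch to `json.loads` if your deployment emits strict JSON), and the helper name `parse_tool_calls` is made up for this example.

```python
# Illustrative sketch: extract and parse <tool_call>...</tool_call> blocks.
import ast
import re

def parse_tool_calls(generated_text: str):
    """Return a list of {'tool_name': ..., 'tool_arguments': ...} dicts."""
    calls = []
    for block in re.findall(r"<tool_call>(.*?)</tool_call>", generated_text, re.DOTALL):
        # The examples above use Python-style single quotes, so literal_eval is used here.
        calls.append(ast.literal_eval(block.strip()))
    return calls

output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
for call in parse_tool_calls(output):
    print(call["tool_name"], call["tool_arguments"])
# web_chain_details {'chain_slug': 'ethereum'}
```

Each parsed dict can then be dispatched to the matching function in your own tool registry.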
null
Non_BioNLP
{"base_model": "gorilla-llm/gorilla-openfunctions-v2", "language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "mlc-llm", "license": "llama3.1", "pipeline_tag": "text-generation", "tags": ["mlc-llm", "web-llm", "llama-3.1", "conversational", "instruction following", "reasoning", "function calling", "mergekit", "finetuning", "axolotl"], "model-index": [{"name": "Llama-3.1-Storm-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 80.51, "name": "strict accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "BBH", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 31.49, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "hendrycks/competition_math", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 16.62, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 10.18, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 9.12, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 31.15, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=akjindal53244/Llama-3.1-Storm-8B", "name": "Open LLM Leaderboard"}}]}]}
task
[ "QUESTION_ANSWERING" ]
43,651
seduerr/t5-pawraphrase
seduerr
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-06-23T14:19:12+00:00
178
0
---
{}
---

# Invoking more Creativity with Pawraphrases based on T5

## This micro-service allows you to find paraphrases for a given text based on T5.

![Imgur](https://i.imgur.com/v6DFBE0.png)

We explain how we fine-tune the T5 architecture with the PAWS dataset (both from Google) to gain the capability of creating paraphrases (or pawphrases, since we are using the PAWS dataset :smile:). With this, we can create paraphrases for any given textual input. Find the code for the service in this [Github Repository](https://github.com/seduerr91/pawraphrase_public).

In order to create your own __'pawraphrase tool'__, follow these steps:

### Step 1: Find a Useful Architecture and Datasets

Since Google's [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) has been trained on multiple tasks (e.g., text summarization, question answering) and is based purely on text-to-text tasks, it lends itself well to having its task base extended by fine-tuning it on paraphrases. Luckily, the [PAWS](https://github.com/google-research-datasets/paws) dataset consists of approximately 50,000 labeled paraphrases that we can use to fine-tune T5.

### Step 2: Prepare the PAWS Dataset for the T5 Architecture

Once identified, the PAWS dataset has to be prepared so it can be fed into the T5 architecture for fine-tuning. Since PAWS comes with both paraphrases and non-paraphrases, it first needs to be filtered for paraphrases only. Then, after packing it into a Pandas DataFrame, the necessary table headers are created. Finally, the resulting training samples are split into train, validation, and test sets.

![Imgur](https://i.imgur.com/MTM6apI.png)

### Step 3: Fine-tune T5-base with PAWS

Next, following these [training instructions](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555), in which the Quora dataset was used, we use the PAWS dataset instead and feed it into T5. Central is the following code, which ensures that T5 understands that it has to _paraphrase_. The adapted version can be found [here](https://github.com/seduerr91/pawraphrase_public/blob/master/t5_pawraphrase_training.ipynb).

![Imgur](https://i.imgur.com/uAd0bVo.png)

Additionally, it is helpful to pin the older versions _torch==1.4.0_, _transformers==2.9.0_ and *pytorch_lightning==0.7.5*, since the newer versions break (trust me, I am speaking from experience). When doing such training, it is also sensible to start with the smallest architecture (here, _T5-small_) and a small version of your dataset (e.g., 100 paraphrase examples) to quickly identify where the training may fail or stop.

### Step 4: Start Inference by Yourself

Next, you can use the fine-tuned T5 architecture to create paraphrases from any input, as seen in the introductory image. The corresponding code can be found [here](https://github.com/seduerr91/pawraphrase_public/blob/master/t5_pawraphrase_inference.ipynb), and a minimal inference sketch is included at the end of this card.

### Step 5: Using the Fine-Tuned Model through a GUI

Finally, to make the service useful, we can provide it as an API, as done with the infilling model [here](https://seduerr91.github.io/blog/ilm-fastapi), or with this [frontend](https://github.com/renatoviolin/T5-paraphrase-generation), which was prepared by Renato. Kudos!

Thank you for reading this article. I'd be curious about your opinion.

#### Who am I?

I am Sebastian, an NLP Deep Learning Research Scientist (M.Sc. in IT and Business). In my former life, I was a manager at Austria's biggest bank. In the future, I want to work remotely and flexibly in the field of NLP. Drop me a message on [LinkedIn](https://www.linkedin.com/in/sebastianduerr/) if you want to get in touch!
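#### Appendix: Minimal Inference Sketch

The following is an illustrative sketch, not the original inference notebook: it assumes the checkpoint ships the standard T5 tokenizer files and that the model was trained with a `paraphrase: ...` task prefix as in the tutorial setup described above; the example sentence and sampling settings are placeholders.

```python
# Hedged sketch: generate a few paraphrase candidates with this checkpoint.
# Assumptions (not verified from the card): standard T5 tokenizer files are
# present in the repo, and training used a "paraphrase: ..." task prefix.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "seduerr/t5-pawraphrase"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

sentence = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(f"paraphrase: {sentence}", return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=64,
        do_sample=True,
        top_k=120,
        top_p=0.95,
        num_return_sequences=3,
    )

for candidate in outputs:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```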
null
Non_BioNLP
{}
task
[ "SUMMARIZATION" ]
43,652
maithili12/autotrain-hin_sum3-50663120923
maithili12
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:maithili12/autotrain-data-hin_sum3", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-19T02:58:07Z
2023-04-19T03:46:11+00:00
20
0
--- datasets: - maithili12/autotrain-data-hin_sum3 language: - unk tags: - autotrain - summarization widget: - text: I love AutoTrain 🤗 co2_eq_emissions: emissions: 0.08227551347865186 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 50663120923 - CO2 Emissions (in grams): 0.0823 ## Validation Metrics - Loss: 2.088 - Rouge1: 13.462 - Rouge2: 4.792 - RougeL: 10.741 - RougeLsum: 11.801 - Gen Len: 19.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/maithili12/autotrain-hin_sum3-50663120923 ```
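If you prefer Python over cURL, the same Inference API request can be issued with the `requests` library. This is a minimal sketch: the endpoint URL is copied verbatim from the cURL example above, and `YOUR_HUGGINGFACE_API_KEY` is a placeholder for your own token.

```python
# Minimal sketch mirroring the cURL call above (requires the `requests` package).
import requests

API_URL = "https://api-inference.huggingface.co/maithili12/autotrain-hin_sum3-50663120923"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```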
null
Non_BioNLP
{"datasets": ["maithili12/autotrain-data-hin_sum3"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.08227551347865186}}
task
[ "SUMMARIZATION" ]
43,653
ModelCloud/Llama3.2-1B-Instruct
ModelCloud
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-28T05:51:41Z
2024-11-28T05:58:05+00:00
71
0
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. 
In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. ### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. 
### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). 
Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload of deploying safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems.
Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. 
Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
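As a concrete illustration of the "safety as a system" guidance above, the sketch below screens a user prompt with a Llama Guard-style classifier before handing it to the chat model. The guard model id, its "safe"/"unsafe" output convention, and the parsing here are assumptions for illustration only; consult the safeguard documentation linked above for the supported usage.

```python
import torch
from transformers import pipeline

# Assumed model ids; Llama Guard is itself a generative model that emits a safety verdict.
guard = pipeline("text-generation", model="meta-llama/Llama-Guard-3-1B",
                 torch_dtype=torch.bfloat16, device_map="auto")
chat = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct",
                torch_dtype=torch.bfloat16, device_map="auto")

def respond(user_message: str) -> str:
    conversation = [{"role": "user", "content": user_message}]
    # The guard's chat template formats the conversation for safety classification;
    # the generated text is assumed to begin with "safe" or "unsafe".
    verdict = guard(conversation, max_new_tokens=20)[0]["generated_text"][-1]["content"]
    if "unsafe" in verdict.lower():
        return "Sorry, I can't help with that request."
    return chat(conversation, max_new_tokens=256)[0]["generated_text"][-1]["content"]

print(respond("Who are you?"))
```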
null
Non_BioNLP
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. 
Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. 
This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
task
[ "SUMMARIZATION" ]
43,654
gcagrici/distilbert-base-uncased-finetuned-emotion
gcagrici
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-02T00:51:56Z
2023-06-02T01:14:27+00:00
10
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9215 name: Accuracy - type: f1 value: 0.9215212244993529 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2110 - Accuracy: 0.9215 - F1: 0.9215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8353 | 1.0 | 250 | 0.3069 | 0.908 | 0.9053 | | 0.2433 | 2.0 | 500 | 0.2110 | 0.9215 | 0.9215 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2110 - Accuracy: 0.9215 - F1: 0.9215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8353 | 1.0 | 250 | 0.3069 | 0.908 | 0.9053 | | 0.2433 | 2.0 | 500 | 0.2110 | 0.9215 | 0.9215 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
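The card above reports training hyperparameters and metrics but no usage snippet. A minimal inference sketch is shown below; it assumes the repository id `gcagrici/distilbert-base-uncased-finetuned-emotion` (taken from this card's name) and the standard `transformers` text-classification pipeline, and the sample sentence is illustrative only.

```python
from transformers import pipeline

# Hedged sketch: repo id assumed from this card's name; not an official example.
classifier = pipeline(
    "text-classification",
    model="gcagrici/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top predicted label and its score,
# e.g. [{'label': 'joy', 'score': 0.98}] (labels come from the emotion dataset).
print(classifier("I can't wait to see you this weekend!"))
```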
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9215, "name": "Accuracy"}, {"type": "f1", "value": 0.9215212244993529, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,655
QuantFactory/OneLLM-Doey-V1-Llama-3.2-3B-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "text-generation", "en", "dataset:nvidia/ChatQA-Training-Data", "base_model:meta-llama/Llama-3.2-3B", "base_model:quantized:meta-llama/Llama-3.2-3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-11-22T06:23:53Z
2024-11-22T06:42:52+00:00
22,209
2
--- base_model: - meta-llama/Llama-3.2-3B datasets: - nvidia/ChatQA-Training-Data language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/OneLLM-Doey-V1-Llama-3.2-3B-GGUF This is quantized version of [DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) created using llama.cpp # Original Model Card ## **Model Summary** This model is a fine-tuned version of **LLaMA 3.2-3B**, optimized using **LoRA (Low-Rank Adaptation)** on the [NVIDIA ChatQA-Training-Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data). It is tailored for conversational AI, question answering, and other instruction-following tasks, with support for sequences up to 1024 tokens. --- ## **Key Features** - **Base Model**: LLaMA 3.2-3B - **Fine-Tuning Framework**: LoRA - **Dataset**: NVIDIA ChatQA-Training-Data - **Max Sequence Length**: 1024 tokens - **Use Case**: Instruction-based tasks, question answering, conversational AI. ## **Model Usage** This fine-tuned model is suitable for: - **Conversational AI**: Chatbots and dialogue agents with improved contextual understanding. - **Question Answering**: Generating concise and accurate answers to user queries. - **Instruction Following**: Responding to structured prompts. - **Long-Context Tasks**: Processing sequences up to 1024 tokens for long-text reasoning. # **How to Use DoeyLLM / OneLLM-Doey-V1-Llama-3.2-3B-Instruct** This guide explains how to use the **DoeyLLM** model on both app (iOS) and PC platforms. --- ## **App (iOS): Use with OneLLM** OneLLM brings versatile large language models (LLMs) to your device—Llama, Gemma, Qwen, Mistral, and more. Enjoy private, offline GPT and AI tools tailored to your needs. With OneLLM, experience the capabilities of leading-edge language models directly on your device, all without an internet connection. Get fast, reliable, and intelligent responses, while keeping your data secure with local processing. ### **Quick Start for iOS** Follow these steps to integrate the **DoeyLLM** model using the OneLLM app: 1. **Download OneLLM** Get the app from the [App Store](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910) and install it on your iOS device. 2. **Load the DoeyLLM Model** Use the OneLLM interface to load the DoeyLLM model directly into the app: - Navigate to the **Model Library**. - Search for `DoeyLLM`. - Select the model and tap **Download** to store it locally on your device. 3. **Start Conversing** Once the model is loaded, you can begin interacting with it through the app's chat interface. For example: - Tap the **Chat** tab. - Type your question or prompt, such as: > "Explain the significance of AI in education." - Receive real-time, intelligent responses generated locally. ### **Key Features of OneLLM** - **Versatile Models**: Supports various LLMs, including Llama, Gemma, and Qwen. - **Private & Secure**: All processing occurs locally on your device, ensuring data privacy. - **Offline Capability**: Use the app without requiring an internet connection. - **Fast Performance**: Optimized for mobile devices, delivering low-latency responses. 
For more details or support, visit the [OneLLM App Store page](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910). ## **PC: Use with Transformers** The DoeyLLM model can also be used on PC platforms through the `transformers` library, enabling robust and scalable inference for various NLP tasks. ### **Quick Start for PC** Follow these steps to use the model with Transformers: 1. **Install Transformers** Ensure you have `transformers >= 4.43.0` installed. Update or install it via pip: ```bash pip install --upgrade transformers 2. **Load the Model** Use the transformers library to load the model and tokenizer: Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "OneLLM-Doey-V1-Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ## Responsibility & Safety As part of our responsible release strategy, we adopted a three-pronged approach to managing trust and safety risks: Enable developers to deploy helpful, safe, and flexible experiences for their target audience and the use cases supported by the model. Protect developers from adversarial users attempting to exploit the model’s capabilities to potentially cause harm. Provide safeguards for the community to help prevent the misuse of the model.
null
Non_BioNLP
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/OneLLM-Doey-V1-Llama-3.2-3B-GGUF This is quantized version of [DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) created using llama.cpp # Original Model Card ## **Model Summary** This model is a fine-tuned version of **LLaMA 3.2-3B**, optimized using **LoRA (Low-Rank Adaptation)** on the [NVIDIA ChatQA-Training-Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data). It is tailored for conversational AI, question answering, and other instruction-following tasks, with support for sequences up to 1024 tokens. --- ## **Key Features** - **Base Model**: LLaMA 3.2-3B - **Fine-Tuning Framework**: LoRA - **Dataset**: NVIDIA ChatQA-Training-Data - **Max Sequence Length**: 1024 tokens - **Use Case**: Instruction-based tasks, question answering, conversational AI. ## **Model Usage** This fine-tuned model is suitable for: - **Conversational AI**: Chatbots and dialogue agents with improved contextual understanding. - **Question Answering**: Generating concise and accurate answers to user queries. - **Instruction Following**: Responding to structured prompts. - **Long-Context Tasks**: Processing sequences up to 1024 tokens for long-text reasoning. # **How to Use DoeyLLM / OneLLM-Doey-V1-Llama-3.2-3B-Instruct** This guide explains how to use the **DoeyLLM** model on both app (iOS) and PC platforms. --- ## **App (iOS): Use with OneLLM** OneLLM brings versatile large language models (LLMs) to your device—Llama, Gemma, Qwen, Mistral, and more. Enjoy private, offline GPT and AI tools tailored to your needs. With OneLLM, experience the capabilities of leading-edge language models directly on your device, all without an internet connection. Get fast, reliable, and intelligent responses, while keeping your data secure with local processing. ### **Quick Start for iOS** Follow these steps to integrate the **DoeyLLM** model using the OneLLM app: 1. **Download OneLLM** Get the app from the [App Store](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910) and install it on your iOS device. 2. **Load the DoeyLLM Model** Use the OneLLM interface to load the DoeyLLM model directly into the app: - Navigate to the **Model Library**. - Search for `DoeyLLM`. - Select the model and tap **Download** to store it locally on your device. 3. **Start Conversing** Once the model is loaded, you can begin interacting with it through the app's chat interface. For example: - Tap the **Chat** tab. - Type your question or prompt, such as: > "Explain the significance of AI in education." - Receive real-time, intelligent responses generated locally. ### **Key Features of OneLLM** - **Versatile Models**: Supports various LLMs, including Llama, Gemma, and Qwen. - **Private & Secure**: All processing occurs locally on your device, ensuring data privacy. - **Offline Capability**: Use the app without requiring an internet connection. - **Fast Performance**: Optimized for mobile devices, delivering low-latency responses. For more details or support, visit the [OneLLM App Store page](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910). 
## **PC: Use with Transformers** The DoeyLLM model can also be used on PC platforms through the `transformers` library, enabling robust and scalable inference for various NLP tasks. ### **Quick Start for PC** Follow these steps to use the model with Transformers: 1. **Install Transformers** Ensure you have `transformers >= 4.43.0` installed. Update or install it via pip: ```bash pip install --upgrade transformers 2. **Load the Model** Use the transformers library to load the model and tokenizer: Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "OneLLM-Doey-V1-Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ## Responsibility & Safety As part of our responsible release strategy, we adopted a three-pronged approach to managing trust and safety risks: Enable developers to deploy helpful, safe, and flexible experiences for their target audience and the use cases supported by the model. Protect developers from adversarial users attempting to exploit the model’s capabilities to potentially cause harm. Provide safeguards for the community to help prevent the misuse of the model.
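Since this repository distributes GGUF quantizations rather than the original safetensors weights, a local-inference sketch with `llama-cpp-python` may be more directly applicable than the `transformers` example above. This is a sketch under assumptions: the GGUF filename below is a placeholder, so substitute whichever quantization file you actually download from this repo.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder filename; use the actual .gguf file downloaded from this repository.
llm = Llama(
    model_path="OneLLM-Doey-V1-Llama-3.2-3B.Q4_K_M.gguf",
    n_ctx=1024,  # matches the 1024-token max sequence length stated on the card
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```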
{"base_model": ["meta-llama/Llama-3.2-3B"], "datasets": ["nvidia/ChatQA-Training-Data"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "QUESTION_ANSWERING" ]
43,656
NilsDamAi/nils-nl-to-rx-pt-v5
NilsDamAi
translation
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-08-24T12:35:29Z
2022-08-24T12:46:59+00:00
11
0
--- license: apache-2.0 tags: - translation - generated_from_trainer model-index: - name: nils-nl-to-rx-pt-v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nils-nl-to-rx-pt-v5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8778 | 1.0 | 542 | 0.5839 | | 0.7316 | 2.0 | 1084 | 0.3861 | | 0.5707 | 3.0 | 1626 | 0.3414 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nils-nl-to-rx-pt-v5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8778 | 1.0 | 542 | 0.5839 | | 0.7316 | 2.0 | 1084 | 0.3861 | | 0.5707 | 3.0 | 1626 | 0.3414 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
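The card above reports training losses but no inference example. The sketch below assumes the repository id `NilsDamAi/nils-nl-to-rx-pt-v5` and a plain seq2seq generation call; the sample input and the absence of any task prefix are assumptions, since the expected input format used during fine-tuning is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "NilsDamAi/nils-nl-to-rx-pt-v5"  # repo id assumed from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only; the real prompt format is not documented on this card.
inputs = tokenizer("take two tablets twice a day for five days", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```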
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "nils-nl-to-rx-pt-v5", "results": []}]}
task
[ "TRANSLATION" ]
43,657
ymoslem/whisper-large-ga2en-v2.1
ymoslem
automatic-speech-recognition
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "base_model:openai/whisper-large", "base_model:finetune:openai/whisper-large", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2024-04-18T17:33:12Z
2024-04-20T09:11:00+00:00
19
1
--- base_model: openai/whisper-large datasets: - ymoslem/IWSLT2023-GA-EN - ymoslem/FLEURS-GA-EN - ymoslem/BitesizeIrish-GA-EN - ymoslem/SpokenWords-GA-EN-MTed language: - ga - en license: apache-2.0 metrics: - bleu - wer - chrf tags: - generated_from_trainer model-index: - name: Whisper Large GA-EN Speech Translation results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia type: ymoslem/IWSLT2023-GA-EN metrics: - type: bleu value: 30.16 name: Bleu - type: wer value: 65.60108059432687 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large GA-EN Speech Translation This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset. It achieves the following results on the evaluation set: - Loss: 1.1318 - Bleu: 31.26 - Chrf: 50.41 - Wer: 62.3143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer | |:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:| | 3.1547 | 0.03 | 100 | 3.75 | 18.71 | 2.4173 | 124.0882 | | 2.6996 | 0.07 | 200 | 8.16 | 25.45 | 2.1329 | 114.1378 | | 2.4841 | 0.1 | 300 | 6.4 | 23.6 | 2.0262 | 158.1720 | | 2.4706 | 0.13 | 400 | 9.16 | 27.67 | 1.9688 | 120.0810 | | 2.3575 | 0.16 | 500 | 13.66 | 31.5 | 1.8284 | 100.8555 | | 2.1916 | 0.2 | 600 | 12.97 | 31.8 | 1.7486 | 110.1756 | | 2.1353 | 0.23 | 700 | 16.7 | 33.52 | 1.7568 | 86.8528 | | 1.9885 | 0.26 | 800 | 19.34 | 35.35 | 1.6395 | 78.7033 | | 1.9126 | 0.3 | 900 | 20.21 | 36.28 | 1.5658 | 78.2080 | | 1.6418 | 0.33 | 1000 | 18.61 | 38.49 | 1.4998 | 86.8528 | | 1.5782 | 0.36 | 1100 | 22.91 | 40.04 | 1.4716 | 71.0941 | | 1.4899 | 0.39 | 1200 | 21.55 | 40.92 | 1.4444 | 78.7933 | | 1.3155 | 0.43 | 1300 | 24.95 | 42.05 | 1.3934 | 70.9140 | | 1.4144 | 0.46 | 1400 | 28.38 | 46.18 | 1.2791 | 65.8262 | | 1.1949 | 0.49 | 1500 | 26.95 | 45.84 | 1.2879 | 70.6889 | | 1.0179 | 0.53 | 1600 | 26.12 | 46.4 | 1.2624 | 69.6983 | | 1.0935 | 0.56 | 1700 | 28.51 | 48.24 | 1.2076 | 67.4021 | | 1.061 | 0.59 | 1800 | 27.42 | 48.83 | 1.1812 | 71.4543 | | 1.0955 | 0.62 | 1900 | 31.32 | 49.91 | 1.1503 | 62.9896 | | 1.0607 | 0.66 | 2000 | 31.26 | 50.41 | 1.1318 | 62.3143 | | 1.1135 | 0.6897 | 2100 | 1.2135| 26.57 | 46.18 | 69.7884 | | 0.9819 | 0.7225 | 2200 | 1.2252| 26.95 | 49.47 | 65.0158 | | 0.9909 | 0.7553 | 2300 | 1.2072| 30.35 | 46.49 | 63.3048 | | 0.9521 | 0.7882 | 2400 | 1.2130| 24.76 | 46.44 | 70.6889 | | 0.8245 | 0.8210 | 2500 | 1.1724| 24.84 | 47.05 | 78.1630 | | 0.8303 | 0.8539 | 2600 | 1.1812| 27.56 | 47.48 | 70.1036 | | 0.6934 | 0.8867 | 2700 | 1.1716| 31.61 | 50.4 | 63.8001 | | 0.7117 | 0.9195 | 2800 | 1.1650| 30.82 | 
49.95 | 65.0158 | | 0.6944 | 0.9524 | 2900 | 1.1516| 31.21 | 49.8 | 63.5750 | | 0.7132 | 0.9852 | 3000 | 1.1390| 30.16 | 49.77 | 65.6011 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.0.1+cu118 - Datasets 2.18.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large GA-EN Speech Translation This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset. It achieves the following results on the evaluation set: - Loss: 1.1318 - Bleu: 31.26 - Chrf: 50.41 - Wer: 62.3143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer | |:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:| | 3.1547 | 0.03 | 100 | 3.75 | 18.71 | 2.4173 | 124.0882 | | 2.6996 | 0.07 | 200 | 8.16 | 25.45 | 2.1329 | 114.1378 | | 2.4841 | 0.1 | 300 | 6.4 | 23.6 | 2.0262 | 158.1720 | | 2.4706 | 0.13 | 400 | 9.16 | 27.67 | 1.9688 | 120.0810 | | 2.3575 | 0.16 | 500 | 13.66 | 31.5 | 1.8284 | 100.8555 | | 2.1916 | 0.2 | 600 | 12.97 | 31.8 | 1.7486 | 110.1756 | | 2.1353 | 0.23 | 700 | 16.7 | 33.52 | 1.7568 | 86.8528 | | 1.9885 | 0.26 | 800 | 19.34 | 35.35 | 1.6395 | 78.7033 | | 1.9126 | 0.3 | 900 | 20.21 | 36.28 | 1.5658 | 78.2080 | | 1.6418 | 0.33 | 1000 | 18.61 | 38.49 | 1.4998 | 86.8528 | | 1.5782 | 0.36 | 1100 | 22.91 | 40.04 | 1.4716 | 71.0941 | | 1.4899 | 0.39 | 1200 | 21.55 | 40.92 | 1.4444 | 78.7933 | | 1.3155 | 0.43 | 1300 | 24.95 | 42.05 | 1.3934 | 70.9140 | | 1.4144 | 0.46 | 1400 | 28.38 | 46.18 | 1.2791 | 65.8262 | | 1.1949 | 0.49 | 1500 | 26.95 | 45.84 | 1.2879 | 70.6889 | | 1.0179 | 0.53 | 1600 | 26.12 | 46.4 | 1.2624 | 69.6983 | | 1.0935 | 0.56 | 1700 | 28.51 | 48.24 | 1.2076 | 67.4021 | | 1.061 | 0.59 | 1800 | 27.42 | 48.83 | 1.1812 | 71.4543 | | 1.0955 | 0.62 | 1900 | 31.32 | 49.91 | 1.1503 | 62.9896 | | 1.0607 | 0.66 | 2000 | 31.26 | 50.41 | 1.1318 | 62.3143 | | 1.1135 | 0.6897 | 2100 | 1.2135| 26.57 | 46.18 | 69.7884 | | 0.9819 | 0.7225 | 2200 | 1.2252| 26.95 | 49.47 | 65.0158 | | 0.9909 | 0.7553 | 2300 | 1.2072| 30.35 | 46.49 | 63.3048 | | 0.9521 | 0.7882 | 2400 | 1.2130| 24.76 | 46.44 | 70.6889 | | 0.8245 | 0.8210 | 2500 | 1.1724| 24.84 | 47.05 | 78.1630 | | 0.8303 | 0.8539 | 2600 | 1.1812| 27.56 | 47.48 | 70.1036 | | 0.6934 | 0.8867 | 2700 | 1.1716| 31.61 | 50.4 | 63.8001 | | 0.7117 | 0.9195 | 2800 | 1.1650| 30.82 | 49.95 | 65.0158 | | 0.6944 | 0.9524 | 2900 | 1.1516| 31.21 | 49.8 | 63.5750 | | 0.7132 | 0.9852 | 3000 | 1.1390| 30.16 | 49.77 | 65.6011 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.0.1+cu118 - Datasets 2.18.0 - Tokenizers 0.19.1
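No usage snippet is included above. A minimal sketch with the `transformers` automatic-speech-recognition pipeline follows; the repository id is taken from this card, and the audio filename is a placeholder for a 16 kHz recording of Irish speech (the model outputs English text, per the GA-EN speech-translation fine-tuning described above).

```python
from transformers import pipeline

# Hedged sketch: repo id from this card; "sample_ga.wav" is a placeholder audio file.
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-large-ga2en-v2.1",
)

result = translator("sample_ga.wav")
print(result["text"])  # English translation of the Irish-language audio
```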
{"base_model": "openai/whisper-large", "datasets": ["ymoslem/IWSLT2023-GA-EN", "ymoslem/FLEURS-GA-EN", "ymoslem/BitesizeIrish-GA-EN", "ymoslem/SpokenWords-GA-EN-MTed"], "language": ["ga", "en"], "license": "apache-2.0", "metrics": ["bleu", "wer", "chrf"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Whisper Large GA-EN Speech Translation", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia", "type": "ymoslem/IWSLT2023-GA-EN"}, "metrics": [{"type": "bleu", "value": 30.16, "name": "Bleu"}, {"type": "wer", "value": 65.60108059432687, "name": "Wer"}]}]}]}
task
[ "TRANSLATION" ]
43,658
LoneStriker/gemma-1.1-7b-it-6.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
2024-04-06T02:08:23Z
2024-04-06T02:11:53+00:00
5
0
--- library_name: transformers license: gemma widget: - messages: - role: user content: How does the brain work? inference: parameters: max_new_tokens: 200 extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family: | | Base | Instruct | |----|----------------------------------------------------|----------------------------------------------------------------------| | 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) | | 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) | **Release Notes** This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release. Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`. We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community. **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Running the model on a CPU As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary. 
```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids, max_new_tokens=50) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` <a name="precisions"></a> #### Running the model on a GPU using different precisions The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below. * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.float16, revision="float16", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Upcasting to `torch.float32`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto" ) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", quantization_config=quantization_config ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", quantization_config=quantization_config ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` #### Running the model in JAX / Flax Use the `flax` branch of the repository: ```python import jax.numpy as jnp from transformers import AutoTokenizer, FlaxGemmaForCausalLM model_id = "google/gemma-1.1-7b-it" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.padding_side = "left" model, params = FlaxGemmaForCausalLM.from_pretrained( model_id, dtype=jnp.bfloat16, revision="flax", _do_init=False, ) inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True) output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False) output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True) ``` [Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference. ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. 
In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/gemma-1.1-7b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Fine-tuning You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`. We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. 
* Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results The pre-trained base models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 2B | Gemma PT 7B | | ------------------------------ | ------------- | ----------- | ----------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | ----------- | | **Average** | | **44.9** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
#### Gemma 1.0 | Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B | | ------------------------ | ------------- | --------------- | --------------- | | [RealToxicity][realtox] | average | 6.86 | 7.90 | | [BOLD][bold] | | 45.57 | 49.08 | | [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 | | [Winogender][winogender] | top-1 | 51.25 | 54.17 | | [TruthfulQA][truthfulqa] | | 44.84 | 31.81 | | [Winobias 1_2][winobias] | | 56.12 | 59.09 | | [Winobias 2_2][winobias] | | 91.10 | 92.23 | | [Toxigen][toxigen] | | 29.77 | 39.59 | | ------------------------ | ------------- | --------------- | --------------- | #### Gemma 1.1 | Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B | | ------------------------ | ------------- | --------------- | --------------- | | [RealToxicity][realtox] | average | 7.03 | 8.04 | | [BOLD][bold] | | 47.76 | | | [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 | | [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 | | [Winogender][winogender] | top-1 | 50.14 | 57.64 | | [TruthfulQA][truthfulqa] | | 44.24 | 45.34 | | [Winobias 1_2][winobias] | | 55.93 | 59.22 | | [Winobias 2_2][winobias] | | 89.46 | 89.2 | | [Toxigen][toxigen] | | 29.64 | 38.75 | | ------------------------ | ------------- | --------------- | --------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. 
LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
null
Non_BioNLP
# Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family: | | Base | Instruct | |----|----------------------------------------------------|----------------------------------------------------------------------| | 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) | | 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) | **Release Notes** This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release. Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`. We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community. **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Running the model on a CPU As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary. ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids, max_new_tokens=50) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` <a name="precisions"></a> #### Running the model on a GPU using different precisions The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below. * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.float16, revision="float16", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto", torch_dtype=torch.bfloat16 ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Upcasting to `torch.float32`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", device_map="auto" ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", quantization_config=quantization_config ) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it") model = AutoModelForCausalLM.from_pretrained( "google/gemma-1.1-7b-it", quantization_config=quantization_config ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` #### Running the model in JAX / Flax Use the `flax` branch of the repository: ```python import jax.numpy as jnp from transformers import AutoTokenizer, FlaxGemmaForCausalLM model_id = "google/gemma-1.1-7b-it" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.padding_side = "left" model, params = FlaxGemmaForCausalLM.from_pretrained( model_id, dtype=jnp.bfloat16, revision="flax", _do_init=False, ) inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True) output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False) output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True) ``` [Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference. ### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "google/gemma-1.1-7b-it" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype=dtype, ) chat = [ { "role": "user", "content": "Write a hello world program" }, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. 
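For illustration, here is a minimal sketch of assembling the same prompt string by hand, following the format shown above. The `build_gemma_prompt` helper is hypothetical (it is not part of `transformers`); the special tokens are the ones listed in this section.

```python
# Hypothetical helper: build a Gemma chat prompt manually, without apply_chat_template.
def build_gemma_prompt(turns):
    # turns: list of (role, content) pairs, where role is "user" or "model"
    prompt = "<bos>"
    for role, content in turns:
        prompt += f"<start_of_turn>{role}\n{content}<end_of_turn>\n"
    # Ask the model to generate the next turn
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma_prompt([("user", "Write a hello world program")])
```

For the single-turn conversation above, this should produce the same string as `tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)`.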
After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` ### Fine-tuning You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`. We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. 
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results The pre-trained base models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 2B | Gemma PT 7B | | ------------------------------ | ------------- | ----------- | ----------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | ----------- | | **Average** | | **44.9** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. 
Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. #### Gemma 1.0 | Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B | | ------------------------ | ------------- | --------------- | --------------- | | [RealToxicity][realtox] | average | 6.86 | 7.90 | | [BOLD][bold] | | 45.57 | 49.08 | | [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 | | [Winogender][winogender] | top-1 | 51.25 | 54.17 | | [TruthfulQA][truthfulqa] | | 44.84 | 31.81 | | [Winobias 1_2][winobias] | | 56.12 | 59.09 | | [Winobias 2_2][winobias] | | 91.10 | 92.23 | | [Toxigen][toxigen] | | 29.77 | 39.59 | | ------------------------ | ------------- | --------------- | --------------- | #### Gemma 1.1 | Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B | | ------------------------ | ------------- | --------------- | --------------- | | [RealToxicity][realtox] | average | 7.03 | 8.04 | | [BOLD][bold] | | 47.76 | | | [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 | | [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 | | [Winogender][winogender] | top-1 | 50.14 | 57.64 | | [TruthfulQA][truthfulqa] | | 44.24 | 45.34 | | [Winobias 1_2][winobias] | | 55.93 | 59.22 | | [Winobias 2_2][winobias] | | 89.46 | 89.2 | | [Toxigen][toxigen] | | 29.64 | 38.75 | | ------------------------ | ------------- | --------------- | --------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. 
* Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. 
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.
{"library_name": "transformers", "license": "gemma", "widget": [{"messages": [{"role": "user", "content": "How does the brain work?"}]}], "inference": {"parameters": {"max_new_tokens": 200}}, "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
43,659
daviddrzik/SK_BPE_BLM-topic-news
daviddrzik
text-classification
[ "transformers", "safetensors", "roberta", "text-classification", "topic", "sk", "base_model:daviddrzik/SK_BPE_BLM", "base_model:finetune:daviddrzik/SK_BPE_BLM", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-09-04T07:58:47Z
2024-09-05T08:56:50+00:00
10
0
--- base_model: daviddrzik/SK_BPE_BLM language: - sk library_name: transformers license: mit metrics: - f1 pipeline_tag: text-classification tags: - topic --- # Fine-Tuned Topic Classification Model - SK_BPE_BLM (Topic News) ## Model Overview This model is a fine-tuned version of the [SK_BPE_BLM model](https://huggingface.co/daviddrzik/SK_BPE_BLM) for topic classification. For this task, we used the Slovak Categorized News Corpus, which contains news articles divided into six categories: Economy and Business, Culture, News, World, Sports, and Healthcare. The corpus provides text files with detailed annotations, including token and sentence boundary identification, stop words, morphological analysis, named entity recognition, and lemmatization. ## Topic Labels Each record in the dataset is labeled with one of the following topics: - **Healthcare (0):** 2,564 records - **News (1):** 4,174 records - **Sports (2):** 2,759 records - **World (3):** 1,660 records - **Economy and Business (4):** 4,199 records - **Culture (5):** 137 records ## Dataset Details The original corpus did not contain continuous text, requiring significant preprocessing. The process involved: 1. **Reconstruction:** We reconstructed coherent text from individual annotated files, resulting in over 86,000 sentences. 2. **Combining Sentences:** Sentences from each file were combined into single records, with a maximum length of 600 characters (approximately 200 tokens). The final dataset comprises a total of 15,493 records, each labeled according to the categories listed above. For more information about the dataset, please visit [this link]( https://nlp.kemt.fei.tuke.sk/language/categorizednews). ## Fine-Tuning Hyperparameters The following hyperparameters were used during the fine-tuning process: - **Learning Rate:** 1e-05 - **Training Batch Size:** 64 - **Evaluation Batch Size:** 64 - **Seed:** 42 - **Optimizer:** Adam (default) - **Number of Epochs:** 10 ## Model Performance The model was evaluated using stratified 10-fold cross-validation, achieving a weighted F1-score with a median value of <span style="font-size: 24px;">**0.966**</span> across the folds. ## Model Usage This model is suitable for topic classification in Slovak text, particularly for news articles across various categories. It is specifically designed for applications requiring topic categorization of news content and may not generalize well to other types of text. 
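To make the preprocessing in the Dataset Details section more concrete, below is an illustrative reconstruction of the sentence-combining step. The original preprocessing code is not published in this card, so the function name and details are assumptions.

```python
# Illustrative sketch: greedily pack consecutive sentences from one file into
# records of at most 600 characters, as described in Dataset Details.
def combine_sentences(sentences, max_chars=600):
    records, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                records.append(current)
            current = sentence
    if current:
        records.append(current)
    return records
```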
### Example Usage
Below is an example of how to use the fine-tuned `SK_BPE_BLM-topic-news` model in a Python script:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

class TopicClassifier:
    def __init__(self, tokenizer, model):
        self.model = RobertaForSequenceClassification.from_pretrained(model, num_labels=6)
        self.tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer, max_length=256)

    def tokenize_text(self, text):
        encoded_text = self.tokenizer.encode_plus(
            text.lower(),
            max_length=256,
            padding='max_length',
            truncation=True,
            return_tensors='pt'
        )
        return encoded_text

    def classify_text(self, encoded_text):
        with torch.no_grad():
            output = self.model(**encoded_text)
            logits = output.logits
            predicted_class = torch.argmax(logits, dim=1).item()
            probabilities = torch.softmax(logits, dim=1)
            class_probabilities = probabilities[0].tolist()
        predicted_class_text = self.model.config.id2label[predicted_class]
        return predicted_class, predicted_class_text, class_probabilities

# Instantiate the topic classifier with the specified tokenizer and model
classifier = TopicClassifier(tokenizer="daviddrzik/SK_BPE_BLM", model="daviddrzik/SK_BPE_BLM-topic-news")

# Example text to classify topic
text_to_classify = "Tento dôležitý zápas medzi Českou republikou a Švajčiarskom sa po troch tretinách skončil 2:0."
print("Text to classify: " + text_to_classify + "\n")

# Tokenize the input text
encoded_text = classifier.tokenize_text(text_to_classify)

# Classify the topic of the tokenized text
predicted_class, predicted_class_text, probabilities = classifier.classify_text(encoded_text)

# Print the predicted class label and index
print(f"Predicted class: {predicted_class_text} ({predicted_class})")

# Print the probabilities for each class
print(f"Class probabilities: {probabilities}")
```

### Example Output
Here is the output when running the above example:
```yaml
Text to classify: Tento dôležitý zápas medzi Českou republikou a Švajčiarskom sa po troch tretinách skončil 2:0.

Predicted class: Sport (2)
Class probabilities: [0.0005132766091264784, 0.0001338021393166855, 0.9980706572532654, 0.00035721372114494443, 0.0005179148865863681, 0.00040712326881475747]
```
null
Non_BioNLP
# Fine-Tuned Topic Classification Model - SK_BPE_BLM (Topic News) ## Model Overview This model is a fine-tuned version of the [SK_BPE_BLM model](https://huggingface.co/daviddrzik/SK_BPE_BLM) for topic classification. For this task, we used the Slovak Categorized News Corpus, which contains news articles divided into six categories: Economy and Business, Culture, News, World, Sports, and Healthcare. The corpus provides text files with detailed annotations, including token and sentence boundary identification, stop words, morphological analysis, named entity recognition, and lemmatization. ## Topic Labels Each record in the dataset is labeled with one of the following topics: - **Healthcare (0):** 2,564 records - **News (1):** 4,174 records - **Sports (2):** 2,759 records - **World (3):** 1,660 records - **Economy and Business (4):** 4,199 records - **Culture (5):** 137 records ## Dataset Details The original corpus did not contain continuous text, requiring significant preprocessing. The process involved: 1. **Reconstruction:** We reconstructed coherent text from individual annotated files, resulting in over 86,000 sentences. 2. **Combining Sentences:** Sentences from each file were combined into single records, with a maximum length of 600 characters (approximately 200 tokens). The final dataset comprises a total of 15,493 records, each labeled according to the categories listed above. For more information about the dataset, please visit [this link]( https://nlp.kemt.fei.tuke.sk/language/categorizednews). ## Fine-Tuning Hyperparameters The following hyperparameters were used during the fine-tuning process: - **Learning Rate:** 1e-05 - **Training Batch Size:** 64 - **Evaluation Batch Size:** 64 - **Seed:** 42 - **Optimizer:** Adam (default) - **Number of Epochs:** 10 ## Model Performance The model was evaluated using stratified 10-fold cross-validation, achieving a weighted F1-score with a median value of <span style="font-size: 24px;">**0.966**</span> across the folds. ## Model Usage This model is suitable for topic classification in Slovak text, particularly for news articles across various categories. It is specifically designed for applications requiring topic categorization of news content and may not generalize well to other types of text. 
### Example Usage
Below is an example of how to use the fine-tuned `SK_BPE_BLM-topic-news` model in a Python script:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

class TopicClassifier:
    def __init__(self, tokenizer, model):
        self.model = RobertaForSequenceClassification.from_pretrained(model, num_labels=6)
        self.tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer, max_length=256)

    def tokenize_text(self, text):
        encoded_text = self.tokenizer.encode_plus(
            text.lower(),
            max_length=256,
            padding='max_length',
            truncation=True,
            return_tensors='pt'
        )
        return encoded_text

    def classify_text(self, encoded_text):
        with torch.no_grad():
            output = self.model(**encoded_text)
            logits = output.logits
            predicted_class = torch.argmax(logits, dim=1).item()
            probabilities = torch.softmax(logits, dim=1)
            class_probabilities = probabilities[0].tolist()
        predicted_class_text = self.model.config.id2label[predicted_class]
        return predicted_class, predicted_class_text, class_probabilities

# Instantiate the topic classifier with the specified tokenizer and model
classifier = TopicClassifier(tokenizer="daviddrzik/SK_BPE_BLM", model="daviddrzik/SK_BPE_BLM-topic-news")

# Example text to classify topic
text_to_classify = "Tento dôležitý zápas medzi Českou republikou a Švajčiarskom sa po troch tretinách skončil 2:0."
print("Text to classify: " + text_to_classify + "\n")

# Tokenize the input text
encoded_text = classifier.tokenize_text(text_to_classify)

# Classify the topic of the tokenized text
predicted_class, predicted_class_text, probabilities = classifier.classify_text(encoded_text)

# Print the predicted class label and index
print(f"Predicted class: {predicted_class_text} ({predicted_class})")

# Print the probabilities for each class
print(f"Class probabilities: {probabilities}")
```

### Example Output
Here is the output when running the above example:
```yaml
Text to classify: Tento dôležitý zápas medzi Českou republikou a Švajčiarskom sa po troch tretinách skončil 2:0.

Predicted class: Sport (2)
Class probabilities: [0.0005132766091264784, 0.0001338021393166855, 0.9980706572532654, 0.00035721372114494443, 0.0005179148865863681, 0.00040712326881475747]
```
{"base_model": "daviddrzik/SK_BPE_BLM", "language": ["sk"], "library_name": "transformers", "license": "mit", "metrics": ["f1"], "pipeline_tag": "text-classification", "tags": ["topic"]}
task
[ "NAMED_ENTITY_RECOGNITION" ]
43,660
swap-uniba/LLM-wsd-TT-20000
swap-uniba
null
[ "safetensors", "llama", "text-generation-inference", "de", "en", "es", "fr", "it", "arxiv:2503.08662", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
2025-03-06T12:22:40Z
2025-03-12T14:02:58+00:00
6
0
--- base_model: - meta-llama/Llama-3.1-8B-Instruct language: - de - en - es - fr - it license: llama3.1 tags: - text-generation-inference --- # Model Card for LLM-wsd-TT-20000 ## Model description <!-- Provide a quick summary of what the model is/does. --> **LLM-wsd-TT-20000** is a *Large Language Model (LLM)* instruction-tuned over **meta-llama/Meta-Llama-3.1-8B-Instruct**. This model has been trained for the **WSD** task over a balanced training dataset (20000 instances per language), with machine-translation. It is capable of providing the definition of a word in a given sentence. Specifically, it can answer both: 1) **Open-ended questions**, where the model will generate the definition of the target word; 2) **Closed-ended questions**, where the model will generate the identifier of the correct option out of a list of alternatives. More details regarding the training procedure (e.g. hyperparameters, dataset construction, and so on) can be found in Section 4.2 of the [paper](https://arxiv.org/abs/2503.08662). - **Developed by:** Pierpaolo Basile, Lucia Siciliani, Elio Musacchio - **Model type:** LLaMA 3.1 Instruct - **Language(s) (NLP):** English, French, German, Italian and Spanish - **License:** [LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE) - **Finetuned from model:** [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) ## Prompt Format The model has been trained using several instructions depending on language, task (open-ended or closed-ended) and number of occurences of target word in the sentence. In [Instructions](#instructions), we provide the instructions used for all cases. The following placeholder variables have to be replaced: - {target_word}: the target word in the input to disambiguate; - {options}: options to provide to the model for the closed-ended task only. The options should be newline separated and each option should be identified by a number. Refer to the [closed-ended example](#closed-ended) for an example of options formatting; - {occurrence}: the ordinal number of the {target_word} occurrence (e.g. "second"). This is required only when the input sentence contains multiple occurrences of {target_word}. Please note that the complete prompt also has the following string after the instruction: ```python " Input: \"{sentence}\"" ``` where {sentence} is the input sentence containing the word to disambiguate. ## How to Get Started with the Model Below you can find two examples of model usage, for open-ended and closed-ended generation respectively. ### Open-ended ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.trainer_utils import set_seed target_word = "long" instruction = f"Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition." input_sentence = "How long has it been since you reviewed the objectives of your benefit and service program?" 
model_id = "swap-uniba/LLM-wsd-TT-20000" set_seed(42) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False) tokenizer.padding_side = "left" model = AutoModelForCausalLM.from_pretrained( model_id, device_map='cuda', torch_dtype=torch.bfloat16, ).eval() terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] messages = [ {"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate( input_ids.to('cuda'), max_new_tokens=512, eos_token_id=terminators, num_beams=1, do_sample=False ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) ``` ### Closed-ended ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.trainer_utils import set_seed target_word = "hurry" instruction = f"Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n1) Move very fast\n2) Urge to an unnatural speed\n\nGenerate only the number of the selected option." input_sentence = "If you hurry you might beat the headquarters boys." model_id = "swap-uniba/LLM-wsd-TT-20000" set_seed(42) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False) tokenizer.padding_side = "left" model = AutoModelForCausalLM.from_pretrained( model_id, device_map='cuda', torch_dtype=torch.bfloat16, ).eval() terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] messages = [ {"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate( input_ids.to('cuda'), max_new_tokens=512, eos_token_id=terminators, num_beams=1, do_sample=False ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) ``` ## Citation If you use this model in your research, please cite the following: ```bibtex @misc{basile2025exploringwordsensedisambiguation, title={Exploring the Word Sense Disambiguation Capabilities of Large Language Models}, author={Pierpaolo Basile and Lucia Siciliani and Elio Musacchio and Giovanni Semeraro}, year={2025}, eprint={2503.08662}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2503.08662}, } ``` ## Instructions ### Single occurrence of target word (open-ended) #### English ```python "Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition." ``` #### French ```python "Donnez une brève définition du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition." ``` #### German ```python "Geben Sie eine kurze Definition des Wortes \"{target_word}\" in dem gegebenen Satz an. Erzeugen Sie nur die Definition." ``` #### Italian ```python "Fornisci una breve definizione della parola \"{target_word}\" nella frase data in input. Genera solo la definizione." ``` #### Spanish ```python "Proporciona una definición breve de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición." ``` ### Multiple occurences of target word (open-ended) #### English ```python "Give a brief definition of the {occurrence} occurrence of the word \"{target_word}\" in the sentence given as input. Generate only the definition." 
``` #### French ```python "Donnez une brève définition de l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition." ``` #### German ```python "Geben Sie eine kurze Definition des {occurrence} Vorkommens des Wortes \"{target_word}\" in dem gegebenen Eingabesatz an. Erzeugen Sie nur die Definition." ``` #### Italian ```python "Fornisci una breve definizione della {occurrence} occorrenza della parola \"{target_word}\" nella frase data in input. Genera solo la definizione." ``` #### Spanish ```python "Proporciona una definición breve de la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición." ``` ### Single occurrence of target word (closed-ended) #### English ```python "Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option." ``` #### French ```python "Étant donné le mot \"{target_word}\" dans la phrase saisie, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée." ``` #### German ```python "Wählen Sie für das Wort \"{target_word}\" im Eingabesatz die richtige Bedeutung aus den folgenden Angaben:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option" ``` #### Italian ```python "Data la parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata." ``` #### Spanish ```python "Dada la palabra \"{target_word}\" en la frase de entrada, elija el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada." ``` ### Multiple occurrences of target word (closed-ended) #### English ```python "Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option." ``` #### French ```python "Étant donné l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d'entrée, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée." ``` #### German ```python "Wählen Sie angesichts des {occurrence} Vorkommens des Wortes \"{target_word}\" im Eingabesatz die richtige Bedeutung aus der folgenden Liste aus:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option." ``` #### Italian ```python "Data la {occurrence} occorrenza della parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata." ``` #### Spanish ```python "Dada la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase de entrada, elije el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada." ```
null
Non_BioNLP
# Model Card for LLM-wsd-TT-20000 ## Model description <!-- Provide a quick summary of what the model is/does. --> **LLM-wsd-TT-20000** is a *Large Language Model (LLM)* instruction-tuned over **meta-llama/Meta-Llama-3.1-8B-Instruct**. This model has been trained for the **WSD** task over a balanced training dataset (20000 instances per language), with machine-translation. It is capable of providing the definition of a word in a given sentence. Specifically, it can answer both: 1) **Open-ended questions**, where the model will generate the definition of the target word; 2) **Closed-ended questions**, where the model will generate the identifier of the correct option out of a list of alternatives. More details regarding the training procedure (e.g. hyperparameters, dataset construction, and so on) can be found in Section 4.2 of the [paper](https://arxiv.org/abs/2503.08662). - **Developed by:** Pierpaolo Basile, Lucia Siciliani, Elio Musacchio - **Model type:** LLaMA 3.1 Instruct - **Language(s) (NLP):** English, French, German, Italian and Spanish - **License:** [LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE) - **Finetuned from model:** [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) ## Prompt Format The model has been trained using several instructions depending on language, task (open-ended or closed-ended) and number of occurences of target word in the sentence. In [Instructions](#instructions), we provide the instructions used for all cases. The following placeholder variables have to be replaced: - {target_word}: the target word in the input to disambiguate; - {options}: options to provide to the model for the closed-ended task only. The options should be newline separated and each option should be identified by a number. Refer to the [closed-ended example](#closed-ended) for an example of options formatting; - {occurrence}: the ordinal number of the {target_word} occurrence (e.g. "second"). This is required only when the input sentence contains multiple occurrences of {target_word}. Please note that the complete prompt also has the following string after the instruction: ```python " Input: \"{sentence}\"" ``` where {sentence} is the input sentence containing the word to disambiguate. ## How to Get Started with the Model Below you can find two examples of model usage, for open-ended and closed-ended generation respectively. ### Open-ended ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.trainer_utils import set_seed target_word = "long" instruction = f"Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition." input_sentence = "How long has it been since you reviewed the objectives of your benefit and service program?" 
model_id = "swap-uniba/LLM-wsd-TT-20000" set_seed(42) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False) tokenizer.padding_side = "left" model = AutoModelForCausalLM.from_pretrained( model_id, device_map='cuda', torch_dtype=torch.bfloat16, ).eval() terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] messages = [ {"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate( input_ids.to('cuda'), max_new_tokens=512, eos_token_id=terminators, num_beams=1, do_sample=False ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) ``` ### Closed-ended ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.trainer_utils import set_seed target_word = "hurry" instruction = f"Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n1) Move very fast\n2) Urge to an unnatural speed\n\nGenerate only the number of the selected option." input_sentence = "If you hurry you might beat the headquarters boys." model_id = "swap-uniba/LLM-wsd-TT-20000" set_seed(42) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False) tokenizer.padding_side = "left" model = AutoModelForCausalLM.from_pretrained( model_id, device_map='cuda', torch_dtype=torch.bfloat16, ).eval() terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] messages = [ {"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate( input_ids.to('cuda'), max_new_tokens=512, eos_token_id=terminators, num_beams=1, do_sample=False ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) ``` ## Citation If you use this model in your research, please cite the following: ```bibtex @misc{basile2025exploringwordsensedisambiguation, title={Exploring the Word Sense Disambiguation Capabilities of Large Language Models}, author={Pierpaolo Basile and Lucia Siciliani and Elio Musacchio and Giovanni Semeraro}, year={2025}, eprint={2503.08662}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2503.08662}, } ``` ## Instructions ### Single occurrence of target word (open-ended) #### English ```python "Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition." ``` #### French ```python "Donnez une brève définition du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition." ``` #### German ```python "Geben Sie eine kurze Definition des Wortes \"{target_word}\" in dem gegebenen Satz an. Erzeugen Sie nur die Definition." ``` #### Italian ```python "Fornisci una breve definizione della parola \"{target_word}\" nella frase data in input. Genera solo la definizione." ``` #### Spanish ```python "Proporciona una definición breve de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición." ``` ### Multiple occurences of target word (open-ended) #### English ```python "Give a brief definition of the {occurrence} occurrence of the word \"{target_word}\" in the sentence given as input. Generate only the definition." 
``` #### French ```python "Donnez une brève définition de l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition." ``` #### German ```python "Geben Sie eine kurze Definition des {occurrence} Vorkommens des Wortes \"{target_word}\" in dem gegebenen Eingabesatz an. Erzeugen Sie nur die Definition." ``` #### Italian ```python "Fornisci una breve definizione della {occurrence} occorrenza della parola \"{target_word}\" nella frase data in input. Genera solo la definizione." ``` #### Spanish ```python "Proporciona una definición breve de la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición." ``` ### Single occurrence of target word (closed-ended) #### English ```python "Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option." ``` #### French ```python "Étant donné le mot \"{target_word}\" dans la phrase saisie, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée." ``` #### German ```python "Wählen Sie für das Wort \"{target_word}\" im Eingabesatz die richtige Bedeutung aus den folgenden Angaben:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option" ``` #### Italian ```python "Data la parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata." ``` #### Spanish ```python "Dada la palabra \"{target_word}\" en la frase de entrada, elija el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada." ``` ### Multiple occurrences of target word (closed-ended) #### English ```python "Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option." ``` #### French ```python "Étant donné l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d'entrée, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée." ``` #### German ```python "Wählen Sie angesichts des {occurrence} Vorkommens des Wortes \"{target_word}\" im Eingabesatz die richtige Bedeutung aus der folgenden Liste aus:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option." ``` #### Italian ```python "Data la {occurrence} occorrenza della parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata." ``` #### Spanish ```python "Dada la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase de entrada, elije el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada." ```
{"base_model": ["meta-llama/Llama-3.1-8B-Instruct"], "language": ["de", "en", "es", "fr", "it"], "license": "llama3.1", "tags": ["text-generation-inference"]}
task
[ "TRANSLATION" ]
43,661
aroot/eng-fra-simcse_random_ssrl
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-07T22:51:26Z
2023-07-07T23:06:31+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: eng-fra-simcse_random_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1462 - Bleu: 31.7089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
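The auto-generated card does not include an inference snippet. Below is a minimal sketch of translating English to French with this checkpoint, assuming it keeps the mBART-50 many-to-many tokenizer and its language codes (`en_XX`, `fr_XX`); adjust if the repository's actual configuration differs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aroot/eng-fra-simcse_random_ssrl"
# Assumption: the fine-tuned checkpoint retains the mBART-50 tokenizer and language codes.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French output
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```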
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_random_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1462 - Bleu: 31.7089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-fra-simcse_random_ssrl", "results": []}]}
task
[ "TRANSLATION" ]
43,662
gyr66/machine_translation
gyr66
translation
[ "transformers", "safetensors", "mbart", "feature-extraction", "translation", "en", "zh", "base_model:facebook/mbart-large-cc25", "base_model:finetune:facebook/mbart-large-cc25", "endpoints_compatible", "region:us" ]
2023-12-20T04:27:46Z
2023-12-20T06:23:56+00:00
86
0
---
base_model: facebook/mbart-large-cc25
language:
- en
- zh
metrics:
- sacrebleu
pipeline_tag: translation
---

# eval

This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the IWSLT14 En-Zh dataset.

It achieves the following results on the evaluation set:
- eval_loss: 3.8405
- eval_bleu: 3.5173
- eval_gen_len: 21.5826

It achieves the following results on the test set:
- test_loss: 3.8337
- test_bleu: 3.277
- test_gen_len: 21.6287

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 7
- num_epochs: 9

### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
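Since the card does not include an inference example, here is a minimal sketch of running English-to-Chinese translation with this checkpoint. It assumes the fine-tuned model keeps the mBART (cc25) tokenizer and its language codes (`en_XX`, `zh_CN`); adjust if the repository's configuration differs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gyr66/machine_translation"
# Assumption: the checkpoint retains the mBART (cc25) tokenizer and language codes.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Machine translation brings people closer together."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["zh_CN"],  # mBART-25 convention: start decoding with the target language code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```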
null
Non_BioNLP
# eval This model is a fine-tuned version of [facebook/mbart-large-cc25 ](https://huggingface.co/facebook/mbart-large-cc25) on IWSLT14 En-Zh dataset. It achieves the following results on the evaluation set: - eval_loss: 3.8405 - eval_bleu: 3.5173 - eval_gen_len: 21.5826 It achieves the following results on the test set: - test_loss: 3.8337 - test_bleu: 3.277 - test_gen_len: 21.6287 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 7 - num_epochs: 9 ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
{"base_model": "facebook/mbart-large-cc25", "language": ["en", "zh"], "metrics": ["sacrebleu"], "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
43,663
LoneStriker/Gorgon-7b-v0.1-GGUF
LoneStriker
null
[ "gguf", "Erotica", "Porn", "NSFW", "Summarization", "Ecommerce", "SEO", "en", "dataset:openerotica/gorgon-lima-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-01-22T17:22:26Z
2024-01-22T17:39:31+00:00
26
1
--- datasets: - openerotica/gorgon-lima-v0.1 language: - en license: apache-2.0 tags: - Erotica - Porn - NSFW - Summarization - Ecommerce - SEO --- This is an experimental lima style model trained on a small subset of freedom-rp and erotica-analysis-16k. Due to the much smaller dataset size (about 1000 samples from each original dataset) it was much easier to edit and clean thoroughly. I also used a slightly lower learning rate of 0.00015. The prompt format is chatml. I have not tested the model yet, but I am hoping I can use this to help me create more training data for specific genres. Please consider subscribing to my patreon or buying a giant candle dick on my etsy to show your support. https://www.patreon.com/openerotica http://openerotica.etsy.com/
null
TBD
This is an experimental lima style model trained on a small subset of freedom-rp and erotica-analysis-16k. Due to the much smaller dataset size (about 1000 samples from each original dataset) it was much easier to edit and clean thoroughly. I also used a slightly lower learning rate of 0.00015. The prompt format is chatml. I have not tested the model yet, but I am hoping I can use this to help me create more training data for specific genres. Please consider subscribing to my patreon or buying a giant candle dick on my etsy to show your support. https://www.patreon.com/openerotica http://openerotica.etsy.com/
{"datasets": ["openerotica/gorgon-lima-v0.1"], "language": ["en"], "license": "apache-2.0", "tags": ["Erotica", "Porn", "NSFW", "Summarization", "Ecommerce", "SEO"]}
task
[ "SUMMARIZATION" ]
43,664
mrapacz/interlinear-en-mt5-large-emb-sum-normalized-ob
mrapacz
text2text-generation
[ "transformers", "pytorch", "morph-t5-sum", "text2text-generation", "en", "dataset:mrapacz/greek-interlinear-translations", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-08T12:25:54Z
2025-02-21T21:31:23+00:00
16
0
--- base_model: - mT5-large datasets: - mrapacz/greek-interlinear-translations language: - en library_name: transformers license: cc-by-sa-4.0 metrics: - bleu --- # Model Card for Ancient Greek to English Interlinear Translation Model This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5SumForConditionalGeneration - **Base Model:** mT5-large - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → English (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** OB (Oblubienica) - **Text Preprocessing:** Normalized - **Morphological Encoding:** emb-sum ### Model Performance - **BLEU Score:** 56.24 - **SemScore:** 0.88 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5SumForConditionalGeneration, MorphT5Tokenizer >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-en-mt5-large-emb-sum-normalized-ob") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5SumForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-mt5-large-emb-sum-normalized-ob") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Says | to him | - | Jesus | Arise | take up | the | mat | of you | and | walk' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. 
In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
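The Model Performance section above reports a corpus-level BLEU of 56.24 for this checkpoint. As a minimal sketch of how a decoded interlinear prediction can be scored against a reference (an illustration with placeholder strings, not the evaluation code from the linked repository), sacrebleu can be applied to the block-separated output produced by the usage example:

```python
# Minimal sketch: scoring decoded interlinear output with sacreBLEU.
# The hypothesis/reference strings below are placeholders reusing the decoded
# example above; this is not the evaluation code from the linked repository.
from sacrebleu.metrics import BLEU

hypotheses = [
    "Says | to him | - | Jesus | Arise | take up | the | mat | of you | and | walk",
]
references = [
    "Says | to him | - | Jesus | Arise | take up | the | mat | of you | and | walk",
]

bleu = BLEU()
# sacreBLEU expects one or more reference streams, hence the extra list nesting.
print(bleu.corpus_score(hypotheses, [references]))
```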
null
Non_BioNLP
# Model Card for Ancient Greek to English Interlinear Translation Model This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5SumForConditionalGeneration - **Base Model:** mT5-large - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → English (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** OB (Oblubienica) - **Text Preprocessing:** Normalized - **Morphological Encoding:** emb-sum ### Model Performance - **BLEU Score:** 56.24 - **SemScore:** 0.88 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5SumForConditionalGeneration, MorphT5Tokenizer >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-en-mt5-large-emb-sum-normalized-ob") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5SumForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-mt5-large-emb-sum-normalized-ob") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Says | to him | - | Jesus | Arise | take up | the | mat | of you | and | walk' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. 
Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
{"base_model": ["mT5-large"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
task
[ "TRANSLATION" ]
43,665
Helsinki-NLP/opus-mt-tc-big-zle-de
Helsinki-NLP
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "be", "de", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-24T08:57:20Z
2023-10-10T10:41:28+00:00
57
0
--- language: - be - de - ru - uk - zle license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-big-zle-de results: - task: type: translation name: Translation rus-deu dataset: name: flores101-devtest type: flores_101 args: rus deu devtest metrics: - type: bleu value: 26.1 name: BLEU - type: bleu value: 28.1 name: BLEU - task: type: translation name: Translation bel-deu dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: bel-deu metrics: - type: bleu value: 44.8 name: BLEU - type: bleu value: 51.8 name: BLEU - type: bleu value: 54.7 name: BLEU - task: type: translation name: Translation rus-deu dataset: name: newstest2013 type: wmt-2013-news args: rus-deu metrics: - type: bleu value: 25.2 name: BLEU --- # opus-mt-tc-big-zle-de Neural machine translation model for translating from East Slavic languages (zle) to German (de). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-19 * source language(s): bel rus ukr * target language(s): deu * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.zip) * more information released models: [OPUS-MT zle-deu README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-deu/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Это был по-настоящему прекрасный день.", "Дождь кончился?" 
] model_name = "pytorch-models/opus-mt-tc-big-zle-de" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Es war ein wirklich schöner Tag. # Ist der Regen vorbei? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-de") print(pipe("Это был по-настоящему прекрасный день.")) # expected output: Es war ein wirklich schöner Tag. ``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-deu | tatoeba-test-v2021-08-07 | 0.63720 | 44.8 | 551 | 4182 | | rus-deu | tatoeba-test-v2021-08-07 | 0.69768 | 51.8 | 12800 | 98842 | | ukr-deu | tatoeba-test-v2021-08-07 | 0.70860 | 54.7 | 10319 | 64646 | | bel-deu | flores101-devtest | 0.47052 | 12.9 | 1012 | 25094 | | rus-deu | flores101-devtest | 0.56159 | 26.1 | 1012 | 25094 | | ukr-deu | flores101-devtest | 0.57251 | 28.1 | 1012 | 25094 | | rus-deu | newstest2012 | 0.49257 | 19.8 | 3003 | 72886 | | rus-deu | newstest2013 | 0.54015 | 25.2 | 3000 | 63737 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Wed Mar 23 22:16:45 EET 2022 * port machine: LM0-400-22516.local
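The benchmark table above reports chr-F and BLEU per language pair and test set. As a minimal, hedged sketch of how such surface metrics can be computed on model output (this is not the official OPUS-MT evaluation pipeline, and the hypothesis/reference pairs below are illustrative placeholders reusing the expected outputs from the usage example), sacrebleu provides both metrics:

```python
# Minimal sketch: computing BLEU and chrF on a few zle->deu translations with
# sacreBLEU. The pairs below are illustrative placeholders (the expected outputs
# from the usage example), not the Tatoeba or FLORES data behind the table above.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = [
    "Es war ein wirklich schöner Tag.",
    "Ist der Regen vorbei?",
]
references = [
    "Es war ein wirklich schöner Tag.",
    "Ist der Regen vorbei?",
]

bleu = BLEU()
chrf = CHRF()
print(bleu.corpus_score(hypotheses, [references]))
print(chrf.corpus_score(hypotheses, [references]))
```

For the scores in the table itself, the linked test-set translations and eval files remain the authoritative references.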
null
Non_BioNLP
# opus-mt-tc-big-zle-de Neural machine translation model for translating from East Slavic languages (zle) to German (de). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-19 * source language(s): bel rus ukr * target language(s): deu * model: transformer-big * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.zip) * more information released models: [OPUS-MT zle-deu README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-deu/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Это был по-настоящему прекрасный день.", "Дождь кончился?" ] model_name = "pytorch-models/opus-mt-tc-big-zle-de" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Es war ein wirklich schöner Tag. # Ist der Regen vorbei? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-de") print(pipe("Это был по-настоящему прекрасный день.")) # expected output: Es war ein wirklich schöner Tag. 
``` ## Benchmarks * test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-deu/opusTCv20210807_transformer-big_2022-03-19.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | bel-deu | tatoeba-test-v2021-08-07 | 0.63720 | 44.8 | 551 | 4182 | | rus-deu | tatoeba-test-v2021-08-07 | 0.69768 | 51.8 | 12800 | 98842 | | ukr-deu | tatoeba-test-v2021-08-07 | 0.70860 | 54.7 | 10319 | 64646 | | bel-deu | flores101-devtest | 0.47052 | 12.9 | 1012 | 25094 | | rus-deu | flores101-devtest | 0.56159 | 26.1 | 1012 | 25094 | | ukr-deu | flores101-devtest | 0.57251 | 28.1 | 1012 | 25094 | | rus-deu | newstest2012 | 0.49257 | 19.8 | 3003 | 72886 | | rus-deu | newstest2013 | 0.54015 | 25.2 | 3000 | 63737 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Wed Mar 23 22:16:45 EET 2022 * port machine: LM0-400-22516.local
{"language": ["be", "de", "ru", "uk", "zle"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-big-zle-de", "results": [{"task": {"type": "translation", "name": "Translation rus-deu"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "rus deu devtest"}, "metrics": [{"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "bleu", "value": 28.1, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation bel-deu"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "bel-deu"}, "metrics": [{"type": "bleu", "value": 44.8, "name": "BLEU"}, {"type": "bleu", "value": 51.8, "name": "BLEU"}, {"type": "bleu", "value": 54.7, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation rus-deu"}, "dataset": {"name": "newstest2013", "type": "wmt-2013-news", "args": "rus-deu"}, "metrics": [{"type": "bleu", "value": 25.2, "name": "BLEU"}]}]}]}
task
[ "TRANSLATION" ]
43,666
google/paligemma-3b-ft-infovqa-448-jax
google
image-text-to-text
[ "big_vision", "paligemma", "jax", "image-text-to-text", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:2403.08295", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:2401.06209", "arxiv:2305.10355", "arxiv:2205.12522", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:2010.04295", "arxiv:2203.10244", "arxiv:1810.12440", "arxiv:1905.13648", "arxiv:1608.00272", "arxiv:1908.04913", "arxiv:2407.07726", "license:gemma", "region:us" ]
2024-05-12T01:39:01Z
2024-07-19T12:09:26+00:00
21
0
--- library_name: big_vision license: gemma pipeline_tag: image-text-to-text tags: - paligemma - jax extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) JAX/FLAX PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/infovqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-infovqa-448-jax) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. 
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. 
#### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a 
href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## 
Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. 
Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. 
## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
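This repository ships JAX/FLAX weights intended for use with big_vision. Purely as an illustrative sketch, and assuming a transformers-format sibling checkpoint exists under a name like `google/paligemma-3b-ft-infovqa-448` (an assumption, not something stated in this card), an InfographicVQA-style query could be run through the transformers port of PaliGemma as follows; the image URL and question are placeholders:

```python
# Illustrative sketch only: querying an InfographicVQA-style fine-tune through
# the transformers port of PaliGemma. The checkpoint name, image URL and prompt
# format are assumptions for illustration, not artifacts of this JAX repository.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-ft-infovqa-448"  # assumed transformers-format sibling checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()

image_url = "https://example.com/infographic.png"  # placeholder
image = Image.open(requests.get(image_url, stream=True).raw)

prompt = "What is the highest value shown in the chart?"  # question-as-prompt; exact format is an assumption
inputs = processor(text=prompt, images=image, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20)

# The generated sequence includes the prompt tokens, so decode only the new ones.
answer = processor.decode(
    generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```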
null
Non_BioNLP
# PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) JAX/FLAX PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic_VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/infovqa.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-infovqa-448-jax) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). 
* **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. 
#### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a 
href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## 
Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. 
Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; the input data pre-processing and the posterior evaluations are described in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and to explore de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for those tasks. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input.
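As a concrete illustration of the single-round image-and-text inference described above, here is a minimal, hedged sketch using the Hugging Face `transformers` port of PaliGemma (the reference implementation is `big_vision`). The checkpoint id, task prefix, and example image URL are illustrative assumptions, not a statement of this card's official usage.

```python
# Minimal sketch, assuming a transformers version with PaliGemma support (>= 4.41);
# adjust model_id to the checkpoint you actually use.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumption: illustrative checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# PaliGemma prompts start with a task prefix such as "caption en" or a question.
inputs = processor(text="caption en", images=image, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))
```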
## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
{"library_name": "big_vision", "license": "gemma", "pipeline_tag": "image-text-to-text", "tags": ["paligemma", "jax"], "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
43,667
aminramezani345/finetuning-sentiment-model-3000-samples
aminramezani345
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-05T15:28:26Z
2023-06-22T16:11:54+00:00
17
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - type: accuracy value: 0.8766666666666667 name: Accuracy - type: f1 value: 0.8786885245901639 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3040 - Accuracy: 0.8767 - F1: 0.8787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3040 - Accuracy: 0.8767 - F1: 0.8787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
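Since the card above leaves the usage sections empty, here is a minimal, hedged inference sketch with the `transformers` pipeline API; the exact label names returned (e.g. `LABEL_0`/`LABEL_1`) depend on this checkpoint's config and are not documented in the card.

```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned on IMDB reviews (per the card above).
classifier = pipeline(
    "text-classification",
    model="aminramezani345/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was far better than I expected."))
```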
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8766666666666667, "name": "Accuracy"}, {"type": "f1", "value": 0.8786885245901639, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,668
pinzhenchen/sft-lora-fi-bloom-1b1
pinzhenchen
null
[ "generation", "question answering", "instruction tuning", "fi", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us" ]
2024-03-05T23:46:19Z
2024-03-05T23:46:22+00:00
0
0
--- language: - fi license: cc-by-nc-4.0 tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
null
Non_BioNLP
### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
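The usage note above says the adapter must be loaded on top of the base model with `transformers` and `peft`; a minimal sketch follows. The Finnish prompt and generation settings are illustrative assumptions — the exact prompt template used during training is documented in the authors' GitHub repository linked above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-1b1"
adapter_id = "pinzhenchen/sft-lora-fi-bloom-1b1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

prompt = "Kirjoita lyhyt runo merestä."  # "Write a short poem about the sea."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```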
{"language": ["fi"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
43,669
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-28T23:17:46Z
2024-05-28T23:18:25+00:00
6
0
--- datasets: - fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457 - allenai/c4 language: - en - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-80745457", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
task
[ "TEXT_CLASSIFICATION" ]
43,670
SynthAIzer/finetuned-sentence-similarity
SynthAIzer
text-classification
[ "safetensors", "mpnet", "text classification", "Transformers", "bert", "text-classification", "en", "arxiv:1910.09700", "region:us" ]
2024-10-29T05:02:47Z
2024-11-04T06:23:02+00:00
3
1
--- language: - en pipeline_tag: text-classification tags: - text classification - Transformers - bert --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["en"], "pipeline_tag": "text-classification", "tags": ["text classification", "Transformers", "bert"]}
task
[ "TEXT_CLASSIFICATION" ]
43,671
gchhablani/fnet-base-finetuned-mrpc
gchhablani
text-classification
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "fnet-bert-base-comparison", "en", "dataset:glue", "arxiv:2105.03824", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-09-20T09:06:55+00:00
16
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer - fnet-bert-base-comparison model-index: - name: fnet-base-finetuned-mrpc results: - task: type: text-classification name: Text Classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - type: accuracy value: 0.7720588235294118 name: Accuracy - type: f1 value: 0.8502415458937198 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-mrpc This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.9653 - Accuracy: 0.7721 - F1: 0.8502 - Combined Score: 0.8112 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path google/fnet-base \\n --task_name mrpc \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 5 \\n --output_dir fnet-base-finetuned-mrpc \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.544 | 1.0 | 230 | 0.5272 | 0.7328 | 0.8300 | 0.7814 | | 0.4034 | 2.0 | 460 | 0.6211 | 0.7255 | 0.8298 | 0.7776 | | 0.2602 | 3.0 | 690 | 0.9110 | 0.7230 | 0.8306 | 0.7768 | | 0.1688 | 4.0 | 920 | 0.8640 | 0.7696 | 0.8489 | 0.8092 | | 0.0913 | 5.0 | 1150 | 0.9653 | 0.7721 | 0.8502 | 0.8112 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-base-finetuned-mrpc This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.9653 - Accuracy: 0.7721 - F1: 0.8502 - Combined Score: 0.8112 The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used: ```bash #!/usr/bin/bash python ../run_glue.py \\n --model_name_or_path google/fnet-base \\n --task_name mrpc \\n --do_train \\n --do_eval \\n --max_seq_length 512 \\n --per_device_train_batch_size 16 \\n --learning_rate 2e-5 \\n --num_train_epochs 5 \\n --output_dir fnet-base-finetuned-mrpc \\n --push_to_hub \\n --hub_strategy all_checkpoints \\n --logging_strategy epoch \\n --save_strategy epoch \\n --evaluation_strategy epoch \\n``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.544 | 1.0 | 230 | 0.5272 | 0.7328 | 0.8300 | 0.7814 | | 0.4034 | 2.0 | 460 | 0.6211 | 0.7255 | 0.8298 | 0.7776 | | 0.2602 | 3.0 | 690 | 0.9110 | 0.7230 | 0.8306 | 0.7768 | | 0.1688 | 4.0 | 920 | 0.8640 | 0.7696 | 0.8489 | 0.8092 | | 0.0913 | 5.0 | 1150 | 0.9653 | 0.7721 | 0.8502 | 0.8112 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
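The card documents the fine-tuning command but no inference example; a minimal, hedged sketch for the MRPC sentence-pair task follows. The label mapping (index 1 = paraphrase) follows the GLUE MRPC convention and is an assumption about this checkpoint's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gchhablani/fnet-base-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence1 = "The company halted its operations in March."
sentence2 = "Operations were suspended by the company in March."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over (not_equivalent, equivalent)
```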
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer", "fnet-bert-base-comparison"], "model-index": [{"name": "fnet-base-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7720588235294118, "name": "Accuracy"}, {"type": "f1", "value": 0.8502415458937198, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,672
SouhilOuchene/ACPRECBERT_Part2_islem
SouhilOuchene
text-classification
[ "sentence-transformers", "pytorch", "camembert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-06-21T16:21:02Z
2023-06-21T16:21:46+00:00
23
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # SouhilOuchene/ACPRECBERT_Part2_islem This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("SouhilOuchene/ACPRECBERT_Part2_islem") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# SouhilOuchene/ACPRECBERT_Part2_islem This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("SouhilOuchene/ACPRECBERT_Part2_islem") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
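The inference snippet above does not show the two training steps the card describes; a hedged training sketch follows, assuming the pre-1.0 `setfit` API (`SetFitTrainer` with `loss_class`). The base encoder, toy dataset, and hyperparameters are illustrative assumptions, not the settings used for this checkpoint.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative few-shot dataset of (text, label) pairs.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the encoder
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()  # fits the classification head (step 2) after the contrastive phase
print(model(["a great superhero film"]))
```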
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
43,673
hopkins/mbart-finetuned-eng-kor-13
hopkins
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-02T20:35:37Z
2023-07-02T20:54:01+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: mbart-finetuned-eng-kor-13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-kor-13 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9899 - Bleu: 6.9629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-kor-13 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9899 - Bleu: 6.9629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
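The card gives training details but no usage; a minimal, hedged translation sketch follows, using the standard mBART-50 convention of setting `src_lang` and forcing the target-language BOS token. It assumes the repository ships the tokenizer files alongside the model weights.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-kor-13"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

text = "The weather is beautiful today."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # decode into Korean
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```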
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-kor-13", "results": []}]}
task
[ "TRANSLATION" ]
43,674
ThuyNT03/distilbert-base-uncased-finetuned-emotion
ThuyNT03
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-03T19:29:12Z
2023-08-03T19:49:25+00:00
13
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9235 name: Accuracy - type: f1 value: 0.9234271116279907 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2146 - Accuracy: 0.9235 - F1: 0.9234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8024 | 1.0 | 250 | 0.3062 | 0.9085 | 0.9078 | | 0.2396 | 2.0 | 500 | 0.2146 | 0.9235 | 0.9234 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2146 - Accuracy: 0.9235 - F1: 0.9234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8024 | 1.0 | 250 | 0.3062 | 0.9085 | 0.9078 | | 0.2396 | 2.0 | 500 | 0.2146 | 0.9235 | 0.9234 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
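As with the other auto-generated cards, no usage example is given; here is a minimal, hedged sketch with the `transformers` pipeline. The six emotion labels (sadness, joy, love, anger, fear, surprise) come from the `emotion` dataset and are assumed to be stored in the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ThuyNT03/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label, not just the top one
)
print(classifier("I can't wait to see my friends this weekend!"))
```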
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9235, "name": "Accuracy"}, {"type": "f1", "value": 0.9234271116279907, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,675
amperie/quora-mpnet
amperie
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-30T20:38:45Z
2024-10-30T20:41:35+00:00
8
0
--- datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with the true pairs (a minimal sketch of this objective is shown after the training data table below). #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
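The in-batch contrastive objective described under "Fine-tuning" above can be illustrated with a short, self-contained sketch. This is not the training script used for this checkpoint (that is `train_script.py` in the repository); the scale factor and the random embeddings standing in for encoder outputs are illustrative assumptions.

```python
# Minimal sketch of the described objective: cosine similarity between every
# (anchor, candidate) pair in the batch, with cross-entropy against the true pairing.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = scale * anchor_emb @ positive_emb.T   # (batch, batch) scaled cosine similarities
    labels = torch.arange(scores.size(0))          # the true pair for row i sits on the diagonal
    return F.cross_entropy(scores, labels)

# Toy usage with random tensors in place of real sentence embeddings.
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```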
null
Non_BioNLP
# all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). 
We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | 
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
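The fine-tuning objective described above (cosine similarity over all in-batch pairs, then cross-entropy against the true pairs) can be illustrated with a short sketch. This is not the original code from `train_script.py`; the scale factor and the tensor names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """Cross-entropy over in-batch cosine similarities; `scale` is an assumed temperature."""
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)      # (batch, dim)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)  # (batch, dim)
    # Cosine similarity between every sentence and every candidate in the batch
    scores = anchor_emb @ positive_emb.T * scale          # (batch, batch)
    # The true pair for row i sits on the diagonal; all other columns act as negatives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```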
{"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
task
[ "QUESTION_ANSWERING" ]
43,676
Helsinki-NLP/opus-mt-rnd-sv
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "rnd", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:03:10+00:00
31
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-rnd-sv * source languages: rnd * target languages: sv * OPUS readme: [rnd-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.rnd.sv | 21.2 | 0.387 |
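The card above links the original OPUS-MT weights but gives no Transformers usage snippet. Below is a minimal sketch for running the converted checkpoint; it assumes the standard MarianMT loading path and that the Hub repository id matches this record (`Helsinki-NLP/opus-mt-rnd-sv`). The placeholder source sentence must be replaced with real Ruund (rnd) text.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-rnd-sv"  # repository id from this record
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["<replace with a sentence in rnd>"]  # placeholder input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))  # Swedish output
```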
null
Non_BioNLP
### opus-mt-rnd-sv * source languages: rnd * target languages: sv * OPUS readme: [rnd-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.rnd.sv | 21.2 | 0.387 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
43,677
cuwfnguyen/ecomerce_intent_classification
cuwfnguyen
text-classification
[ "tensorboard", "safetensors", "roberta", "autotrain", "text-classification", "base_model:vinai/phobert-base", "base_model:finetune:vinai/phobert-base", "region:us" ]
2024-09-04T02:07:36Z
2024-09-04T06:33:53+00:00
5
0
--- base_model: vinai/phobert-base tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.890067458152771 f1_macro: 0.7178212262554823 f1_micro: 0.76 f1_weighted: 0.7341365415630122 precision_macro: 0.7468954248366012 precision_micro: 0.76 precision_weighted: 0.766861111111111 recall_macro: 0.7448179271708684 recall_micro: 0.76 recall_weighted: 0.76 accuracy: 0.76
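The card reports validation metrics but no inference example. Below is a hedged sketch using the standard sequence-classification autoclasses; the repository id is taken from this record, the input string is a placeholder, and note that PhoBERT-based models typically expect word-segmented Vietnamese input, which this sketch does not handle.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "cuwfnguyen/ecomerce_intent_classification"  # repository id from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "<word-segmented Vietnamese query>"  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred_id, pred_id))  # predicted intent label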
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.890067458152771 f1_macro: 0.7178212262554823 f1_micro: 0.76 f1_weighted: 0.7341365415630122 precision_macro: 0.7468954248366012 precision_micro: 0.76 precision_weighted: 0.766861111111111 recall_macro: 0.7448179271708684 recall_micro: 0.76 recall_weighted: 0.76 accuracy: 0.76
{"base_model": "vinai/phobert-base", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
43,678
gsarti/it5-large-question-answering
gsarti
text2text-generation
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "squad_it", "text2text-question-answering", "it", "dataset:squad_it", "arxiv:2203.03759", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-03-09T07:57:53+00:00
20
5
--- datasets: - squad_it language: - it license: apache-2.0 metrics: - f1 - exact-match tags: - italian - sequence-to-sequence - squad_it - text2text-question-answering - text2text-generation widget: - text: 'In seguito all'' evento di estinzione del Cretaceo-Paleogene, l'' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l'' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all'' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l'' evoluzione di un'' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?' - text: 'L'' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all'' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell'' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?' - text: 'Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole "abc" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all'' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell'' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell'' iconico logo ABC?' - text: 'La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l'' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all'' ossigeno all'' interno del cloroplasto, aumentando l'' efficienza della fotosintesi. 
Questi meccanismi sono chiamati meccanismi di concentrazione dell'' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?' co2_eq_emissions: emissions: 51g" source: Google Cloud Platform Carbon Footprint training_type: fine-tuning geographical_location: Eemshaven, Netherlands, Europe hardware_used: 1 TPU v3-8 VM thumbnail: https://gsarti.com/publication/it5/featured.png model-index: - name: it5-large-question-answering results: - task: type: question-answering name: Question Answering dataset: name: SQuAD-IT type: squad_it metrics: - type: f1 value: 0.78 name: Test F1 - type: exact-match value: 0.691 name: Test Exact Match --- # IT5 Large for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline qa = pipeline("text2text-generation", model='it5/it5-large-question-answering') qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?") >>> [{"generated_text": "ultimo massimo glaciale"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-question-answering") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-question-answering") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
null
Non_BioNLP
# IT5 Large for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline qa = pipeline("text2text-generation", model='it5/it5-large-question-answering') qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?") >>> [{"generated_text": "ultimo massimo glaciale"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-question-answering") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-question-answering") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
{"datasets": ["squad_it"], "language": ["it"], "license": "apache-2.0", "metrics": ["f1", "exact-match"], "tags": ["italian", "sequence-to-sequence", "squad_it", "text2text-question-answering", "text2text-generation"], "widget": [{"text": "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"}, {"text": "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"}, {"text": "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"}, {"text": "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. 
Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"}], "co2_eq_emissions": {"emissions": "51g\"", "source": "Google Cloud Platform Carbon Footprint", "training_type": "fine-tuning", "geographical_location": "Eemshaven, Netherlands, Europe", "hardware_used": "1 TPU v3-8 VM"}, "thumbnail": "https://gsarti.com/publication/it5/featured.png", "model-index": [{"name": "it5-large-question-answering", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "SQuAD-IT", "type": "squad_it"}, "metrics": [{"type": "f1", "value": 0.78, "name": "Test F1"}, {"type": "exact-match", "value": 0.691, "name": "Test Exact Match"}]}]}]}
task
[ "QUESTION_ANSWERING" ]
43,679
gokuls/hbertv1-Massive-intent_48
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-19T22:12:24Z
2023-06-19T22:21:18+00:00
10
0
--- datasets: - massive metrics: - accuracy tags: - generated_from_trainer model-index: - name: hbertv1-Massive-intent_48 results: - task: type: text-classification name: Text Classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - type: accuracy value: 0.8573536645351697 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-Massive-intent_48 This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8740 - Accuracy: 0.8574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4348 | 1.0 | 180 | 1.2038 | 0.6798 | | 1.0006 | 2.0 | 360 | 0.8063 | 0.7831 | | 0.6914 | 3.0 | 540 | 0.7823 | 0.7924 | | 0.5 | 4.0 | 720 | 0.8175 | 0.7959 | | 0.3877 | 5.0 | 900 | 0.7489 | 0.8239 | | 0.2981 | 6.0 | 1080 | 0.7043 | 0.8446 | | 0.2251 | 7.0 | 1260 | 0.7596 | 0.8372 | | 0.181 | 8.0 | 1440 | 0.8237 | 0.8357 | | 0.1367 | 9.0 | 1620 | 0.8323 | 0.8362 | | 0.0995 | 10.0 | 1800 | 0.8589 | 0.8396 | | 0.0726 | 11.0 | 1980 | 0.8476 | 0.8510 | | 0.0501 | 12.0 | 2160 | 0.8901 | 0.8534 | | 0.0338 | 13.0 | 2340 | 0.8992 | 0.8519 | | 0.022 | 14.0 | 2520 | 0.8740 | 0.8574 | | 0.0124 | 15.0 | 2700 | 0.8828 | 0.8554 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.0 - Tokenizers 0.13.3
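The hyperparameters listed above map roughly onto Hugging Face `TrainingArguments` as sketched below. This is a reconstruction for illustration only, not the original training script; the output directory name and any omitted settings (for example the Adam betas, which match the library defaults) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hbertv1-Massive-intent_48",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=33,
    num_train_epochs=15,
    lr_scheduler_type="linear",
)
```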
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-Massive-intent_48 This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8740 - Accuracy: 0.8574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4348 | 1.0 | 180 | 1.2038 | 0.6798 | | 1.0006 | 2.0 | 360 | 0.8063 | 0.7831 | | 0.6914 | 3.0 | 540 | 0.7823 | 0.7924 | | 0.5 | 4.0 | 720 | 0.8175 | 0.7959 | | 0.3877 | 5.0 | 900 | 0.7489 | 0.8239 | | 0.2981 | 6.0 | 1080 | 0.7043 | 0.8446 | | 0.2251 | 7.0 | 1260 | 0.7596 | 0.8372 | | 0.181 | 8.0 | 1440 | 0.8237 | 0.8357 | | 0.1367 | 9.0 | 1620 | 0.8323 | 0.8362 | | 0.0995 | 10.0 | 1800 | 0.8589 | 0.8396 | | 0.0726 | 11.0 | 1980 | 0.8476 | 0.8510 | | 0.0501 | 12.0 | 2160 | 0.8901 | 0.8534 | | 0.0338 | 13.0 | 2340 | 0.8992 | 0.8519 | | 0.022 | 14.0 | 2520 | 0.8740 | 0.8574 | | 0.0124 | 15.0 | 2700 | 0.8828 | 0.8554 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.0 - Tokenizers 0.13.3
{"datasets": ["massive"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hbertv1-Massive-intent_48", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "split": "validation", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.8573536645351697, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,680
Realgon/N_bert_imdb_padding30model
Realgon
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-13T13:16:38Z
2023-12-13T15:40:44+00:00
7
0
--- base_model: bert-base-uncased datasets: - imdb license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: N_bert_imdb_padding30model results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - type: accuracy value: 0.93944 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_imdb_padding30model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6907 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2178 | 1.0 | 1563 | 0.2389 | 0.9234 | | 0.1617 | 2.0 | 3126 | 0.2474 | 0.9303 | | 0.0872 | 3.0 | 4689 | 0.3029 | 0.9283 | | 0.065 | 4.0 | 6252 | 0.3493 | 0.9316 | | 0.0348 | 5.0 | 7815 | 0.3685 | 0.9365 | | 0.0311 | 6.0 | 9378 | 0.4913 | 0.9310 | | 0.0205 | 7.0 | 10941 | 0.4485 | 0.9362 | | 0.0177 | 8.0 | 12504 | 0.4903 | 0.9354 | | 0.0147 | 9.0 | 14067 | 0.5786 | 0.9322 | | 0.0119 | 10.0 | 15630 | 0.5245 | 0.9356 | | 0.01 | 11.0 | 17193 | 0.5730 | 0.9364 | | 0.0091 | 12.0 | 18756 | 0.5730 | 0.9383 | | 0.006 | 13.0 | 20319 | 0.5596 | 0.9386 | | 0.004 | 14.0 | 21882 | 0.6760 | 0.9354 | | 0.0018 | 15.0 | 23445 | 0.5813 | 0.9402 | | 0.0018 | 16.0 | 25008 | 0.6526 | 0.9378 | | 0.0035 | 17.0 | 26571 | 0.6453 | 0.9384 | | 0.0002 | 18.0 | 28134 | 0.6714 | 0.9392 | | 0.0001 | 19.0 | 29697 | 0.6893 | 0.9397 | | 0.0 | 20.0 | 31260 | 0.6907 | 0.9394 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_bert_imdb_padding30model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6907 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2178 | 1.0 | 1563 | 0.2389 | 0.9234 | | 0.1617 | 2.0 | 3126 | 0.2474 | 0.9303 | | 0.0872 | 3.0 | 4689 | 0.3029 | 0.9283 | | 0.065 | 4.0 | 6252 | 0.3493 | 0.9316 | | 0.0348 | 5.0 | 7815 | 0.3685 | 0.9365 | | 0.0311 | 6.0 | 9378 | 0.4913 | 0.9310 | | 0.0205 | 7.0 | 10941 | 0.4485 | 0.9362 | | 0.0177 | 8.0 | 12504 | 0.4903 | 0.9354 | | 0.0147 | 9.0 | 14067 | 0.5786 | 0.9322 | | 0.0119 | 10.0 | 15630 | 0.5245 | 0.9356 | | 0.01 | 11.0 | 17193 | 0.5730 | 0.9364 | | 0.0091 | 12.0 | 18756 | 0.5730 | 0.9383 | | 0.006 | 13.0 | 20319 | 0.5596 | 0.9386 | | 0.004 | 14.0 | 21882 | 0.6760 | 0.9354 | | 0.0018 | 15.0 | 23445 | 0.5813 | 0.9402 | | 0.0018 | 16.0 | 25008 | 0.6526 | 0.9378 | | 0.0035 | 17.0 | 26571 | 0.6453 | 0.9384 | | 0.0002 | 18.0 | 28134 | 0.6714 | 0.9392 | | 0.0001 | 19.0 | 29697 | 0.6893 | 0.9397 | | 0.0 | 20.0 | 31260 | 0.6907 | 0.9394 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
{"base_model": "bert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "N_bert_imdb_padding30model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93944, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,681
KoontzP/Finetuned-sentiment-model
KoontzP
text-classification
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-17T06:34:52Z
2024-04-17T06:39:51+00:00
4
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: Finetuned-sentiment-model results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9315 name: Accuracy - type: f1 value: 0.9315994122530189 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1792 - Accuracy: 0.9315 - F1: 0.9316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5311 | 0.831 | 0.8081 | | No log | 2.0 | 250 | 0.2390 | 0.9215 | 0.9214 | | No log | 3.0 | 375 | 0.1895 | 0.932 | 0.9319 | | 0.4559 | 4.0 | 500 | 0.1792 | 0.9315 | 0.9316 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1792 - Accuracy: 0.9315 - F1: 0.9316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 125 | 0.5311 | 0.831 | 0.8081 | | No log | 2.0 | 250 | 0.2390 | 0.9215 | 0.9214 | | No log | 3.0 | 375 | 0.1895 | 0.932 | 0.9319 | | 0.4559 | 4.0 | 500 | 0.1792 | 0.9315 | 0.9316 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Finetuned-sentiment-model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9315, "name": "Accuracy"}, {"type": "f1", "value": 0.9315994122530189, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
43,682
hopkins/eng-kor-wsample.43
hopkins
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-04T21:21:23Z
2023-07-04T21:38:55+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: eng-kor-wsample.43 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-wsample.43 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9892 - Bleu: 6.9989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
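The card does not show how to run the fine-tuned checkpoint. A minimal sketch is given below; it assumes the checkpoint kept the mBART-50 tokenizer and language codes of its base model, so English input uses `en_XX` and Korean generation is forced with `ko_KR`, and the example sentence is only illustrative.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo = "hopkins/eng-kor-wsample.43"  # repository id from this record
tokenizer = MBart50TokenizerFast.from_pretrained(repo)
model = MBartForConditionalGeneration.from_pretrained(repo)

tokenizer.src_lang = "en_XX"  # English source
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # force Korean output
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```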
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-wsample.43 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9892 - Bleu: 6.9989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-kor-wsample.43", "results": []}]}
task
[ "TRANSLATION" ]
43,683
adityavelusamy/Questy-v1
adityavelusamy
summarization
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:adityavelusamy/autotrain-data-f", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-05-27T18:48:37Z
2023-05-27T18:56:44+00:00
21
0
--- datasets: - adityavelusamy/autotrain-data-f language: - unk tags: - autotrain - summarization widget: - text: I love AutoTrain co2_eq_emissions: emissions: 0.5793683469903973 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 62230135023 - CO2 Emissions (in grams): 0.5794 ## Validation Metrics - Loss: 0.883 - Rouge1: 52.493 - Rouge2: 33.950 - RougeL: 47.184 - RougeLsum: 47.225 - Gen Len: 15.493 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/adityavelusamy/autotrain-f-62230135023 ```
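Besides the cURL call above, the checkpoint can also be loaded locally. A hedged sketch with the Transformers summarization pipeline follows; it assumes the weights are served under this record's repository id (`adityavelusamy/Questy-v1`) rather than the AutoTrain project id used in the cURL example.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="adityavelusamy/Questy-v1")
result = summarizer("I love AutoTrain", max_length=30, min_length=5)
print(result[0]["summary_text"])
```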
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 62230135023 - CO2 Emissions (in grams): 0.5794 ## Validation Metrics - Loss: 0.883 - Rouge1: 52.493 - Rouge2: 33.950 - RougeL: 47.184 - RougeLsum: 47.225 - Gen Len: 15.493 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/adityavelusamy/autotrain-f-62230135023 ```
{"datasets": ["adityavelusamy/autotrain-data-f"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain"}], "co2_eq_emissions": {"emissions": 0.5793683469903973}}
task
[ "SUMMARIZATION" ]
43,684
MultiBertGunjanPatrick/multiberts-seed-18
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T04:53:54+00:00
102
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts --- # MultiBERTs Seed 18 (uncased) Seed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-18') model = BertModel.from_pretrained("multiberts-seed-18") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
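The 15% masking with the 80/10/10 replacement scheme described above can be sketched as follows. This mirrors the behaviour of the Hugging Face `DataCollatorForLanguageModeling` rather than the original TensorFlow pretraining code, and it ignores special-token handling for brevity.

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Illustrative 15% masking with the 80/10/10 replacement scheme described above."""
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # loss is only computed on masked positions

    # 80% of masked tokens are replaced by [MASK]
    to_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[to_mask] = tokenizer.mask_token_id

    # half of the remaining 20% (i.e. 10%) are replaced by a random token
    to_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~to_mask
    input_ids[to_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[to_random]

    # the last 10% keep the original token
    return input_ids, labels
```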
null
Non_BioNLP
# MultiBERTs Seed 18 (uncased) Seed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-18') model = BertModel.from_pretrained("multiberts-seed-18") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"]}
task
[ "QUESTION_ANSWERING" ]
43,685
Triangle104/EuroLLM-9B-Q6_K-GGUF
Triangle104
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi", "base_model:utter-project/EuroLLM-9B", "base_model:quantized:utter-project/EuroLLM-9B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-12-09T09:54:05Z
2024-12-09T09:55:49+00:00
13
0
--- base_model: utter-project/EuroLLM-9B language: - en - de - es - fr - it - pt - pl - nl - tr - sv - cs - el - hu - ro - fi - uk - sl - sk - da - lt - lv - et - bg - 'no' - ca - hr - ga - mt - gl - zh - ru - ko - ja - ar - hi library_name: transformers license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # Triangle104/EuroLLM-9B-Q6_K-GGUF This model was converted to GGUF format from [`utter-project/EuroLLM-9B`](https://huggingface.co/utter-project/EuroLLM-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-9B) for more details on the model. --- Model details: - This is the model card for EuroLLM-9B. You can also check the instruction tuned version: EuroLLM-9B-Instruct. Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université. Funded by: European Union. Model type: A 9B parameter multilingual transfomer LLM. Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian. License: Apache License 2.0. Model Details The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-9B is a 9B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-9B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation. Model Description EuroLLM uses a standard, dense Transformer architecture: We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance. We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster. We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks. We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length. For pre-training, we use 400 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. 
Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -c 2048
```
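
If you prefer calling the quantized checkpoint from Python rather than the CLI, the same GGUF file can be loaded with the llama-cpp-python bindings. The snippet below is a minimal sketch, assuming llama-cpp-python is installed and the `eurollm-9b-q6_k.gguf` file has already been downloaded locally (for example with `huggingface-cli download Triangle104/EuroLLM-9B-Q6_K-GGUF eurollm-9b-q6_k.gguf --local-dir .`); it is not part of the original conversion workflow.

```
# Minimal sketch (assumption): load the locally downloaded GGUF file with llama-cpp-python.
from llama_cpp import Llama

# n_ctx mirrors the -c 2048 context size used with llama-server above.
llm = Llama(model_path="./eurollm-9b-q6_k.gguf", n_ctx=2048)

# Same prompt as the CLI example above.
output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```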
null
Non_BioNLP
# Triangle104/EuroLLM-9B-Q6_K-GGUF

This model was converted to GGUF format from [`utter-project/EuroLLM-9B`](https://huggingface.co/utter-project/EuroLLM-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-9B) for more details on the model.

---

Model details:

This is the model card for EuroLLM-9B. You can also check the instruction-tuned version: EuroLLM-9B-Instruct.

- Developed by: Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- Funded by: European Union.
- Model type: A 9B parameter multilingual transformer LLM.
- Language(s) (NLP): Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- License: Apache License 2.0.

Model Details

The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-9B is a 9B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-9B-Instruct was further instruction tuned on EuroBlocks, an instruction-tuning dataset with a focus on general instruction-following and machine translation.

Model Description

EuroLLM uses a standard, dense Transformer architecture:

- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing the extension of the context length.

For pre-training, we use 400 Nvidia H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer and BF16 precision.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/EuroLLM-9B-Q6_K-GGUF --hf-file eurollm-9b-q6_k.gguf -c 2048
```
{"base_model": "utter-project/EuroLLM-9B", "language": ["en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "ga", "mt", "gl", "zh", "ru", "ko", "ja", "ar", "hi"], "library_name": "transformers", "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
task
[ "TRANSLATION" ]
43,686
interneuronai/az-mistral
interneuronai
null
[ "peft", "safetensors", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-bnb-4bit", "region:us" ]
2024-03-09T11:48:18Z
2024-03-09T18:14:22+00:00
0
0
---
base_model: unsloth/mistral-7b-bnb-4bit
library_name: peft
---

Model Details

- Original Model: unsloth/mistral-7b-bnb-4bit
- Fine-Tuned For: Azerbaijani language understanding and generation
- Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
- Fine-Tuning Method: Self-instruct method

This model is part of the ["project/Barbarossa"](https://github.com/Alas-Development-Center/project-barbarossa) initiative, aimed at enhancing natural language processing capabilities for the Azerbaijani language. By fine-tuning this model on the Azerbaijani translation of the Stanford Alpaca dataset using the self-instruct method, we've made significant strides in improving AI's understanding and generation of Azerbaijani text.

__Our primary objective with this model is to offer insights into the feasibility and outcomes of fine-tuning large language models (LLMs) for the Azerbaijani language. The fine-tuning process was undertaken with limited resources, providing valuable learnings rather than creating a model ready for production use. Therefore, we recommend treating this model as a reference or a guide to understanding the potential and challenges involved in fine-tuning LLMs for specific languages. It serves as a foundational step towards further research and development rather than a direct solution for production environments.__

This project is a proud product of the [Alas Development Center (ADC)](https://az.linkedin.com/company/alas-development-center?trk=ppro_cprof). We are thrilled to offer these fine-tuned large language models to the public, free of charge.

How to use?

```
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "alasdevcenter/az-mistral"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)

instruction = "Təbiətin qorunması "

formatted_prompt = f"""Aşağıda daha çox kontekst təmin edən təlimat var. Sorğunu adekvat şəkildə tamamlayan cavab yazın.
### Təlimat:
{instruction}
### Cavab:
"""

result = pipe(formatted_prompt)
print(result[0]['generated_text'])
```
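
Because this repository is tagged as a PEFT adapter on top of the 4-bit `unsloth/mistral-7b-bnb-4bit` base model, an alternative is to load the base checkpoint and attach the adapter explicitly. The sketch below is an assumption-based illustration rather than the authors' documented workflow: it uses the `interneuronai/az-mistral` repository id from this record as the adapter id, and it requires `bitsandbytes` and `accelerate` to load the 4-bit base weights.

```
# Hedged sketch: attach the PEFT adapter to the 4-bit base model.
# Assumes this repo hosts adapter weights (as its peft tag suggests);
# requires bitsandbytes and accelerate for the 4-bit base checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-bnb-4bit"   # base model named in the card
adapter_id = "interneuronai/az-mistral"   # adapter repo id from this record (assumption)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Wrap the base model with the fine-tuned Azerbaijani adapter weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Same Azerbaijani Alpaca-style prompt format as the usage example above.
prompt = """Aşağıda daha çox kontekst təmin edən təlimat var. Sorğunu adekvat şəkildə tamamlayan cavab yazın.
### Təlimat:
Təbiətin qorunması 
### Cavab:
"""
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```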
null
Non_BioNLP
Model Details

- Original Model: unsloth/mistral-7b-bnb-4bit
- Fine-Tuned For: Azerbaijani language understanding and generation
- Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
- Fine-Tuning Method: Self-instruct method

This model is part of the ["project/Barbarossa"](https://github.com/Alas-Development-Center/project-barbarossa) initiative, aimed at enhancing natural language processing capabilities for the Azerbaijani language. By fine-tuning this model on the Azerbaijani translation of the Stanford Alpaca dataset using the self-instruct method, we've made significant strides in improving AI's understanding and generation of Azerbaijani text.

__Our primary objective with this model is to offer insights into the feasibility and outcomes of fine-tuning large language models (LLMs) for the Azerbaijani language. The fine-tuning process was undertaken with limited resources, providing valuable learnings rather than creating a model ready for production use. Therefore, we recommend treating this model as a reference or a guide to understanding the potential and challenges involved in fine-tuning LLMs for specific languages. It serves as a foundational step towards further research and development rather than a direct solution for production environments.__

This project is a proud product of the [Alas Development Center (ADC)](https://az.linkedin.com/company/alas-development-center?trk=ppro_cprof). We are thrilled to offer these fine-tuned large language models to the public, free of charge.

How to use?

```
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "alasdevcenter/az-mistral"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)

instruction = "Təbiətin qorunması "

formatted_prompt = f"""Aşağıda daha çox kontekst təmin edən təlimat var. Sorğunu adekvat şəkildə tamamlayan cavab yazın.
### Təlimat:
{instruction}
### Cavab:
"""

result = pipe(formatted_prompt)
print(result[0]['generated_text'])
```
{"base_model": "unsloth/mistral-7b-bnb-4bit", "library_name": "peft"}
task
[ "TRANSLATION" ]
43,687