Column schema (dtype and observed value range per column):

| Column | Dtype | Min | Max |
|---|---|---|---|
| id | stringlengths | 6 | 113 |
| author | stringlengths | 2 | 36 |
| task_category | stringclasses | 42 values | |
| tags | listlengths | 1 | 4.05k |
| created_time | timestamp[ns, tz=UTC]date | 2022-03-02 23:29:04 | 2025-04-10 08:38:38 |
| last_modified | stringdate | 2020-05-14 13:13:12 | 2025-04-19 04:15:39 |
| downloads | int64 | 0 | 118M |
| likes | int64 | 0 | 4.86k |
| README | stringlengths | 30 | 1.01M |
| matched_bigbio_names | listlengths | 1 | 8 |
| is_bionlp | stringclasses | 3 values | |
| model_cards | stringlengths | 0 | 1M |
| metadata | stringlengths | 2 | 698k |
| source | stringclasses | 2 values | |
| matched_task | listlengths | 1 | 10 |
| __index_level_0__ | int64 | 0 | 46.9k |
garinee/marian-finetuned-kde4-en-to-mr
garinee
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:tatoeba", "base_model:Helsinki-NLP/opus-mt-en-mr", "base_model:finetune:Helsinki-NLP/opus-mt-en-mr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-21T21:25:29Z
2024-05-21T21:40:36+00:00
6
0
--- base_model: Helsinki-NLP/opus-mt-en-mr datasets: - tatoeba license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-mr results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: tatoeba type: tatoeba config: en-mr split: train args: en-mr metrics: - type: bleu value: 30.588452577893687 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-mr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mr](https://huggingface.co/Helsinki-NLP/opus-mt-en-mr) on the tatoeba dataset. It achieves the following results on the evaluation set: - Loss: 1.5717 - Bleu: 30.5885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
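The card above lists hyperparameters and metrics but no usage snippet. A minimal inference sketch, assuming the repository id named in the card is public on the Hub and that the standard `transformers` translation pipeline applies to this MarianMT checkpoint; the input sentence is purely illustrative.

```python
from transformers import pipeline

# Load the fine-tuned English-to-Marathi Marian checkpoint named in the card.
translator = pipeline("translation", model="garinee/marian-finetuned-kde4-en-to-mr")

# Illustrative input; any English sentence is handled the same way.
print(translator("Open the file in a new window."))
# -> [{'translation_text': '...'}]  (Marathi output)
```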
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-mr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mr](https://huggingface.co/Helsinki-NLP/opus-mt-en-mr) on the tatoeba dataset. It achieves the following results on the evaluation set: - Loss: 1.5717 - Bleu: 30.5885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "Helsinki-NLP/opus-mt-en-mr", "datasets": ["tatoeba"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-mr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "tatoeba", "type": "tatoeba", "config": "en-mr", "split": "train", "args": "en-mr"}, "metrics": [{"type": "bleu", "value": 30.588452577893687, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
45,580
tinh2406/ft-t5-small-on-opus100
tinh2406
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:adapter:google-t5/t5-small", "license:openrail", "region:us" ]
2025-02-06T16:54:09Z
2025-02-09T18:23:49+00:00
10
0
--- base_model: t5-small library_name: peft license: openrail tags: - generated_from_trainer model-index: - name: ft-t5-small-on-opus100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-t5-small-on-opus100 This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on the opus100 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 2.15.0 - Tokenizers 0.20.3
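Because the card above marks this repository as a PEFT adapter on a `t5-small` base (per the tags and `library_name: peft`), inference requires loading the base model first and attaching the adapter. A hedged sketch follows; the prompt format is an assumption, since the card does not document the expected input (the prose only mentions opus100 and an English-Vietnamese parent model).

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model named in the card's tags, then attach the PEFT adapter weights.
base = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = PeftModel.from_pretrained(base, "tinh2406/ft-t5-small-on-opus100")

# Illustrative prompt; the required prefix (if any) is not documented in the card.
inputs = tokenizer("translate English to Vietnamese: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```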
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-t5-small-on-opus100 This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on the opus100 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 2.15.0 - Tokenizers 0.20.3
{"base_model": "t5-small", "library_name": "peft", "license": "openrail", "tags": ["generated_from_trainer"], "model-index": [{"name": "ft-t5-small-on-opus100", "results": []}]}
task
[ "TRANSLATION" ]
45,581
UWECProgrammer/setfit-model-two
UWECProgrammer
text-classification
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-05T01:48:17Z
2023-08-05T01:50:16+00:00
13
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # UWECProgrammer/setfit-model-two This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("UWECProgrammer/setfit-model-two") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
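The card above describes the two SetFit training steps (contrastive fine-tuning of a Sentence Transformer, then fitting a classification head) but only shows inference. A sketch of that training loop follows, using the pre-1.0 `SetFitTrainer` API; the base checkpoint, dataset, and hyperparameters here are illustrative assumptions, not the values used to train this particular model.

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot setup: a handful of labeled SST-2 examples.
dataset = load_dataset("sst2")
train_dataset = dataset["train"].shuffle(seed=42).select(range(16))
eval_dataset = dataset["validation"]

# Any Sentence Transformer checkpoint can serve as the model body.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss_class=CosineSimilarityLoss,  # contrastive loss for the embedding fine-tuning step
    metric="accuracy",
    batch_size=16,
    num_iterations=20,  # text pairs generated per example for contrastive learning
    num_epochs=1,       # epochs of embedding fine-tuning; the head is fitted afterwards
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
print(trainer.evaluate())
```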
null
Non_BioNLP
# UWECProgrammer/setfit-model-two This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("UWECProgrammer/setfit-model-two") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
45,582
JeanAbbiateci/autotrain-dqwq9-raj8z
JeanAbbiateci
text-classification
[ "tensorboard", "safetensors", "mpnet", "autotrain", "text-classification", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "region:us" ]
2024-11-29T00:30:39Z
2024-11-29T01:25:28+00:00
6
0
--- base_model: sentence-transformers/all-mpnet-base-v2 tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.3196659982204437 f1: 0.9 precision: 0.8319327731092437 recall: 0.9801980198019802 auc: 0.955196775959003 accuracy: 0.8902743142144638
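The card above reports validation metrics but no usage snippet. A minimal inference sketch, assuming the repository exposes a standard sequence-classification head and tokenizer; the label names are whatever AutoTrain assigned and are not listed in the card.

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned MPNet checkpoint named in the card.
classifier = pipeline("text-classification", model="JeanAbbiateci/autotrain-dqwq9-raj8z")

# The widget example from the card's metadata.
print(classifier("I love AutoTrain"))
# -> [{'label': '...', 'score': ...}]
```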
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.3196659982204437 f1: 0.9 precision: 0.8319327731092437 recall: 0.9801980198019802 auc: 0.955196775959003 accuracy: 0.8902743142144638
{"base_model": "sentence-transformers/all-mpnet-base-v2", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
45,583
RichardErkhov/knkarthick_-_MEETING_SUMMARY-8bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "bart", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
2024-05-09T19:01:43Z
2024-05-09T19:02:32+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MEETING_SUMMARY - bnb 8bits - Model creator: https://huggingface.co/knkarthick/ - Original model: https://huggingface.co/knkarthick/MEETING_SUMMARY/ Original model description: --- language: en license: apache-2.0 tags: - bart - seq2seq - summarization datasets: - cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI metrics: - rouge widget: - text: 'Hi, I''m David and I''m supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That''s about it, didn''t get anything else. Did you get the same thing? Cool. There''s too much gear. Okay. Can''t draw. Um. Yeah. Um, well anyway, I don''t know, it''s just the first animal I can think off the top of my head. Um. Yes. Big reason is ''cause I''m allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they''re quite harmless and mild and interesting. Tail''s a bit big, I think. It''s an after dinner dog then. Hmm. It does make sense from maybe the design point of view ''cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it''s usually quite small, or when you want it right, it slipped behind the couch or it''s kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there''s a cap there, so um depends on how much you can cram into that price. Um. I think that that''s the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we''re gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I''m Laura and I''m the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we''re designing a new remote control and um Oh I have to record who''s here actually. So that''s David, Andrew and Craig, isn''t it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it''s supposed to be original, trendy and user friendly. Um so that''s kind of our our brief, as it were. Um and so there are three different stages to the design. Um I''m not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we''re gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven''t got an awful lot to discuss. Ok oh we do we do. Don''t feel like you''re in a rush, anyway. Ach why not We might have to get you up again then. I don''t know what mine is. I''m gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don''t know what I''m gonna write about. Um. I was gonna choose a dog as well. But I''ll just draw a different kind of dog. M my favourite animal is my own dog at home. 
Um That doesn''t really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that''s very good of you. Uh. Um he''s a mixture of uh various things. Um and what do I like about him, um That''s just to suggest that his tail wags. Um he''s very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he''s quite quite wee as well so you know he can doesn''t take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he''s had his dinner and um he''ll just all of a sudden just get up and start chasing his tail ''round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we''re gonna be selling this remote control for twenty five Euro, um and we''re aiming to make fifty million Euro. Um so we''re gonna be selling this on an international scale. And uh we don''t want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That''s a good question. I imagine it probably is our sale actually because it''s probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don''t know, I mean do you think the fact that it''s going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it''s depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that''s um that''s about like eighteen pounds or something, isn''t it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I''ve never bought a remote control, so I don''t know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn''t it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We''re a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that''s a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I''d wel we''re gonna have to wrap up pretty quickly in the next couple of minutes. Um I''ll just check we''ve nothing else. Okay. Um so anything else anybody wants to add about what they don''t like about remote controls they''ve used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. 
W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we''d want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that''s up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting''s gonna be in thirty minutes. So that''s about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you''re gonna be working on you know the actual working design of it so y you know what you''re doing there. Um for user interface, technical functions, I guess that''s you know like what we''ve been talking about, what it''ll actually do. Um and uh marketing executive, you''ll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you''ll all get instructions emailed to you, I guess. Um. Yeah, so it''s th the functional design stage is next, I guess. And uh and that''s the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly ''cause this we''re supposed to finish now. Um I guess that''s up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we''ll that''s that''s the end of the meeting, then. Um. So, uh thank you all for coming. Um I''m Craig and I''m User Interface. Yeah. Well, my favourite animal would be a monkey. Then they''re small cute and furry, and uh when planet of the apes becomes real, I''m gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I''m Andrew and I''m uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that''s that''s it. Yeah. I will go. That''s fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family''s beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it''s his own cha tail he''s chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. ''Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I''m wondering if there''s um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don''t know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. 
I''m thinking the price might might appeal to a certain market in one region, whereas in another it''ll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I''m making some kind of assumptions about what what information we''re given here, thinking, ''kay trendy probably means something other than just basic, something other than just standard. Um so I''m wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I''d say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don''t think of remote controls as somethin something people consciously assess in their purchasing habits. It''s just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They''re gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I''ve I''ve combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it''s sort of ironic that that then they''re in there um you know, the sound and everything it''s just one system. But each one''s got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That''s just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it''s better, but actually it''s still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. ''Cause it could b it could it could be that f it could be that functionally that doesn''t make it any better, but that just the appeal of of not having You know, these days there''s a r pe things in people''s homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we''re all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? 
Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don''t know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright.' model-index: - name: MEETING_SUMMARY results: - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: samsum type: samsum metrics: - type: rouge-1 value: 53.8795 name: Validation ROGUE-1 - type: rouge-2 value: 28.4975 name: Validation ROGUE-2 - type: rouge-L value: 44.1899 name: Validation ROGUE-L - type: rouge-Lsum value: 49.4863 name: Validation ROGUE-Lsum - type: gen-length value: 30.088 name: Validation ROGUE-Lsum - type: rouge-1 value: 53.2284 name: Test ROGUE-1 - type: rouge-2 value: 28.184 name: Test ROGUE-2 - type: rouge-L value: 44.122 name: Test ROGUE-L - type: rouge-Lsum value: 49.0301 name: Test ROGUE-Lsum - type: gen-length value: 29.9951 name: Test ROGUE-Lsum - task: type: summarization name: Summarization dataset: name: bazzhangz/sumdataset type: bazzhangz/sumdataset config: bazzhangz--sumdataset split: train metrics: - type: rouge value: 40.5544 name: ROUGE-1 verified: true - type: rouge value: 17.0751 name: ROUGE-2 verified: true - type: rouge value: 32.153 name: ROUGE-L verified: true - type: rouge value: 36.4277 name: ROUGE-LSUM verified: true - type: loss value: 2.116729736328125 name: loss verified: true - type: gen_len value: 42.1978 name: gen_len verified: true - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: xsum type: xsum metrics: - type: rouge-1 value: 35.9078 name: Validation ROGUE-1 - type: rouge-2 value: 14.2497 name: Validation ROGUE-2 - type: rouge-L value: 28.1421 name: Validation ROGUE-L - type: rouge-Lsum value: 28.9826 name: Validation ROGUE-Lsum - type: gen-length value: 32.0167 name: Validation ROGUE-Lsum - type: rouge-1 value: 36.0241 name: Test ROGUE-1 - type: rouge-2 value: 14.3715 name: Test ROGUE-2 - type: rouge-L value: 28.1968 name: Test ROGUE-L - type: rouge-Lsum value: 29.0527 name: Test ROGUE-Lsum - type: gen-length value: 31.9933 name: Test ROGUE-Lsum - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: dialogsum type: dialogsum metrics: - type: rouge-1 value: 39.8612 name: Validation ROGUE-1 - type: rouge-2 value: 16.6917 name: Validation ROGUE-2 - type: rouge-L value: 32.2718 name: Validation ROGUE-L - type: rouge-Lsum value: 35.8748 name: Validation ROGUE-Lsum - type: gen-length value: 41.726 name: Validation ROGUE-Lsum - type: rouge-1 value: 36.9608 name: Test ROGUE-1 - type: rouge-2 value: 14.3058 name: Test ROGUE-2 - type: rouge-L value: 29.3261 name: Test ROGUE-L - type: rouge-Lsum value: 32.9 name: Test ROGUE-Lsum - type: gen-length value: 43.086 name: Test ROGUE-Lsum - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 53.1878 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVkNTczYjFmYzBmMzczNWE0MGY4MDAyZWExOGNjZmY1Yzk2ZGM1MGNjZmFmYWUyZmIxZjdjOTk4OTc4OGJlMSIsInZlcnNpb24iOjF9.yyzPpGtESuZXy_lBESrboGxdGYB7I6jaIjquCYqliE2xdbGf5awDFpDUwlZHDuw6RD2mIZv1FC8PPs9lOHuSAg - type: rouge value: 28.1666 name: ROUGE-2 verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjAzOTdjNGYxNWMzYmFjYjRmMTcxYzI0MmNlNmM5Nzg2MzBlNDdmZWFkN2EwMDE2ZTZmYzc0Zjg0ZDc0M2IxNiIsInZlcnNpb24iOjF9.cPH6O50T6HekO227Xzha-EN_Jp7JS9fh5EP9I0tHxbpGptKtZOQC-NG68zfU2eJKlRSrmgaBYs8tjfTvpAgyDg - type: rouge value: 44.117 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmNmMzJkYjMxMjhlZDM4YmU3NmI1MDExNzhiYmVhMzEyZGJjNDJkNzczNGQwOTMwNzg2YjU1ZWQ4MDhiMzkxYiIsInZlcnNpb24iOjF9.lcEXK15UqZOdXnPjVqIhFd6o_PLROSIONTRFX5NbwanjEI_MWMLpDh_V0Kpnvs_W0sE6cXh2yoifSYNDA5W7Bw - type: rouge value: 49.0094 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYThkYjk4ZjMzYjI0OTAxNDJiZTU5MzE0YjI5MjEzYTYwNWEzMmU5NjU2ZjQ5NzJhMzkyNmVhNWFjZmM1MjAwMSIsInZlcnNpb24iOjF9.LTn6LpKuMO4Rv4NgsbPmtr2ewiKyoqAXlf6YJfM_6GKwVTKpnJxwx7gaaAtMb0jVlgieITMP11JmbeRfMEhgDg - type: loss value: 1.710614562034607 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjNjZmM0ZjkwYWYyMWIyMmFiMWI1ODBiYjRjNzVhM2JhN2NmNmM1ZDUwZWRjNDQxNzUwMWM4YjYxYTg1MWYwNyIsInZlcnNpb24iOjF9.hGXZhp9pe-HDJilXVvMCkqz-92YZvH6Qr7q9Z7fJkm8N9s0b4sl-4PwjQYJEOLEAhoRO2s-F5T3bmCYCaMiNBQ - type: gen_len value: 29.9951 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmY1NzZiMDAzNGJlNTg4Nzc0YzU1MTA3YTI3MzVmNGZkNWQ0ZDE4MGZlNGI1MzJmYzA3MjQ0MDZhMTcyYTk2NCIsInZlcnNpb24iOjF9.8dvMfY7Y-nw-K8NGgTXIGFMxaSUWQYBE1w3N5YYOn4iwnCe2ugo2qPIOxLY91q7CaAOMCSskFV3BDStQ4p0ZCg --- Model obtained by Fine Tuning 'facebook/bart-large-xsum' using AMI Meeting Corpus, SAMSUM Dataset, DIALOGSUM Dataset, XSUM Dataset! ## Usage # Example 1 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. ''' summarizer(text) ``` # Example 2 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). 
It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day. After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands. The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006. Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020. ''' summarizer(text) ``` # Example 3 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. 
Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. 
But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. 
So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. 
Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. ''' summarizer(text) ``` # Example 4 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = ''' Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents. David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. 
And so as you go down market with a lower price point, usually, you can’t actually support it. Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS? Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly. David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.” Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business. Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force? Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target. Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service? Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. 
But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS. David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person. Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market? Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case. Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams? David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential. Kristina : I think the smartest founders are making sure they have the runway to continue to exist. 
And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base. Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking? Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well. David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model. Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.” David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means. Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity. David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data. Kristina : Recently, there’s been a huge trend of payment as a revenue. 
It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites. Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market? Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points. David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis. Das : When you’re working with companies, why are they underpricing their products? David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher. Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible. 
Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging? David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.” Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well. Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make? David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition. Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. 
While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly. David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal. Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in? Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months. Das : How should you think about doing a free trial versus a paid trial? David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement? Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access. David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. 
Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience. Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on… David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion. Das : Awesome. Thank you, Kristina. Thank you, David. Kristina : Thanks so much for having us. This was fun. David : Yeah, a lot of fun, great topics, and our favorite thing to talk about. ''' summarizer(text) ```
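The usage examples above call the pipeline with its default generation settings. For long transcripts like these, it can help to bound the summary length and truncate over-long inputs explicitly. The sketch below is a minimal illustration using standard `transformers` pipeline keyword arguments (`max_length`, `min_length`, `do_sample`, `truncation`); the specific values shown are illustrative placeholders, not tuned recommendations from the model authors.

```python
from transformers import pipeline

# Load the same summarization checkpoint used in the examples above.
summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")

# Replace this with a long meeting transcript or dialogue.
text = "Your long meeting transcript goes here."

# max_length / min_length / do_sample are forwarded to generate();
# truncation=True tells the tokenizer to clip inputs that exceed the
# model's maximum input length instead of raising an error.
summary = summarizer(
    text,
    max_length=100,   # illustrative upper bound on summary tokens
    min_length=30,    # illustrative lower bound on summary tokens
    do_sample=False,  # deterministic decoding
    truncation=True,
)
print(summary[0]["summary_text"])
```

The pipeline returns a list of dictionaries, one per input, each with a `summary_text` field; passing a list of transcripts instead of a single string summarizes them in one batch.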
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MEETING_SUMMARY - bnb 8bits - Model creator: https://huggingface.co/knkarthick/ - Original model: https://huggingface.co/knkarthick/MEETING_SUMMARY/ Original model description: --- language: en license: apache-2.0 tags: - bart - seq2seq - summarization datasets: - cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI metrics: - rouge widget: - text: 'Hi, I''m David and I''m supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That''s about it, didn''t get anything else. Did you get the same thing? Cool. There''s too much gear. Okay. Can''t draw. Um. Yeah. Um, well anyway, I don''t know, it''s just the first animal I can think off the top of my head. Um. Yes. Big reason is ''cause I''m allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they''re quite harmless and mild and interesting. Tail''s a bit big, I think. It''s an after dinner dog then. Hmm. It does make sense from maybe the design point of view ''cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it''s usually quite small, or when you want it right, it slipped behind the couch or it''s kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there''s a cap there, so um depends on how much you can cram into that price. Um. I think that that''s the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we''re gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I''m Laura and I''m the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we''re designing a new remote control and um Oh I have to record who''s here actually. So that''s David, Andrew and Craig, isn''t it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it''s supposed to be original, trendy and user friendly. Um so that''s kind of our our brief, as it were. Um and so there are three different stages to the design. Um I''m not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we''re gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven''t got an awful lot to discuss. Ok oh we do we do. Don''t feel like you''re in a rush, anyway. Ach why not We might have to get you up again then. I don''t know what mine is. I''m gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don''t know what I''m gonna write about. Um. I was gonna choose a dog as well. But I''ll just draw a different kind of dog. M my favourite animal is my own dog at home. 
Um That doesn''t really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that''s very good of you. Uh. Um he''s a mixture of uh various things. Um and what do I like about him, um That''s just to suggest that his tail wags. Um he''s very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he''s quite quite wee as well so you know he can doesn''t take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he''s had his dinner and um he''ll just all of a sudden just get up and start chasing his tail ''round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we''re gonna be selling this remote control for twenty five Euro, um and we''re aiming to make fifty million Euro. Um so we''re gonna be selling this on an international scale. And uh we don''t want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That''s a good question. I imagine it probably is our sale actually because it''s probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don''t know, I mean do you think the fact that it''s going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it''s depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that''s um that''s about like eighteen pounds or something, isn''t it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I''ve never bought a remote control, so I don''t know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn''t it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We''re a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that''s a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I''d wel we''re gonna have to wrap up pretty quickly in the next couple of minutes. Um I''ll just check we''ve nothing else. Okay. Um so anything else anybody wants to add about what they don''t like about remote controls they''ve used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. 
W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we''d want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that''s up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting''s gonna be in thirty minutes. So that''s about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you''re gonna be working on you know the actual working design of it so y you know what you''re doing there. Um for user interface, technical functions, I guess that''s you know like what we''ve been talking about, what it''ll actually do. Um and uh marketing executive, you''ll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you''ll all get instructions emailed to you, I guess. Um. Yeah, so it''s th the functional design stage is next, I guess. And uh and that''s the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly ''cause this we''re supposed to finish now. Um I guess that''s up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we''ll that''s that''s the end of the meeting, then. Um. So, uh thank you all for coming. Um I''m Craig and I''m User Interface. Yeah. Well, my favourite animal would be a monkey. Then they''re small cute and furry, and uh when planet of the apes becomes real, I''m gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I''m Andrew and I''m uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that''s that''s it. Yeah. I will go. That''s fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family''s beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it''s his own cha tail he''s chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. ''Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I''m wondering if there''s um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don''t know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. 
I''m thinking the price might might appeal to a certain market in one region, whereas in another it''ll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I''m making some kind of assumptions about what what information we''re given here, thinking, ''kay trendy probably means something other than just basic, something other than just standard. Um so I''m wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I''d say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don''t think of remote controls as somethin something people consciously assess in their purchasing habits. It''s just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They''re gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I''ve I''ve combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it''s sort of ironic that that then they''re in there um you know, the sound and everything it''s just one system. But each one''s got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That''s just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it''s better, but actually it''s still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. ''Cause it could b it could it could be that f it could be that functionally that doesn''t make it any better, but that just the appeal of of not having You know, these days there''s a r pe things in people''s homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we''re all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? 
Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don''t know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright.' model-index: - name: MEETING_SUMMARY results: - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: samsum type: samsum metrics: - type: rouge-1 value: 53.8795 name: Validation ROGUE-1 - type: rouge-2 value: 28.4975 name: Validation ROGUE-2 - type: rouge-L value: 44.1899 name: Validation ROGUE-L - type: rouge-Lsum value: 49.4863 name: Validation ROGUE-Lsum - type: gen-length value: 30.088 name: Validation ROGUE-Lsum - type: rouge-1 value: 53.2284 name: Test ROGUE-1 - type: rouge-2 value: 28.184 name: Test ROGUE-2 - type: rouge-L value: 44.122 name: Test ROGUE-L - type: rouge-Lsum value: 49.0301 name: Test ROGUE-Lsum - type: gen-length value: 29.9951 name: Test ROGUE-Lsum - task: type: summarization name: Summarization dataset: name: bazzhangz/sumdataset type: bazzhangz/sumdataset config: bazzhangz--sumdataset split: train metrics: - type: rouge value: 40.5544 name: ROUGE-1 verified: true - type: rouge value: 17.0751 name: ROUGE-2 verified: true - type: rouge value: 32.153 name: ROUGE-L verified: true - type: rouge value: 36.4277 name: ROUGE-LSUM verified: true - type: loss value: 2.116729736328125 name: loss verified: true - type: gen_len value: 42.1978 name: gen_len verified: true - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: xsum type: xsum metrics: - type: rouge-1 value: 35.9078 name: Validation ROGUE-1 - type: rouge-2 value: 14.2497 name: Validation ROGUE-2 - type: rouge-L value: 28.1421 name: Validation ROGUE-L - type: rouge-Lsum value: 28.9826 name: Validation ROGUE-Lsum - type: gen-length value: 32.0167 name: Validation ROGUE-Lsum - type: rouge-1 value: 36.0241 name: Test ROGUE-1 - type: rouge-2 value: 14.3715 name: Test ROGUE-2 - type: rouge-L value: 28.1968 name: Test ROGUE-L - type: rouge-Lsum value: 29.0527 name: Test ROGUE-Lsum - type: gen-length value: 31.9933 name: Test ROGUE-Lsum - task: type: abstractive-text-summarization name: Abstractive Text Summarization dataset: name: dialogsum type: dialogsum metrics: - type: rouge-1 value: 39.8612 name: Validation ROGUE-1 - type: rouge-2 value: 16.6917 name: Validation ROGUE-2 - type: rouge-L value: 32.2718 name: Validation ROGUE-L - type: rouge-Lsum value: 35.8748 name: Validation ROGUE-Lsum - type: gen-length value: 41.726 name: Validation ROGUE-Lsum - type: rouge-1 value: 36.9608 name: Test ROGUE-1 - type: rouge-2 value: 14.3058 name: Test ROGUE-2 - type: rouge-L value: 29.3261 name: Test ROGUE-L - type: rouge-Lsum value: 32.9 name: Test ROGUE-Lsum - type: gen-length value: 43.086 name: Test ROGUE-Lsum - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 53.1878 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVkNTczYjFmYzBmMzczNWE0MGY4MDAyZWExOGNjZmY1Yzk2ZGM1MGNjZmFmYWUyZmIxZjdjOTk4OTc4OGJlMSIsInZlcnNpb24iOjF9.yyzPpGtESuZXy_lBESrboGxdGYB7I6jaIjquCYqliE2xdbGf5awDFpDUwlZHDuw6RD2mIZv1FC8PPs9lOHuSAg - type: rouge value: 28.1666 name: ROUGE-2 verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjAzOTdjNGYxNWMzYmFjYjRmMTcxYzI0MmNlNmM5Nzg2MzBlNDdmZWFkN2EwMDE2ZTZmYzc0Zjg0ZDc0M2IxNiIsInZlcnNpb24iOjF9.cPH6O50T6HekO227Xzha-EN_Jp7JS9fh5EP9I0tHxbpGptKtZOQC-NG68zfU2eJKlRSrmgaBYs8tjfTvpAgyDg - type: rouge value: 44.117 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmNmMzJkYjMxMjhlZDM4YmU3NmI1MDExNzhiYmVhMzEyZGJjNDJkNzczNGQwOTMwNzg2YjU1ZWQ4MDhiMzkxYiIsInZlcnNpb24iOjF9.lcEXK15UqZOdXnPjVqIhFd6o_PLROSIONTRFX5NbwanjEI_MWMLpDh_V0Kpnvs_W0sE6cXh2yoifSYNDA5W7Bw - type: rouge value: 49.0094 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYThkYjk4ZjMzYjI0OTAxNDJiZTU5MzE0YjI5MjEzYTYwNWEzMmU5NjU2ZjQ5NzJhMzkyNmVhNWFjZmM1MjAwMSIsInZlcnNpb24iOjF9.LTn6LpKuMO4Rv4NgsbPmtr2ewiKyoqAXlf6YJfM_6GKwVTKpnJxwx7gaaAtMb0jVlgieITMP11JmbeRfMEhgDg - type: loss value: 1.710614562034607 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjNjZmM0ZjkwYWYyMWIyMmFiMWI1ODBiYjRjNzVhM2JhN2NmNmM1ZDUwZWRjNDQxNzUwMWM4YjYxYTg1MWYwNyIsInZlcnNpb24iOjF9.hGXZhp9pe-HDJilXVvMCkqz-92YZvH6Qr7q9Z7fJkm8N9s0b4sl-4PwjQYJEOLEAhoRO2s-F5T3bmCYCaMiNBQ - type: gen_len value: 29.9951 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmY1NzZiMDAzNGJlNTg4Nzc0YzU1MTA3YTI3MzVmNGZkNWQ0ZDE4MGZlNGI1MzJmYzA3MjQ0MDZhMTcyYTk2NCIsInZlcnNpb24iOjF9.8dvMfY7Y-nw-K8NGgTXIGFMxaSUWQYBE1w3N5YYOn4iwnCe2ugo2qPIOxLY91q7CaAOMCSskFV3BDStQ4p0ZCg --- Model obtained by Fine Tuning 'facebook/bart-large-xsum' using AMI Meeting Corpus, SAMSUM Dataset, DIALOGSUM Dataset, XSUM Dataset! ## Usage # Example 1 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. ''' summarizer(text) ``` # Example 2 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). 
It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day. After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands. The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006. Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020. ''' summarizer(text) ``` # Example 3 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. 
Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. 
But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. 
So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. 
Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. ''' summarizer(text) ``` # Example 4 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY") text = ''' Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents. David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. 
And so as you go down market with a lower price point, usually, you can’t actually support it. Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS? Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly. David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.” Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business. Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force? Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target. Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service? Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. 
But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS. David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person. Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market? Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case. Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams? David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential. Kristina : I think the smartest founders are making sure they have the runway to continue to exist. 
And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base. Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking? Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well. David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model. Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.” David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means. Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity. David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data. Kristina : Recently, there’s been a huge trend of payment as a revenue. 
It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites. Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market? Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points. David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis. Das : When you’re working with companies, why are they underpricing their products? David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher. Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible. 
Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging? David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.” Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well. Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make? David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition. Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. 
While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly. David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal. Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in? Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months. Das : How should you think about doing a free trial versus a paid trial? David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement? Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access. David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. 
Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience. Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on… David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion. Das : Awesome. Thank you, Kristina. Thank you, David. Kristina : Thanks so much for having us. This was fun. David : Yeah, a lot of fun, great topics, and our favorite thing to talk about. ''' summarizer(text) ```
{}
task
[ "SUMMARIZATION" ]
45,585
SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T10:08:58+00:00
130
0
--- tags: - summarization widget: - text: 'public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ ''oldName'' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ ''oldName'' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }' --- # CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score). Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
null
Non_BioNLP
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score). Test results: | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
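Note on the usage snippet above: `AutoModelWithLMHead` has been deprecated in recent transformers releases. A minimal alternative sketch using the high-level `pipeline` API with the same checkpoint (assuming a reasonably recent transformers version; the tokenized PHP input is the one already shown in the card) might look like: ```python
from transformers import pipeline

# Load the same CodeTrans checkpoint through the summarization pipeline
# (this resolves to the underlying T5 seq2seq model without the deprecated class).
summarizer = pipeline(
    "summarization",
    model="SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune",
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
print(summarizer([tokenized_code]))
```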
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
task
[ "SUMMARIZATION" ]
45,586
pritamdeka/BioBert-PubMed200kRCT
pritamdeka
text-classification
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dmis-lab/biobert-base-cased-v1.1", "base_model:finetune:dmis-lab/biobert-base-cased-v1.1", "license:cc-by-nc-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-15T12:38:06Z
2024-09-06T11:54:45+00:00
1,964
7
--- base_model: dmis-lab/biobert-base-cased-v1.1 license: cc-by-nc-3.0 metrics: - accuracy tags: - generated_from_trainer widget: - text: SAMPLE 32,441 archived appendix samples fixed in formalin and embedded in paraffin and tested for the presence of abnormal prion protein (PrP). model-index: - name: BioBert-PubMed200kRCT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioBert-PubMed200kRCT This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset. It achieves the following results on the evaluation set: - Loss: 0.2832 - Accuracy: 0.8934 ## Model description More information needed ## Intended uses & limitations The model can be used for text classification tasks of Randomized Controlled Trials that does not have any structure. The text can be classified as one of the following: * BACKGROUND * CONCLUSIONS * METHODS * OBJECTIVE * RESULTS The model can be directly used like this: ```python from transformers import TextClassificationPipeline from transformers import AutoTokenizer, AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/BioBert-PubMed200kRCT") tokenizer = AutoTokenizer.from_pretrained("pritamdeka/BioBert-PubMed200kRCT") pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True) pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.") ``` Results will be shown as follows: ```python [[{'label': 'BACKGROUND', 'score': 0.0027583304326981306}, {'label': 'CONCLUSIONS', 'score': 0.044541116803884506}, {'label': 'METHODS', 'score': 0.19493348896503448}, {'label': 'OBJECTIVE', 'score': 0.003996663726866245}, {'label': 'RESULTS', 'score': 0.7537703514099121}]] ``` More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3587 | 0.14 | 5000 | 0.3137 | 0.8834 | | 0.3318 | 0.29 | 10000 | 0.3100 | 0.8831 | | 0.3286 | 0.43 | 15000 | 0.3033 | 0.8864 | | 0.3236 | 0.58 | 20000 | 0.3037 | 0.8862 | | 0.3182 | 0.72 | 25000 | 0.2939 | 0.8876 | | 0.3129 | 0.87 | 30000 | 0.2910 | 0.8885 | | 0.3078 | 1.01 | 35000 | 0.2914 | 0.8887 | | 0.2791 | 1.16 | 40000 | 0.2975 | 0.8874 | | 0.2723 | 1.3 | 45000 | 0.2913 | 0.8906 | | 0.2724 | 1.45 | 50000 | 0.2879 | 0.8904 | | 0.27 | 1.59 | 55000 | 0.2874 | 0.8911 | | 0.2681 | 1.74 | 60000 | 0.2848 | 0.8928 | | 0.2672 | 1.88 | 65000 | 0.2832 | 0.8934 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6 ## Citing & Authors <!--- Describe where people can find more information --> If you use the model kindly cite the following work ``` @inproceedings{deka2022evidence, title={Evidence Extraction to Validate Medical Claims in Fake News Detection}, 
author={Deka, Pritam and Jurek-Loughrey, Anna and others}, booktitle={International Conference on Health Information Science}, pages={3--15}, year={2022}, organization={Springer} } ```
null
BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioBert-PubMed200kRCT This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset. It achieves the following results on the evaluation set: - Loss: 0.2832 - Accuracy: 0.8934 ## Model description More information needed ## Intended uses & limitations The model can be used for text classification tasks of Randomized Controlled Trials that does not have any structure. The text can be classified as one of the following: * BACKGROUND * CONCLUSIONS * METHODS * OBJECTIVE * RESULTS The model can be directly used like this: ```python from transformers import TextClassificationPipeline from transformers import AutoTokenizer, AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/BioBert-PubMed200kRCT") tokenizer = AutoTokenizer.from_pretrained("pritamdeka/BioBert-PubMed200kRCT") pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True) pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.") ``` Results will be shown as follows: ```python [[{'label': 'BACKGROUND', 'score': 0.0027583304326981306}, {'label': 'CONCLUSIONS', 'score': 0.044541116803884506}, {'label': 'METHODS', 'score': 0.19493348896503448}, {'label': 'OBJECTIVE', 'score': 0.003996663726866245}, {'label': 'RESULTS', 'score': 0.7537703514099121}]] ``` More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3587 | 0.14 | 5000 | 0.3137 | 0.8834 | | 0.3318 | 0.29 | 10000 | 0.3100 | 0.8831 | | 0.3286 | 0.43 | 15000 | 0.3033 | 0.8864 | | 0.3236 | 0.58 | 20000 | 0.3037 | 0.8862 | | 0.3182 | 0.72 | 25000 | 0.2939 | 0.8876 | | 0.3129 | 0.87 | 30000 | 0.2910 | 0.8885 | | 0.3078 | 1.01 | 35000 | 0.2914 | 0.8887 | | 0.2791 | 1.16 | 40000 | 0.2975 | 0.8874 | | 0.2723 | 1.3 | 45000 | 0.2913 | 0.8906 | | 0.2724 | 1.45 | 50000 | 0.2879 | 0.8904 | | 0.27 | 1.59 | 55000 | 0.2874 | 0.8911 | | 0.2681 | 1.74 | 60000 | 0.2848 | 0.8928 | | 0.2672 | 1.88 | 65000 | 0.2832 | 0.8934 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6 ## Citing & Authors <!--- Describe where people can find more information --> If you use the model kindly cite the following work ``` @inproceedings{deka2022evidence, title={Evidence Extraction to Validate Medical Claims in Fake News Detection}, author={Deka, Pritam and Jurek-Loughrey, Anna and others}, booktitle={International Conference on Health Information Science}, pages={3--15}, year={2022}, organization={Springer} } ```
{"base_model": "dmis-lab/biobert-base-cased-v1.1", "license": "cc-by-nc-3.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "widget": [{"text": "SAMPLE 32,441 archived appendix samples fixed in formalin and embedded in paraffin and tested for the presence of abnormal prion protein (PrP)."}], "model-index": [{"name": "BioBert-PubMed200kRCT", "results": []}]}
task
[ "TEXT_CLASSIFICATION" ]
45,587
gokulsrinivasagan/distilbert_base_lda_train_book_sst2
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/distilbert_base_lda_train_book", "base_model:finetune:gokulsrinivasagan/distilbert_base_lda_train_book", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-09T13:23:40Z
2025-01-27T21:35:26+00:00
8
0
--- base_model: gokulsrinivasagan/distilbert_base_lda_train_book datasets: - glue language: - en library_name: transformers license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert_base_lda_train_book_sst2 results: - task: type: text-classification name: Text Classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - type: accuracy value: 0.8967889908256881 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_lda_train_book_sst2 This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_lda_train_book](https://huggingface.co/gokulsrinivasagan/distilbert_base_lda_train_book) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3419 - Accuracy: 0.8968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2901 | 1.0 | 264 | 0.3491 | 0.8681 | | 0.1465 | 2.0 | 528 | 0.3419 | 0.8968 | | 0.0981 | 3.0 | 792 | 0.3544 | 0.8899 | | 0.0717 | 4.0 | 1056 | 0.3948 | 0.8865 | | 0.0552 | 5.0 | 1320 | 0.4289 | 0.8876 | | 0.0429 | 6.0 | 1584 | 0.3817 | 0.9014 | | 0.0341 | 7.0 | 1848 | 0.4867 | 0.9025 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_lda_train_book_sst2 This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_lda_train_book](https://huggingface.co/gokulsrinivasagan/distilbert_base_lda_train_book) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3419 - Accuracy: 0.8968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2901 | 1.0 | 264 | 0.3491 | 0.8681 | | 0.1465 | 2.0 | 528 | 0.3419 | 0.8968 | | 0.0981 | 3.0 | 792 | 0.3544 | 0.8899 | | 0.0717 | 4.0 | 1056 | 0.3948 | 0.8865 | | 0.0552 | 5.0 | 1320 | 0.4289 | 0.8876 | | 0.0429 | 6.0 | 1584 | 0.3817 | 0.9014 | | 0.0341 | 7.0 | 1848 | 0.4867 | 0.9025 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
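The card above lists no inference example; a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline and noting that the emitted label names (e.g. LABEL_0/LABEL_1) come from the saved config rather than anything stated in the card, could look like: ```python
from transformers import pipeline

# Hypothetical usage sketch for the SST-2 fine-tuned classifier described above.
classifier = pipeline(
    "text-classification",
    model="gokulsrinivasagan/distilbert_base_lda_train_book_sst2",
)

print(classifier("a gripping, beautifully acted film"))
# Returns a list of {'label': ..., 'score': ...} dicts; label naming is config-dependent.
```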
{"base_model": "gokulsrinivasagan/distilbert_base_lda_train_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_base_lda_train_book_sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.8967889908256881, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,588
Helsinki-NLP/opus-mt-tc-bible-big-bat-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "en", "lt", "lv", "prg", "sgs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-07T17:25:21Z
2024-10-07T17:25:36+00:00
21
0
--- language: - en - lt - lv - prg - sgs library_name: transformers license: apache-2.0 tags: - translation - opus-mt-tc-bible model-index: - name: opus-mt-tc-bible-big-bat-en results: - task: type: translation name: Translation multi-eng dataset: name: tatoeba-test-v2020-07-28-v2023-09-26 type: tatoeba_mt args: multi-eng metrics: - type: bleu value: 59.7 name: BLEU - type: chrf value: 0.73795 name: chr-F --- # opus-mt-tc-bible-big-bat-en ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Baltic languages (bat) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-08-17 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): lav lit prg sgs - Target Language(s): eng - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Dienai ir divdesmit četras stundas.", "Jukiko mėgsta bulves." 
] model_name = "pytorch-models/opus-mt-tc-bible-big-bat-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # The day is twenty-four hours. # Yukiko likes potatoes. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-bat-en") print(pipe("Dienai ir divdesmit četras stundas.")) # expected output: The day is twenty-four hours. ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.73795 | 59.7 | 4216 | 29456 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 20:25:09 EEST 2024 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-bible-big-bat-en ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from Baltic languages (bat) to English (en). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-08-17 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): lav lit prg sgs - Target Language(s): eng - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Dienai ir divdesmit četras stundas.", "Jukiko mėgsta bulves." ] model_name = "pytorch-models/opus-mt-tc-bible-big-bat-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # The day is twenty-four hours. # Yukiko likes potatoes. 
``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-bat-en") print(pipe("Dienai ir divdesmit četras stundas.")) # expected output: The day is twenty-four hours. ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/bat-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.73795 | 59.7 | 4216 | 29456 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: a0ea3b3 * port time: Mon Oct 7 20:25:09 EEST 2024 * port machine: LM0-400-22516.local
{"language": ["en", "lt", "lv", "prg", "sgs"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-bat-en", "results": [{"task": {"type": "translation", "name": "Translation multi-eng"}, "dataset": {"name": "tatoeba-test-v2020-07-28-v2023-09-26", "type": "tatoeba_mt", "args": "multi-eng"}, "metrics": [{"type": "bleu", "value": 59.7, "name": "BLEU"}, {"type": "chrf", "value": 0.73795, "name": "chr-F"}]}]}]}
task
[ "TRANSLATION" ]
45,589
4yo1/llama3-pre1-pre2-inst3-ds-lora3_1
4yo1
translation
[ "transformers", "safetensors", "llama", "text-generation", "translation", "en", "ko", "dataset:pre", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-08-06T04:58:32Z
2024-08-06T05:33:06+00:00
4,128
0
---
datasets:
- pre
language:
- en
- ko
library_name: transformers
license: mit
pipeline_tag: translation
---
### Model Card for Model ID

### Model Details

Model card: sapie with fine-tuning.

Model overview:
- Model name: 4yo1/llama3-pre1-pre2-inst3-ds-lora3
- Model type: Transformer-based language model
- Model size: 8 billion parameters
- By: 4yo1
- Languages: English and Korean

### How to use (sample code)

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
```

Additional metadata:
- datasets: 140kgpt
- license: mit
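The sample above loads only the bare backbone via `AutoModel`, which has no generation head. Since the repository is tagged for llama-style text generation and translation, the following is a minimal generation sketch; loading the checkpoint with a causal-LM head and the English-to-Korean prompt wording are assumptions rather than documented behaviour.

```python
# Hypothetical generation sketch (not from the original card): assumes the checkpoint
# works with a causal-LM head, as suggested by its llama/text-generation tags.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4yo1/llama3-pre1-pre2-inst3-ds-lora3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative prompt only; the author does not document a prompt format.
prompt = "Translate the following English sentence into Korean:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```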
null
Non_BioNLP
### Model Card for Model ID

### Model Details

Model card: sapie with fine-tuning.

Model overview:
- Model name: 4yo1/llama3-pre1-pre2-inst3-ds-lora3
- Model type: Transformer-based language model
- Model size: 8 billion parameters
- By: 4yo1
- Languages: English and Korean

### How to use (sample code)

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-inst3-ds-lora3")
```

Additional metadata:
- datasets: 140kgpt
- license: mit
{"datasets": ["pre"], "language": ["en", "ko"], "library_name": "transformers", "license": "mit", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
45,590
Realgon/N_distilbert_agnews_padding70model
Realgon
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:ag_news", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-13T15:13:54Z
2023-12-13T16:48:19+00:00
19
0
--- base_model: distilbert-base-uncased datasets: - ag_news license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: N_distilbert_agnews_padding70model results: - task: type: text-classification name: Text Classification dataset: name: ag_news type: ag_news config: default split: test args: default metrics: - type: accuracy value: 0.945 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_distilbert_agnews_padding70model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.6396 - Accuracy: 0.945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.1861 | 1.0 | 7500 | 0.1881 | 0.9424 | | 0.1359 | 2.0 | 15000 | 0.1967 | 0.9451 | | 0.1194 | 3.0 | 22500 | 0.2221 | 0.9429 | | 0.0881 | 4.0 | 30000 | 0.2671 | 0.9451 | | 0.0577 | 5.0 | 37500 | 0.3216 | 0.9367 | | 0.0503 | 6.0 | 45000 | 0.3547 | 0.9420 | | 0.0402 | 7.0 | 52500 | 0.4013 | 0.9433 | | 0.0279 | 8.0 | 60000 | 0.4334 | 0.9438 | | 0.0169 | 9.0 | 67500 | 0.5243 | 0.9371 | | 0.0104 | 10.0 | 75000 | 0.4979 | 0.9425 | | 0.0149 | 11.0 | 82500 | 0.5179 | 0.9436 | | 0.0129 | 12.0 | 90000 | 0.5020 | 0.9413 | | 0.0054 | 13.0 | 97500 | 0.5674 | 0.9420 | | 0.01 | 14.0 | 105000 | 0.5720 | 0.9433 | | 0.0087 | 15.0 | 112500 | 0.5746 | 0.9409 | | 0.0022 | 16.0 | 120000 | 0.5759 | 0.9436 | | 0.0012 | 17.0 | 127500 | 0.5933 | 0.9436 | | 0.0005 | 18.0 | 135000 | 0.6308 | 0.9429 | | 0.0021 | 19.0 | 142500 | 0.6259 | 0.9445 | | 0.0017 | 20.0 | 150000 | 0.6396 | 0.945 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
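The card above does not show how to run inference; below is a minimal sketch using the `transformers` text-classification pipeline with the published checkpoint. The label names returned depend on the `id2label` mapping stored in the checkpoint config and may be generic `LABEL_0`-`LABEL_3` identifiers for the four AG News classes (World, Sports, Business, Sci/Tech).

```python
from transformers import pipeline

# Load the fine-tuned AG News classifier described in this card.
classifier = pipeline(
    "text-classification",
    model="Realgon/N_distilbert_agnews_padding70model",
)

headline = "NASA announces a new mission to study the outer planets."
print(classifier(headline))
# Illustrative output format (actual label and score will vary):
# [{'label': 'LABEL_3', 'score': 0.97}]
```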
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # N_distilbert_agnews_padding70model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.6396 - Accuracy: 0.945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.1861 | 1.0 | 7500 | 0.1881 | 0.9424 | | 0.1359 | 2.0 | 15000 | 0.1967 | 0.9451 | | 0.1194 | 3.0 | 22500 | 0.2221 | 0.9429 | | 0.0881 | 4.0 | 30000 | 0.2671 | 0.9451 | | 0.0577 | 5.0 | 37500 | 0.3216 | 0.9367 | | 0.0503 | 6.0 | 45000 | 0.3547 | 0.9420 | | 0.0402 | 7.0 | 52500 | 0.4013 | 0.9433 | | 0.0279 | 8.0 | 60000 | 0.4334 | 0.9438 | | 0.0169 | 9.0 | 67500 | 0.5243 | 0.9371 | | 0.0104 | 10.0 | 75000 | 0.4979 | 0.9425 | | 0.0149 | 11.0 | 82500 | 0.5179 | 0.9436 | | 0.0129 | 12.0 | 90000 | 0.5020 | 0.9413 | | 0.0054 | 13.0 | 97500 | 0.5674 | 0.9420 | | 0.01 | 14.0 | 105000 | 0.5720 | 0.9433 | | 0.0087 | 15.0 | 112500 | 0.5746 | 0.9409 | | 0.0022 | 16.0 | 120000 | 0.5759 | 0.9436 | | 0.0012 | 17.0 | 127500 | 0.5933 | 0.9436 | | 0.0005 | 18.0 | 135000 | 0.6308 | 0.9429 | | 0.0021 | 19.0 | 142500 | 0.6259 | 0.9445 | | 0.0017 | 20.0 | 150000 | 0.6396 | 0.945 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
{"base_model": "distilbert-base-uncased", "datasets": ["ag_news"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "N_distilbert_agnews_padding70model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.945, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,591
TheBloke/lince-zero-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "es", "dataset:tatsu-lab/alpaca", "dataset:databricks/databricks-dolly-15k", "arxiv:1910.09700", "base_model:clibrain/lince-zero", "base_model:quantized:clibrain/lince-zero", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
2023-10-01T12:05:50Z
2023-10-01T15:40:47+00:00
36
1
--- base_model: clibrain/lince-zero datasets: - tatsu-lab/alpaca - databricks/databricks-dolly-15k language: - es library_name: transformers license: apache-2.0 pipeline_tag: text-generation inference: false model_creator: CliBrAIn model_type: falcon prompt_template: "A continuación hay una instrucción que describe una tarea, junto\ \ con una entrada que proporciona más contexto. Escriba una respuesta que complete\ \ adecuadamente la solicitud.\n\n### Instrucción: {prompt}\n\n### Entrada:\n\n###\ \ Contexto: \n\n### Respuesta:\n" quantized_by: TheBloke thumbnail: https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg model-index: - name: lince-zero results: [] --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lince Zero - GPTQ - Model creator: [CliBrAIn](https://huggingface.co/clibrain) - Original model: [Lince Zero](https://huggingface.co/clibrain/lince-zero) <!-- description start --> ## Description This repo contains GPTQ model files for [CliBrAIn's Lince Zero](https://huggingface.co/clibrain/lince-zero). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lince-zero-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lince-zero-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lince-zero-GGUF) * [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/lince-zero) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Lince ``` A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. 
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.04 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.43 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.38 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/lince-zero-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `lince-zero-GPTQ`: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir lince-zero-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/lince-zero-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/lince-zero-GPTQ`. 
- To download from a specific branch, enter for example `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `lince-zero-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/lince-zero-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
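As a sketch of that route, this repo can in principle be served with a local TGI container and queried from Python via `huggingface_hub`; the container image tag and the `--quantize gptq` / `--trust-remote-code` flags below are assumptions that should be checked against your TGI version.

```python
# Hypothetical TGI client sketch. Assumes a server was started locally, for example with:
#   docker run --gpus all -p 8080:80 ghcr.io/huggingface/text-generation-inference:latest \
#       --model-id TheBloke/lince-zero-GPTQ --quantize gptq --trust-remote-code
# (flags and image tag are assumptions; consult the TGI docs for your version)
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")

# Same Lince prompt template as documented above.
prompt_template = """A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.

### Instrucción: Dame una lista de lugares a visitar en España.

### Entrada:

### Contexto: 

### Respuesta:
"""

print(client.text_generation(prompt_template, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95))
```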
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: CliBrAIn's Lince Zero # Model Card for LINCE-ZERO **LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥 Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples. 
The model is released under the Apache 2.0 license. Versions: - Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)! - If you want to test the robust 40B parameters version called **LINCE**, you can request access at [[email protected]](mailto:[email protected]). Be one of the first to discover the possibilities of LINCE! <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo""> </div> <br /> # Table of Contents - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Evaluation](#evaluation) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Citation](#citation) - [Contact](#contact) # 🐯 Model Details ## Model Description LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish. - **Developed by:** [Clibrain](https://www.clibrain.com/) - **Model type:** Language model, instruction model, causal decoder-only - **Language(s) (NLP):** es - **License:** apache-2.0 - **Parent Model:** https://huggingface.co/tiiuae/falcon-7b ## Model Sources - **Paper**: Coming soon! ✨ - **Demo**: Coming soon! ✨ # 💡 Uses ## Direct Use LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation. <!-- Please note that running inference with LINCE-ZERO efficiently requires a minimum of XGB of memory. --> ## Downstream Use LINCE-ZERO is an instruct model, it’s primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance. ## Out-of-Scope Use LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies. # ⚠️ Bias, Risks, and Limitations LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. 
It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups. Since the model has been fine-tuned on translated versions of the Alpaca and Dolly datasets, it has potentially inherited certain limitations and biases: - Alpaca: The Alpaca dataset is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases inherent in that model. As the authors report, hallucination seems to be a common failure mode for Alpaca, even compared to `text-davinci-003`. - Dolly: The Dolly dataset incorporates information from Wikipedia, which is a crowdsourced corpus. Therefore, the dataset's contents may reflect the biases, factual errors, and topical focus present in Wikipedia. Additionally, annotators involved in the dataset creation may not be native English speakers, and their demographics and subject matter may reflect the makeup of Databricks employees. We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/). For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest). The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027. When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878. Additionally, we conducted a manual assessment of stereotypes and found that: - A todas las mujeres les gusta *sentirse seguras y respetadas*. - A todos los hombres les gusta *ser admirados*. - A todos los musulmanes les gusta *la paz y la armonía*. - A todos los cristianos les gusta *la paz y la alegría*. - En España a todo el mundo le gusta *la comida, la cultura y el clima*. - En Colombia a todo el mundo le gusta *la comida, la cultura y la belleza natural*. - En México, a todo el mundo le gusta *la comida, la cultura y el clima*. - En Argentina, a todo el mundo le gusta *la comida, la cultura y la hospitalidad*. ## Recommendations Please, when utilizing LINCE-ZERO, exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information. If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards. Please report any issue with the model to [[email protected]](mailto:[email protected]). # 📚 Training Details ## Training Data LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated with the best quality into Spanish. Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English. It was generated by OpenAI's `text-davinci-003` engine using the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) with some modifications. 
For further details, refer to [Alpaca's Data Card](https://huggingface.co/datasets/tatsu-lab/alpaca). Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were requested to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly’s Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k). After combining both translations, the dataset was augmented to reach a total of 80k examples. # ✅ Evaluation We are evaluating the model and will publish the results soon. ### Results Paper coming soon! # ⚙️ Technical Specifications ## Model Architecture and Objective LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided. The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications: - Positional embeddings: rotary (Su et al., 2021); - Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022); - Decoder-block: parallel attention/MLP with a single-layer norm. ## Compute Infrastructure ### Hardware LINCE-ZERO was trained using a GPU A100 with 40 GB for 8h. ### Software We used the following libraries: - `transformers` - `accelerate` - `peft` - `bitsandbytes` - `einops` # 🌳 Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1 X A100 - 40 GB - **Hours used:** 8 - **Cloud Provider:** Google - **Compute Region:** Europe - **Carbon Emitted:** 250W x 10h = 2.5 kWh x 0.57 kg eq. CO2/kWh = 1.42 kg eq. CO2 # 🔥 How to Get Started with LINCE-ZERO Use the code below to get started with LINCE-ZERO! ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, AutoTokenizer, GenerationConfig model_id = "clibrain/lince-zero" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) def create_instruction(instruction, input_data=None, context=None): sections = { "Instrucción": instruction, "Entrada": input_data, "Contexto": context, } system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. 
Escriba una respuesta que complete adecuadamente la solicitud.\n\n" prompt = system_prompt for title, content in sections.items(): if content is not None: prompt += f"### {title}:\n{content}\n\n" prompt += "### Respuesta:\n" return prompt def generate( instruction, input=None, context=None, max_new_tokens=128, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, **kwargs ): prompt = create_instruction(instruction, input, context) print(prompt.replace("### Respuesta:\n", "")) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to("cuda") attention_mask = inputs["attention_mask"].to("cuda") generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, early_stopping=True ) s = generation_output.sequences[0] output = tokenizer.decode(s) return output.split("### Respuesta:")[1].lstrip("\n") instruction = "Dame una lista de lugares a visitar en España." print(generate(instruction)) ``` # 📝 Citation There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite: ```markdown @article{lince-zero, title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español}, author={clibrain.com}, year={2023} } ``` # 📧 Contact [[email protected]](mailto:[email protected])
null
Non_BioNLP
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Lince Zero - GPTQ - Model creator: [CliBrAIn](https://huggingface.co/clibrain) - Original model: [Lince Zero](https://huggingface.co/clibrain/lince-zero) <!-- description start --> ## Description This repo contains GPTQ model files for [CliBrAIn's Lince Zero](https://huggingface.co/clibrain/lince-zero). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/lince-zero-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/lince-zero-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/lince-zero-GGUF) * [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/lince-zero) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Lince ``` A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. 
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.04 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 4.43 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/lince-zero-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish) | 2048 | 7.38 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/lince-zero-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `lince-zero-GPTQ`: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir lince-zero-GPTQ huggingface-cli download TheBloke/lince-zero-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir lince-zero-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/lince-zero-GPTQ --local-dir lince-zero-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/lince-zero-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/lince-zero-GPTQ`. 
- To download from a specific branch, enter for example `TheBloke/lince-zero-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `lince-zero-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/lince-zero-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: {prompt} ### Entrada: ### Contexto: ### Respuesta: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: CliBrAIn's Lince Zero # Model Card for LINCE-ZERO **LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥 Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples. 
The model is released under the Apache 2.0 license. Versions: - Check the version [quantized to 4 bits](https://huggingface.co/clibrain/lince-zero-f16-ggml-q4_0)! - If you want to test the robust 40B parameters version called **LINCE**, you can request access at [[email protected]](mailto:[email protected]). Be one of the first to discover the possibilities of LINCE! <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo""> </div> <br /> # Table of Contents - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Evaluation](#evaluation) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [How to Get Started with the Model](#how-to-get-started-with-the-model) - [Citation](#citation) - [Contact](#contact) # 🐯 Model Details ## Model Description LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an 80k examples augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish. - **Developed by:** [Clibrain](https://www.clibrain.com/) - **Model type:** Language model, instruction model, causal decoder-only - **Language(s) (NLP):** es - **License:** apache-2.0 - **Parent Model:** https://huggingface.co/tiiuae/falcon-7b ## Model Sources - **Paper**: Coming soon! ✨ - **Demo**: Coming soon! ✨ # 💡 Uses ## Direct Use LINCE-ZERO's fine-tuning on an instructions dataset enables it to follow natural language instructions in Spanish. The direct use cases include virtual assistants and content generation. <!-- Please note that running inference with LINCE-ZERO efficiently requires a minimum of XGB of memory. --> ## Downstream Use LINCE-ZERO is an instruct model, it’s primarily intended for direct use and may not be ideal for further fine-tuning. It serves as a general model suitable for a wide range of applications. However, for specific use cases within certain domains, fine-tuning with domain-specific data may improve LINCE-ZERO's performance. ## Out-of-Scope Use LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies. # ⚠️ Bias, Risks, and Limitations LINCE-ZERO has limitations associated with both the underlying language model and the instruction tuning data. 
It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination, toxicity, and perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups. Since the model has been fine-tuned on translated versions of the Alpaca and Dolly datasets, it has potentially inherited certain limitations and biases: - Alpaca: The Alpaca dataset is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases inherent in that model. As the authors report, hallucination seems to be a common failure mode for Alpaca, even compared to `text-davinci-003`. - Dolly: The Dolly dataset incorporates information from Wikipedia, which is a crowdsourced corpus. Therefore, the dataset's contents may reflect the biases, factual errors, and topical focus present in Wikipedia. Additionally, annotators involved in the dataset creation may not be native English speakers, and their demographics and subject matter may reflect the makeup of Databricks employees. We have quantified potentially hurtful sentence completions using the HONEST score, as introduced by [Nozza et al. (2021)](https://aclanthology.org/2021.naacl-main.191/). For assessing LINCE-ZERO, we utilized the Spanish partition of the [MilaNLProc/honest](https://huggingface.co/datasets/MilaNLProc/honest). The results show that, when referring to males, the sentences contain hurtful words less than 0.3% of the time, achieving a score of 0.00027. When addressing females, the frequency of hurtful words is less than 1%, corresponding to a score of 0.00878. Additionally, we conducted a manual assessment of stereotypes and found that: - A todas las mujeres les gusta *sentirse seguras y respetadas*. - A todos los hombres les gusta *ser admirados*. - A todos los musulmanes les gusta *la paz y la armonía*. - A todos los cristianos les gusta *la paz y la alegría*. - En España a todo el mundo le gusta *la comida, la cultura y el clima*. - En Colombia a todo el mundo le gusta *la comida, la cultura y la belleza natural*. - En México, a todo el mundo le gusta *la comida, la cultura y el clima*. - En Argentina, a todo el mundo le gusta *la comida, la cultura y la hospitalidad*. ## Recommendations Please, when utilizing LINCE-ZERO, exercise caution and critically assess the output to mitigate the potential impact of biased or inaccurate information. If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards. Please report any issue with the model to [[email protected]](mailto:[email protected]). # 📚 Training Details ## Training Data LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated with the best quality into Spanish. Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English. It was generated by OpenAI's `text-davinci-003` engine using the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) with some modifications. 
For further details, refer to [Alpaca's Data Card](https://huggingface.co/datasets/tatsu-lab/alpaca).

Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were requested to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly's Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k).

After combining both translations, the dataset was augmented to reach a total of 80k examples.

# ✅ Evaluation

We are evaluating the model and will publish the results soon.

### Results

Paper coming soon!

# ⚙️ Technical Specifications

## Model Architecture and Objective

LINCE-ZERO is a causal decoder-only model trained on a causal language modeling task. Its objective is to predict the next token in a sequence based on the context provided.

The architecture of LINCE-ZERO is based on Falcon-7B, which itself is adapted from the GPT-3 paper (Brown et al., 2020) with the following modifications:

- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with a single-layer norm.

## Compute Infrastructure

### Hardware

LINCE-ZERO was trained on a single A100 GPU with 40 GB of memory for 8 hours.

### Software

We used the following libraries:

- `transformers`
- `accelerate`
- `peft`
- `bitsandbytes`
- `einops`

# 🌳 Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 1 X A100 - 40 GB
- **Hours used:** 8
- **Cloud Provider:** Google
- **Compute Region:** Europe
- **Carbon Emitted:** 250 W x 8 h = 2 kWh x 0.57 kg eq. CO2/kWh ≈ 1.14 kg eq. CO2

# 🔥 How to Get Started with LINCE-ZERO

Use the code below to get started with LINCE-ZERO!

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "clibrain/lince-zero"

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)


def create_instruction(instruction, input_data=None, context=None):
    """Build the Alpaca-style Spanish prompt from the instruction, optional input and context."""
    sections = {
        "Instrucción": instruction,
        "Entrada": input_data,
        "Contexto": context,
    }

    system_prompt = (
        "A continuación hay una instrucción que describe una tarea, junto con una entrada "
        "que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
    )
    prompt = system_prompt

    for title, content in sections.items():
        if content is not None:
            prompt += f"### {title}:\n{content}\n\n"

    prompt += "### Respuesta:\n"

    return prompt


def generate(
    instruction,
    input_data=None,
    context=None,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs,
):
    """Generate a completion and return only the text after the '### Respuesta:' marker."""
    prompt = create_instruction(instruction, input_data, context)
    print(prompt.replace("### Respuesta:\n", ""))

    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")

    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )

    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True,
        )

    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Respuesta:")[1].lstrip("\n")


instruction = "Dame una lista de lugares a visitar en España."
print(generate(instruction))
```

# 📝 Citation

There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite:

```bibtex
@article{lince-zero,
  title={{LINCE-ZERO}: LLM for Instructions from Natural Corpus en Español},
  author={clibrain.com},
  year={2023}
}
```

# 📧 Contact

[[email protected]](mailto:[email protected])
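# 💾 Low-Memory Loading (Unofficial Sketch)

The GGML file linked at the top of this card targets llama.cpp-style runtimes. If you prefer to stay in `transformers` but are short on GPU memory, the snippet below is a minimal, unofficial sketch of on-the-fly 4-bit loading with `bitsandbytes` (one of the libraries listed under Software). The quantization settings and the example prompt are illustrative assumptions rather than part of the official release, and quantization may degrade output quality.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "clibrain/lince-zero"

# Store weights in 4-bit NF4 and compute in float16 (assumed settings, not official).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Reuse the same instruction format as in the example above.
prompt = (
    "A continuación hay una instrucción que describe una tarea, junto con una entrada "
    "que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
    "### Instrucción:\nDame una lista de lugares a visitar en España.\n\n### Respuesta:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Expect a noticeable reduction in GPU memory use at the cost of some generation quality; for the prepared 4-bit artifact, see the GGML link at the top of this card.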
{"base_model": "clibrain/lince-zero", "datasets": ["tatsu-lab/alpaca", "databricks/databricks-dolly-15k"], "language": ["es"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "inference": false, "model_creator": "CliBrAIn", "model_type": "falcon", "prompt_template": "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n### Instrucción: {prompt}\n\n### Entrada:\n\n### Contexto: \n\n### Respuesta:\n", "quantized_by": "TheBloke", "thumbnail": "https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg", "model-index": [{"name": "lince-zero", "results": []}]}
task
[ "TRANSLATION" ]
45,592
meandyou200175/vn_bi_encoder_med
meandyou200175
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:43804", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:bkai-foundation-models/vietnamese-bi-encoder", "base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-17T18:12:57Z
2024-10-19T18:09:24+00:00
7
0
--- base_model: bkai-foundation-models/vietnamese-bi-encoder library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:43804 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Nhờ bác sĩ cho biết việc lựa chọn đóng đinh nội tủy và nẹp vít để kết hợp xương đòn dựa trên cơ sở nào ạ? Ca phẫu thuật thường kéo dài trong bao lâu? Bệnh nhân nằm viện mấy ngày? sentences: - ' Chào em, là bệnh mãn tính phải điều trị suốt đời, phải kiên nhẫn và kiên trì nên đôi khi lượng đường trong cơ thể không ổn định. Lúc đi khám xét nghiệm thì ổn do bản thân biết mai đi khám nên sẽ kiêng ăn, ăn ít... còn bệnh lâu dài nên trong ngày đôi khi thèm chút này hay thích ăn chút kia, quên uống thuốc, suy nghĩ, mất ngủ cũng làm đường không ổn định. Đường trong cơ thể lúc lên lúc xuống dễ đưa đến biến chứng. Em hay thấy bệnh nhân tiểu đường tháo khớp ngón chân, ngón tay, đôi khi tháo khớp gối, khớp háng, đây là do tê liệt hệ thần kinh nên khi va chạm bệnh nhân không phát hiện. Đến khi phát hiện thì đã nhiễm trùng nặng phải tháo khớp. Theo BS mẹ em có khả năng do biến chứng tiểu đường vì mẹ em bị bệnh khá lâu nên ít nhiều ảnh hưởng thần kinh bị tê liệt gây đau. Em nên nhớ dặn mẹ đi tái khám và điều trị cho thật ổn định nhé! Thân mến!' - ' Để lựa chọn phương pháp đóng đinh nội tủy hay nẹp vít cho bệnh nhân cần dựa vào nhiều yếu tố. Trong lòng tủy xương có một cái ống, nếu lòng tủy bệnh nhân nhỏ mà đường gãy không bị gãy thành nhiều mảnh thì nên lựa chọn phương pháp đóng đinh. Phương pháp này có nhược điểm dễ bị lộ phần đinh khi đinh vừa đóng, chưa chắc vào xương. Tuy nhiên, ưu điểm là khi đóng đinh, đường mổ sẽ nhỏ, đơn giản. Đối với nẹp vít, đường mổ dài hơn nhưng phần nắn chỉnh sẽ tuyệt đối, vững chắc hơn. Nhìn chung, giữa 2 phương pháp thời gian mổ không khác biệt nhau nhiều, từ 30-45 phút sẽ hoàn thành cuộc phẫu thuật kết hợp xương. Tại bệnh viện Nhân dân 115, sau khi bệnh nhân được làm phẫu thuật có thể xuất viện rất sớm trong vòng khoảng 3-5 ngày, tùy theo đường mổ lớn hay nhỏ. Giữa việc lựa chọn phẫu thuật hay bảo tồn, đinh nội tủy hay nẹp vít phụ thuộc vào lòng tủy của bệnh nhân và thói quen, sự đánh giá của phẫu thuật viên. Cá nhân tôi thường lựa chọn phương pháp phẫu thuật nẹp vít sẽ cho kết quả nắn chỉnh tốt, chắc hơn và bệnh nhân không bị biến chứng trồi đinh về sau. Thân mến.' - Chào em, Tình trạng người mệt mỏi, khó thở, tim đập nhanh xảy ra khi không gắng sức có thể do nhiều nguyên nhân, gồm tim mạch, hô hấp, thần kinh cơ, tiêu hóa (chủ yếu là ống tiêu hóa trên), tâm lý, bệnh lý nội tiết tố… Viêm dạ dày trào ngược có thể gây các triệu chứng này do dịch acid trào ngược từ dạ dày lên thực quản kích thích thần kinh tim. Mặt khác bệnh dạ dày là bệnh có thể tái phát, điều trị hết bệnh rồi thì bệnh vẫn có thể tái lại. Do đó, nếu em đã khám tim mạch và hô hấp bình thường, để biết có phải mình mệt mỏi do bệnh dạ dày gây ra hay không thì tốt nhất là em khám chuyên khoa nội tiêu hóa và điều trị trào ngược dạ dày thực quản thử, nếu triệu chứng cải thiện nhanh chóng thì chính hắn là nguyên nhân, em nhé. - source_sentence: Tôi bị tình trạng nuốt nước miếng có cảm giác bị vướng ở cổ, không đau rát, không ho sốt, ăn uống bình thường đã 1 ngày nay. Chỉ có nuốt nước miếng là có cảm giác vướng thôi, lỗ tai bên trái thì cảm giác ngứa nhẹ. Xin hỏi là bệnh gì vậy ạ? 
sentences: - "Em Lan thân mến, Hiện nay, xét nghiệm được xem là một xét nghiệm\r\nthường quy,\ \ nên thai kỳ của em cũng rất cần được làm những xét nghiệm này mặc\r\ndù gia\ \ đình em không có bệnh lý bất thường. Tuy nhiên, thai kỳ của em đã qua thời gian\ \ làm xét nghiệm Double test, bây\r\ngiờ em phải chờ đến lúc thai được 16 – 18\ \ tuần tuổi, làm xét nghiệm Triple test\r\nem nhé! Chúc em và bé khỏe mạnh!" - 'Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao Chào em, Thoái hóa khớp, thoái hóa cột sống là tiến trình lão hóa không thể tránh khỏi của con người, đặc biệt có thể xảy ra sớm và nhanh hơn ở người nữ sau mãn kinh, sinh nở nhiều, suy dinh dưỡng hay ăn uống thiếu chất khoáng, lao động vất vả lúc còn trẻ. Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao, đặc biệt là đau lan xuống hai chân, tê yếu hai chân thì cần chụp MRI cột sống để tầm soát thoát vị đĩa đệm chèn ép tủy sống. Trường hợp của em, mới phát hiện thoái hóa cột sống thắt lưng gần đây, cũng mới uống thuốc 1 tuần và không duy trì nữa, việc đau lưng vẫn còn âm ỉ nhưng không lan xuống hai chân thì chưa đến mức cần chụp MRI cột sống thắt lưng. Nhưng mà, em cần tích cực điều trị để bệnh thoái hóa cột sống thắt lưng không tiến triển nặng hơn. Bệnh này trị khỏi hoàn toàn là không thể, vì sinh lão bệnh tử không thể cải hoàn, nhưng mà việc điều trị tích cực sẽ giúp khống chế được bệnh, giảm đau và giảm tốc độ tiến triển của bệnh. Về việc sử dụng thuốc, dù là thuốc Tây hay thuốc Đông y, em cũng cần phải thăm khám bs ck cơ xương khớp (Tây y) hay ck y học cổ truyền (Đông y) để được kê thuốc phù hợp. các thuốc thường dùng là giảm đau, giãn cơ, bổ sung vi khoáng chất (canxi, vitamin D3, magie...). Bên cạnh đó, về phương pháp giảm đau hỗ trợ không dùng thuốc, em nên chú ý: - Chú ý thay đổi tư thế trong quá trình làm việc, không giữ mãi một tư thế trong nhiều giờ liền. Ngồi làm việc đúng tư thế để tránh các bệnh cột sống. - Vận động đúng cách, khi vác vật nặng không vặn cột sống. - Thường xuyên tập thể dục rèn luyện để cột sống vững chắc, cơ thể dẻo dai, bơi cũng được mà yoga là tốt nhất. - Ăn uống khoa học, xây dựng chế độ dinh dưỡng hợp lý, tăng cường nhóm thực phẩm giàu canxi, vitamin D, omega 3… giúp nâng cao độ chắc khỏe của đĩa đệm cũng như xương khớp. - Duy trì cân nặng bình thường, tránh để tăng cân quá mức. - Tư thế ngủ: nằm ngửa trên ván cứng hay nệm bông ép chặt, tránh nệm lò xo hay nệm cao su quá mềm, có thể đệm ở vùng khoeo làm co nhẹ khớp gối và khớp háng, nên nằm đầu thấp không gối sẽ tốt cho cột sống cổ. - Có thể thực hiện điều trị vật lý và các liệu pháp phản xạ: bao gồm phương pháp nhiệt như chườm nóng (túi nước, muối rang, cám rang, lá lốt, lá ngải cứu nóng); dùng các dòng điện tại khoa vật lý trị liệu, điều trị bằng laser; châm cứu, kéo cơ để hỗ trợ giảm đau cơ cạnh sống. Trân trọng!' - Chào bạn, Nuốt vướng ở cổ thường gặp trong một số bệnh lý viêm nhiễm hầu họng như viêm họng, viêm amidan mạn, trào ngược dạ dày thực quản, hội chứng chảy mũi sau… Đây là có thể là triệu chứng đầu tiên báo hiệu một đợt bùng phát cấp tính của viêm nhiễm hô hấp trên do triệu chứng mới chỉ xuất hiện 1 ngày. Bạn nên khám bác sĩ Tai mũi họng để thăm khám trực tiếp, đánh giá và kê toa điều trị bạn nhé! Thân mến. - source_sentence: Chào bác sĩ, em bị gãy xương gót, đã đóng đinh đến nay được gần 5 tuần. Vậy 6 tuần em tháo đinh được chưa ạ? 
sentences: - ' Chào em, gồm 2 trị số, trị số lớn nhất gọi là huyết áp tâm thu, bình thường < 140 và > 90 mmHg; trị số thấp nhất gọi là huyết áp tâm trương, bình thường < 90 và > 60 mmHg. Huyết áp có thể tăng khi căng thẳng, do lo lắng, do hội chứng áo choàng trắng (khi vào bv, khi gặp bác sĩ thì huyết áp cao), bệnh lý viêm nhiễm, do cafe, khi khó thở... nhìn chung là các stress đối với cơ thể. Như vậy, huyết áp ghi nhận ở những lúc cơ thể đang lo lắng, bồn chồn, có bệnh thì sẽ không phản ánh chính xác được huyết áp dao động bình thường của người bệnh. Do vậy em nên khám chuyên khoa tim mạch, bác sĩ sẽ thăm khám và làm xét nghiệm kiểm tra xem em có các dấu chứng của tăng huyết áp hay không (như dày thành tim, tiểu đạm, đo huyết áp 24 giờ...) để xác định em có tăng huyết áp hay không và điều trị thích hợp. Những triệu chứng hoa mắt, chóng mặt, đau đầu, đau 1 bên mắt, tiểu nhiều có thể là do bệnh tăng huyết áp gây ra (ảnh hưởng lên mạch máu não, lên thận...) hoặc là 1 bệnh lý khác như thiếu máu, rối loạn tiền đình, viêm nhiễm hệ thống, viêm mũi xoang, bệnh lý mạch máu não... (và tăng huyết áp chỉ là phản ứng của cơ thể khi có stress). Để tìm ra bệnh và giải quyết nỗi lo về bệnh, em nên đến bệnh viện để kiểm tra sức khỏe em nhé. Thân mến! ' - ' Chào em, Thời điểm 6 tuần là quá sớm để rút đinh cố định xương gót (trừ trường hợp khung cố định xương bên ngoài). Tháo đinh vít kim loại chỉ bắt buộc thực hiện sớm trong những trường hợp bất thường như gãy vít, nhiễm trùng, khớp giả... gây ra các triệu chứng bất thường với bệnh nhân mà thôi. Em nên tái khám tại chuyên khoa Chấn thương Chỉnh hình để bác sĩ kiểm tra lại việc lành xương của em tốt chưa và dặn em lịch trình rút đinh phù hợp, em nhé. Thân mến.' - K dạ dày không điều trị tiên lượng sống khá ngắn Chào em, K dạ dày là ung thư dạ dày. Bệnh ung thư dạ dày là bệnh lý ác tính và có chỉ định phẫu thuật cắt khối u – cắt dạ dày khi còn có thể cắt được. Nếu đã phát hiện ung thư dạ dày mà không điều trị phẫu thuật thì thời gian sống của bệnh nhân trung bình là 6 tháng đến 1 năm tùy loại ung thư dạ dày, khi ung thư tiến triển di căn có thể gây nhiều đau đớn hơn. Hiện tại chị em đang bị suy nhược cơ thể nhiều, không ăn uống được, đau nhiều do ung thư dạ dày là có chỉ định vào bệnh viện nằm điều trị luôn rồi, chứ không thể nào lấy thuốc mà không tới phòng khám được đâu. Vô bệnh viện chị em sẽ được truyền dịch, chích thuốc, nâng thể trạng lên rồi mới tính đến chuyện điều trị khối ung thư kia. Em đưa chị em đến bệnh viện càng sớm càng tốt, tốt nhất là bệnh viện Ung bướu, em nhé. - source_sentence: "Thưa bác sĩ,\r\n\r\nEm bị đục thủy tinh thể do chấn thương và\ \ vừa mổ mắt về và em cũng bị cận thị. Thời gian khoảng 1 tuần em thấy mắt mình\ \ nhìn chỉ rõ hơn được 1 phần nào. Nhìn xa thì vẫn thấy nhưng vẫn mờ mờ. Bác sĩ\ \ cho em lời khuyên nên làm cách nào và mắt em có thể sáng lại như bình thường\ \ được không ạ?\r\n\r\nEm xin chân thành cảm ơn! (Minh Tiến - Bình Định)" sentences: - Bạn Minh Tiến thân mến, Hiện nay phẫu thuật đục thủy tinh thể đã được y học nói chung và ngành Nhãn khoa Việt Nam thực hiện hoàn chỉnh đến mức tuyệt vời. Phẫu thuật này được xem như một cuộc cách mạng rất đáng tự hào của ngành nhãn khoa. Hàng ngày có thể tới hàng ngàn ca phẫu thuật đem lại ánh sáng cho người mù lòa đục thể thủy tinh tại Việt Nam. Nói như vậy để giúp cho bạn hiểu rõ phẫu thuật này các bác sĩ Việt Nam thực hiện rất thường xuyên và rất tốt. Tuy nhiên, với mắt đục thủy tinh thể do chấn thương của bạn là ca phẫu thuật tương đối không đơn giản. 
Thêm vào đó ngoài đục thủy tinh thể do chấn thương, mắt bạn cũng có thể kèm theo tổn thương ở các bộ phận khác của mắt mà trước mổ bác sĩ khó có thể chẩn đoán được. Với hai lý do nêu trên, nên đôi khi mắt mổ khó có thể tốt theo ý muốn của cả bệnh nhân lẫn thầy thuốc. Bạn cần có thời gian theo dõi và điều trị tiếp sau mổ. Sau thời gian ổn định khoảng 1 tháng, bạn cần đo thử kính xem có cải thiện thị lực thêm không? Chúc bạn may mắn! - Chào em, Bình thường các hạch trong cơ thể không sưng to lên đến mức có thể sờ chạm hay nhận biết được. Vì thế, hạch sưng lên, hay thường gọi là nổi hạch, là một triệu chứng bất thường của cơ thể. Cho nên, em lo lắng là đúng khi phát hiện hạch ở vùng cổ. Hạch bạch huyết đóng vai trò quan trọng đối với hoạt động của hệ miễn dịch. Chúng chứa các tế bào miễn dịch như lympho bào, đại thực bào... có chức năng miễn dịch chống lại các yếu tố lạ như vi khuẩn, virus, kí sinh trùng... xâm nhập vào cơ thể. Trong quá trình đó các hạch có thể bị viêm và sưng lên. Một số trường hợp hạch sưng có thể là hạch ung thư hoặc di căn. Đặc điểm của hạch viêm là nhỏ, số lượng ít, bờ tròn đều, không phát triển theo thời gian, không xâm lấn da xung quanh. Thông thường đối với hạch viêm thì nguồn viêm có thể tấn công tại hạch, cũng có khi là hạch viêm phản ứng với ổ viêm nhiễm cạnh đó, điều trị hết viêm thì hạch sẽ lặn dần, có thể lặn chậm hơn vài tuần đến vài tháng, có một số loại hạch cũng là hạch viêm nhưng mà chỉ giảm kích thước rồi cứ "lì" vậy luôn - không lặn hẳn nhưng không còn sưng như trước và vẫn giữ hình ảnh của hạch viêm, cũng có loại hạch viêm sau lại chuyển sang xơ chai hóa như sẹo cũ và không lặn. Như vậy, em có 1 hạch vùng cổ đã được xác định là hạch viêm thông qua sinh thiết hạch cách đây 10 năm. Trong vòng 10 năm nay, hạch cổ đó không có triệu chứng bất thường. Gần đây, hạch cổ đó có biểu hiện viêm trở lại, mặc dù em uống thuốc (tự mua) thì hạch hết sưng đau, nhưng em cũng cần khám lại bên chuyên khoa ung bướu để kiểm tra tổng quát lại 1 lần, tìm nguyên nhân gây kích thích hạch viêm này tái hoạt động, xem là nguyên nhân lành tính hay tiềm ẩn nguyên nhân khác (vì lần kiểm tra trước đã cách đây 10 năm rồi), em nhé. - ' Chào em, Trường hợp em mô tả là những bất thường của hệ hô hấp có thể là bệnh lý tai mũi họng hay hô hấp dưới như viêm phổi, viêm phế quản, em cần đến các cơ sở y tế chuyên sâu tai mũi họng hay hô hấp để khám thêm. Những biểu hiện đó hoàn toàn không có cơ sở nghĩ . Thân mến!' - source_sentence: Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn. sentences: - 'Câu hỏi của em rất chân thành. Tự ý thức quyết tâm cai nghiệm là điều đáng quý. Nếu em tiếp tục sử dụng thì tình trạng sẽ tồi tệ hơn rất nhiều. Ba yếu tố quan trọng nhất và tiến hành đồng thời để cai nghiện thành công, đó là: 1. Ý chí 2. Sự hiểu biết thấu đáo 3. Môi trường thân thiện. Các Trung tâm cai nghiện sẽ giúp em phần 2 và phần 3, từ đó sẽ củng cố phần 1 của em. Trường hợp ở nhà mà em tự cai, thực hành mỗi ngày với 3 điều kiện trên, em sẽ thành công như nhiều bạn khác. Không nên nôn nóng, sốt ruột. Trước tiên em phải thuộc lòng và thực hành những quy tắc này thành thói quen và áp dụng suốt đời. Nhiều trường hợp cai được vài năm vẫn tái nghiện. Do đó, nên tránh xa những "nguồn" khiến em tái nghiện, tránh xa bạn bè nghiện ngập em nhé. Chúc em quyết tâm và đem lại niềm vui cho bố mẹ.' 
- Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến. - Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến. --- # SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("meandyou200175/vn_bi_encoder_med") # Run inference sentences = [ 'Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn.', 'Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến.', 'Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: 
None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0365 | 100 | 0.4427 | 0.3117 | | 0.0730 | 200 | 0.3028 | 0.2373 | | 0.1096 | 300 | 0.2468 | 0.2063 | | 0.1461 | 400 | 0.2434 | 0.1857 | | 0.1826 | 500 | 0.2075 | 0.1724 | | 0.2191 | 600 | 0.185 | 0.1612 | | 0.2557 | 700 | 0.183 | 0.1509 | | 0.2922 | 800 | 0.1823 | 0.1445 | | 0.3287 | 900 | 0.1663 | 0.1370 | | 0.3652 | 1000 | 0.1622 | 0.1311 | | 0.4018 | 1100 | 0.1361 | 0.1312 | | 0.4383 | 1200 | 0.1834 | 0.1247 | | 0.4748 | 1300 | 0.1558 | 0.1320 | | 0.5113 | 1400 | 0.1289 | 0.1207 | | 0.5478 | 1500 | 0.1424 | 0.1192 | | 0.5844 | 1600 | 0.1412 | 0.1224 | | 0.6209 | 1700 | 0.1349 | 0.1084 | | 0.6574 | 1800 | 0.1275 | 0.1051 | | 0.6939 | 1900 | 0.1266 | 0.1013 | | 0.7305 | 2000 | 0.1245 | 0.1047 | | 0.7670 | 2100 | 0.114 | 0.0931 | | 0.8035 | 2200 | 0.1164 | 0.0925 | | 0.8400 | 2300 | 0.1136 | 0.0990 | | 0.8766 | 2400 | 0.1012 | 0.0907 | | 0.9131 | 2500 | 0.1273 | 0.0889 | | 0.9496 | 2600 | 0.1374 | 0.0918 | | 0.9861 | 2700 | 0.1135 | 0.0881 | | 1.0226 | 2800 | 0.0925 | 0.0875 | | 1.0592 | 2900 | 0.0854 | 0.0891 | | 1.0957 | 3000 | 0.0953 | 0.0847 | | 1.1322 | 3100 | 0.075 | 0.0798 | | 1.1687 | 3200 | 0.0811 | 0.0776 | | 1.2053 | 3300 | 0.0729 | 0.0768 | | 1.2418 | 3400 | 0.0626 | 0.0784 | | 1.2783 | 3500 | 0.0639 | 0.0784 | | 1.3148 | 3600 | 0.0658 | 0.0801 | | 1.3514 | 3700 | 0.0516 | 0.0741 | | 1.3879 | 3800 | 0.0327 | 0.0760 | | 1.4244 | 3900 | 0.064 | 0.0753 | | 1.4609 | 4000 | 0.0366 | 0.0738 | | 1.4974 | 4100 | 0.035 | 0.0755 | | 1.5340 | 4200 | 0.0327 | 0.0754 | | 1.5705 | 4300 | 0.0301 | 0.0759 | | 1.6070 | 4400 | 0.0304 | 0.0736 | | 1.6435 | 4500 | 0.02 | 0.0773 | | 1.6801 | 4600 | 0.0319 | 0.0720 | | 1.7166 | 4700 | 0.026 | 0.0693 | | 1.7531 | 4800 | 0.0267 | 0.0756 | | 1.7896 | 4900 | 0.0252 | 0.0686 | | 1.8262 | 5000 | 0.0236 | 0.0721 | | 1.8627 | 5100 | 0.0217 | 0.0752 | | 1.8992 | 5200 | 0.0259 | 0.0696 | | 1.9357 | 5300 | 0.0391 | 0.0713 | | 1.9722 | 5400 | 0.0309 | 0.0778 | | 2.0088 | 5500 | 0.0246 | 0.0733 | | 2.0453 | 5600 | 0.0214 | 0.0674 | | 2.0818 | 5700 | 0.0217 | 0.0706 | | 2.1183 | 5800 | 0.0174 | 0.0722 | | 2.1549 | 5900 | 0.02 | 0.0682 | | 2.1914 | 6000 | 0.0174 | 0.0705 | | 2.2279 | 6100 | 0.0191 | 0.0695 | | 2.2644 | 6200 | 0.0165 | 0.0727 | | 2.3009 | 6300 | 0.0174 | 0.0698 | | 2.3375 | 6400 | 0.0188 | 0.0667 | | 2.3740 | 6500 | 0.0095 | 0.0673 | | 2.4105 | 6600 | 0.0138 | 0.0679 | | 2.4470 | 6700 | 0.0126 | 0.0659 | | 2.4836 | 6800 | 0.0093 | 0.0674 | | 2.5201 | 6900 | 0.0065 | 0.0696 | | 2.5566 | 7000 | 0.0089 | 0.0684 | | 2.5931 | 7100 | 0.0068 | 0.0670 | | 2.6297 | 7200 | 0.007 | 0.0689 | | 2.6662 | 7300 | 0.0095 | 0.0676 | | 2.7027 | 7400 | 0.0068 | 0.0667 | | 2.7392 | 7500 | 0.0079 | 0.0666 | | 2.7757 | 7600 | 0.008 | 0.0653 | | 2.8123 | 7700 | 0.0071 | 0.0670 | | 2.8488 | 7800 | 0.007 | 0.0677 | | 2.8853 | 7900 | 0.0087 | 0.0670 | | 2.9218 | 8000 | 0.0104 | 0.0660 | | 2.9584 | 8100 | 0.0086 | 0.0665 | | 2.9949 | 8200 | 0.0078 | 0.0654 | | 3.0314 | 8300 | 0.0071 | 0.0688 | | 3.0679 | 8400 | 0.0054 | 0.0679 | | 3.1045 | 8500 | 0.0059 | 0.0657 | | 3.1410 | 8600 | 0.0049 | 0.0629 | | 3.1775 | 8700 | 0.0043 | 0.0625 | | 3.2140 | 8800 | 0.0057 | 0.0634 | | 3.2505 | 8900 | 0.0062 | 
0.0646 | | 3.2871 | 9000 | 0.0051 | 0.0658 | | 3.3236 | 9100 | 0.0044 | 0.0652 | | 3.3601 | 9200 | 0.0054 | 0.0649 | | 3.3966 | 9300 | 0.0032 | 0.0647 | | 3.4332 | 9400 | 0.0045 | 0.0651 | | 3.4697 | 9500 | 0.0036 | 0.0634 | | 3.5062 | 9600 | 0.0036 | 0.0629 | | 3.5427 | 9700 | 0.0037 | 0.0625 | | 3.5793 | 9800 | 0.0026 | 0.0624 | | 3.6158 | 9900 | 0.0021 | 0.0628 | | 3.6523 | 10000 | 0.0028 | 0.0621 | | 3.6888 | 10100 | 0.0028 | 0.0622 | | 3.7253 | 10200 | 0.0027 | 0.0616 | | 3.7619 | 10300 | 0.0037 | 0.0647 | | 3.7984 | 10400 | 0.0026 | 0.0621 | | 3.8349 | 10500 | 0.0029 | 0.0623 | | 3.8714 | 10600 | 0.0027 | 0.0649 | | 3.9080 | 10700 | 0.0028 | 0.0631 | | 3.9445 | 10800 | 0.0031 | 0.0630 | | 3.9810 | 10900 | 0.0027 | 0.0638 | | 4.0175 | 11000 | 0.0025 | 0.0636 | | 4.0541 | 11100 | 0.0021 | 0.0623 | | 4.0906 | 11200 | 0.0027 | 0.0637 | | 4.1271 | 11300 | 0.0022 | 0.0636 | | 4.1636 | 11400 | 0.0023 | 0.0620 | | 4.2001 | 11500 | 0.0021 | 0.0609 | | 4.2367 | 11600 | 0.0029 | 0.0608 | | 4.2732 | 11700 | 0.0021 | 0.0621 | | 4.3097 | 11800 | 0.0022 | 0.0619 | | 4.3462 | 11900 | 0.0018 | 0.0621 | | 4.3828 | 12000 | 0.0015 | 0.0615 | | 4.4193 | 12100 | 0.0018 | 0.0632 | | 4.4558 | 12200 | 0.002 | 0.0634 | | 4.4923 | 12300 | 0.0018 | 0.0621 | | 4.5289 | 12400 | 0.0014 | 0.0623 | | 4.5654 | 12500 | 0.0016 | 0.0623 | | 4.6019 | 12600 | 0.0013 | 0.0616 | | 4.6384 | 12700 | 0.0013 | 0.0620 | | 4.6749 | 12800 | 0.0016 | 0.0627 | | 4.7115 | 12900 | 0.0014 | 0.0615 | | 4.7480 | 13000 | 0.0016 | 0.0612 | | 4.7845 | 13100 | 0.0015 | 0.0615 | | 4.8210 | 13200 | 0.0012 | 0.0614 | | 4.8576 | 13300 | 0.002 | 0.0615 | | 4.8941 | 13400 | 0.0015 | 0.0617 | | 4.9306 | 13500 | 0.0016 | 0.0613 | | 4.9671 | 13600 | 0.0016 | 0.0614 | </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.2.0 - Transformers: 4.45.1 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
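## Fine-Tuning Sketch (Unofficial)

The exact training script is not included in this card, so the snippet below is only a rough, unofficial sketch of how a comparable run could be set up with Sentence Transformers 3.x, using the documented loss (`MultipleNegativesRankingLoss`) and the non-default hyperparameters listed above (batch size 16, learning rate 2e-5, 5 epochs, warmup ratio 0.1, fp16, `no_duplicates` batch sampler). The column names and the tiny in-memory dataset are placeholder assumptions; substitute the actual 43,804 question-answer pairs before training.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Start from the same base model as this card.
model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")

# Placeholder (anchor, positive) pairs; replace with the real medical Q&A data.
train_dataset = Dataset.from_dict({
    "anchor": [
        "Em bị đau đầu kéo dài, nên khám chuyên khoa nào?",
        "Trẻ sốt cao 39 độ thì nên làm gì trước khi đến bệnh viện?",
    ],
    "positive": [
        "Chào em, đau đầu kéo dài nên khám chuyên khoa nội thần kinh để được chẩn đoán.",
        "Chào bạn, nên lau mát, cho trẻ uống hạ sốt đúng liều và theo dõi sát trước khi đưa đi khám.",
    ],
})

# In-batch negatives: every other positive in the batch acts as a negative for the anchor.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="vn_bi_encoder_med",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
model.save("vn_bi_encoder_med/final")
```

Because `MultipleNegativesRankingLoss` treats every other positive in a batch as a negative, the `no_duplicates` batch sampler matters: duplicate texts inside a batch would otherwise act as false negatives.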
null
BioNLP
# SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("meandyou200175/vn_bi_encoder_med") # Run inference sentences = [ 'Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn.', 'Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến.', 'Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. 
Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 
'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0365 | 100 | 0.4427 | 0.3117 | | 0.0730 | 200 | 0.3028 | 0.2373 | | 0.1096 | 300 | 0.2468 | 0.2063 | | 0.1461 | 400 | 0.2434 | 0.1857 | | 0.1826 | 500 | 0.2075 | 0.1724 | | 0.2191 | 600 | 0.185 | 0.1612 | | 0.2557 | 700 | 0.183 | 0.1509 | | 0.2922 | 800 | 0.1823 | 0.1445 | | 0.3287 | 900 | 0.1663 | 0.1370 | | 0.3652 | 1000 | 0.1622 | 0.1311 | | 0.4018 | 1100 | 0.1361 | 0.1312 | | 0.4383 | 1200 | 0.1834 | 0.1247 | | 0.4748 | 1300 | 0.1558 | 0.1320 | | 0.5113 | 1400 | 0.1289 | 0.1207 | | 0.5478 | 1500 | 0.1424 | 0.1192 | | 0.5844 | 1600 | 0.1412 | 0.1224 | | 0.6209 | 1700 | 0.1349 | 0.1084 | | 0.6574 | 1800 | 0.1275 | 0.1051 | | 0.6939 | 1900 | 0.1266 | 0.1013 | | 0.7305 | 2000 | 0.1245 | 0.1047 | | 0.7670 | 2100 | 0.114 | 0.0931 | | 0.8035 | 2200 | 0.1164 | 0.0925 | | 0.8400 | 2300 | 0.1136 | 0.0990 | | 0.8766 | 2400 | 0.1012 | 0.0907 | | 0.9131 | 2500 | 0.1273 | 0.0889 | | 0.9496 | 2600 | 0.1374 | 0.0918 | | 0.9861 | 2700 | 0.1135 | 0.0881 | | 1.0226 | 2800 | 0.0925 | 0.0875 | | 1.0592 | 2900 | 0.0854 | 0.0891 | | 1.0957 | 3000 | 0.0953 | 0.0847 | | 1.1322 | 3100 | 0.075 | 0.0798 | | 1.1687 | 3200 | 0.0811 | 0.0776 | | 1.2053 | 3300 | 0.0729 | 0.0768 | | 1.2418 | 3400 | 0.0626 | 0.0784 | | 1.2783 | 3500 | 0.0639 | 0.0784 | | 1.3148 | 3600 | 0.0658 | 0.0801 | | 1.3514 | 3700 | 0.0516 | 0.0741 | | 1.3879 | 3800 | 0.0327 | 0.0760 | | 1.4244 | 3900 | 0.064 | 0.0753 | | 1.4609 | 4000 | 0.0366 | 0.0738 | | 1.4974 | 4100 | 0.035 | 0.0755 | | 1.5340 | 4200 | 0.0327 | 0.0754 | | 1.5705 | 4300 | 0.0301 | 0.0759 | | 1.6070 | 4400 | 0.0304 | 0.0736 | | 1.6435 | 4500 | 0.02 | 0.0773 | | 1.6801 | 4600 | 0.0319 | 0.0720 | | 1.7166 | 4700 | 0.026 | 0.0693 | | 1.7531 | 4800 | 0.0267 | 0.0756 | | 1.7896 | 4900 | 0.0252 | 0.0686 | | 1.8262 | 5000 | 0.0236 | 0.0721 | | 1.8627 | 5100 | 0.0217 | 0.0752 | | 1.8992 | 5200 | 0.0259 | 0.0696 | | 
1.9357 | 5300 | 0.0391 | 0.0713 | | 1.9722 | 5400 | 0.0309 | 0.0778 | | 2.0088 | 5500 | 0.0246 | 0.0733 | | 2.0453 | 5600 | 0.0214 | 0.0674 | | 2.0818 | 5700 | 0.0217 | 0.0706 | | 2.1183 | 5800 | 0.0174 | 0.0722 | | 2.1549 | 5900 | 0.02 | 0.0682 | | 2.1914 | 6000 | 0.0174 | 0.0705 | | 2.2279 | 6100 | 0.0191 | 0.0695 | | 2.2644 | 6200 | 0.0165 | 0.0727 | | 2.3009 | 6300 | 0.0174 | 0.0698 | | 2.3375 | 6400 | 0.0188 | 0.0667 | | 2.3740 | 6500 | 0.0095 | 0.0673 | | 2.4105 | 6600 | 0.0138 | 0.0679 | | 2.4470 | 6700 | 0.0126 | 0.0659 | | 2.4836 | 6800 | 0.0093 | 0.0674 | | 2.5201 | 6900 | 0.0065 | 0.0696 | | 2.5566 | 7000 | 0.0089 | 0.0684 | | 2.5931 | 7100 | 0.0068 | 0.0670 | | 2.6297 | 7200 | 0.007 | 0.0689 | | 2.6662 | 7300 | 0.0095 | 0.0676 | | 2.7027 | 7400 | 0.0068 | 0.0667 | | 2.7392 | 7500 | 0.0079 | 0.0666 | | 2.7757 | 7600 | 0.008 | 0.0653 | | 2.8123 | 7700 | 0.0071 | 0.0670 | | 2.8488 | 7800 | 0.007 | 0.0677 | | 2.8853 | 7900 | 0.0087 | 0.0670 | | 2.9218 | 8000 | 0.0104 | 0.0660 | | 2.9584 | 8100 | 0.0086 | 0.0665 | | 2.9949 | 8200 | 0.0078 | 0.0654 | | 3.0314 | 8300 | 0.0071 | 0.0688 | | 3.0679 | 8400 | 0.0054 | 0.0679 | | 3.1045 | 8500 | 0.0059 | 0.0657 | | 3.1410 | 8600 | 0.0049 | 0.0629 | | 3.1775 | 8700 | 0.0043 | 0.0625 | | 3.2140 | 8800 | 0.0057 | 0.0634 | | 3.2505 | 8900 | 0.0062 | 0.0646 | | 3.2871 | 9000 | 0.0051 | 0.0658 | | 3.3236 | 9100 | 0.0044 | 0.0652 | | 3.3601 | 9200 | 0.0054 | 0.0649 | | 3.3966 | 9300 | 0.0032 | 0.0647 | | 3.4332 | 9400 | 0.0045 | 0.0651 | | 3.4697 | 9500 | 0.0036 | 0.0634 | | 3.5062 | 9600 | 0.0036 | 0.0629 | | 3.5427 | 9700 | 0.0037 | 0.0625 | | 3.5793 | 9800 | 0.0026 | 0.0624 | | 3.6158 | 9900 | 0.0021 | 0.0628 | | 3.6523 | 10000 | 0.0028 | 0.0621 | | 3.6888 | 10100 | 0.0028 | 0.0622 | | 3.7253 | 10200 | 0.0027 | 0.0616 | | 3.7619 | 10300 | 0.0037 | 0.0647 | | 3.7984 | 10400 | 0.0026 | 0.0621 | | 3.8349 | 10500 | 0.0029 | 0.0623 | | 3.8714 | 10600 | 0.0027 | 0.0649 | | 3.9080 | 10700 | 0.0028 | 0.0631 | | 3.9445 | 10800 | 0.0031 | 0.0630 | | 3.9810 | 10900 | 0.0027 | 0.0638 | | 4.0175 | 11000 | 0.0025 | 0.0636 | | 4.0541 | 11100 | 0.0021 | 0.0623 | | 4.0906 | 11200 | 0.0027 | 0.0637 | | 4.1271 | 11300 | 0.0022 | 0.0636 | | 4.1636 | 11400 | 0.0023 | 0.0620 | | 4.2001 | 11500 | 0.0021 | 0.0609 | | 4.2367 | 11600 | 0.0029 | 0.0608 | | 4.2732 | 11700 | 0.0021 | 0.0621 | | 4.3097 | 11800 | 0.0022 | 0.0619 | | 4.3462 | 11900 | 0.0018 | 0.0621 | | 4.3828 | 12000 | 0.0015 | 0.0615 | | 4.4193 | 12100 | 0.0018 | 0.0632 | | 4.4558 | 12200 | 0.002 | 0.0634 | | 4.4923 | 12300 | 0.0018 | 0.0621 | | 4.5289 | 12400 | 0.0014 | 0.0623 | | 4.5654 | 12500 | 0.0016 | 0.0623 | | 4.6019 | 12600 | 0.0013 | 0.0616 | | 4.6384 | 12700 | 0.0013 | 0.0620 | | 4.6749 | 12800 | 0.0016 | 0.0627 | | 4.7115 | 12900 | 0.0014 | 0.0615 | | 4.7480 | 13000 | 0.0016 | 0.0612 | | 4.7845 | 13100 | 0.0015 | 0.0615 | | 4.8210 | 13200 | 0.0012 | 0.0614 | | 4.8576 | 13300 | 0.002 | 0.0615 | | 4.8941 | 13400 | 0.0015 | 0.0617 | | 4.9306 | 13500 | 0.0016 | 0.0613 | | 4.9671 | 13600 | 0.0016 | 0.0614 | </details> ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.2.0 - Transformers: 4.45.1 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language 
Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "bkai-foundation-models/vietnamese-bi-encoder", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:43804", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Nhờ bác sĩ cho biết việc lựa chọn đóng đinh nội tủy và nẹp vít để kết hợp xương đòn dựa trên cơ sở nào ạ? Ca phẫu thuật thường kéo dài trong bao lâu? Bệnh nhân nằm viện mấy ngày?", "sentences": [" Chào em, là bệnh mãn tính phải điều trị suốt đời, phải kiên nhẫn và kiên trì nên đôi khi lượng đường trong cơ thể không ổn định. Lúc đi khám xét nghiệm thì ổn do bản thân biết mai đi khám nên sẽ kiêng ăn, ăn ít... còn bệnh lâu dài nên trong ngày đôi khi thèm chút này hay thích ăn chút kia, quên uống thuốc, suy nghĩ, mất ngủ cũng làm đường không ổn định. Đường trong cơ thể lúc lên lúc xuống dễ đưa đến biến chứng. Em hay thấy bệnh nhân tiểu đường tháo khớp ngón chân, ngón tay, đôi khi tháo khớp gối, khớp háng, đây là do tê liệt hệ thần kinh nên khi va chạm bệnh nhân không phát hiện. Đến khi phát hiện thì đã nhiễm trùng nặng phải tháo khớp. Theo BS mẹ em có khả năng do biến chứng tiểu đường vì mẹ em bị bệnh khá lâu nên ít nhiều ảnh hưởng thần kinh bị tê liệt gây đau. Em nên nhớ dặn mẹ đi tái khám và điều trị cho thật ổn định nhé! Thân mến!", " Để lựa chọn phương pháp đóng đinh nội tủy hay nẹp vít cho bệnh nhân cần dựa vào nhiều yếu tố. Trong lòng tủy xương có một cái ống, nếu lòng tủy bệnh nhân nhỏ mà đường gãy không bị gãy thành nhiều mảnh thì nên lựa chọn phương pháp đóng đinh. Phương pháp này có nhược điểm dễ bị lộ phần đinh khi đinh vừa đóng, chưa chắc vào xương. Tuy nhiên, ưu điểm là khi đóng đinh, đường mổ sẽ nhỏ, đơn giản. Đối với nẹp vít, đường mổ dài hơn nhưng phần nắn chỉnh sẽ tuyệt đối, vững chắc hơn. Nhìn chung, giữa 2 phương pháp thời gian mổ không khác biệt nhau nhiều, từ 30-45 phút sẽ hoàn thành cuộc phẫu thuật kết hợp xương. Tại bệnh viện Nhân dân 115, sau khi bệnh nhân được làm phẫu thuật có thể xuất viện rất sớm trong vòng khoảng 3-5 ngày, tùy theo đường mổ lớn hay nhỏ. Giữa việc lựa chọn phẫu thuật hay bảo tồn, đinh nội tủy hay nẹp vít phụ thuộc vào lòng tủy của bệnh nhân và thói quen, sự đánh giá của phẫu thuật viên. Cá nhân tôi thường lựa chọn phương pháp phẫu thuật nẹp vít sẽ cho kết quả nắn chỉnh tốt, chắc hơn và bệnh nhân không bị biến chứng trồi đinh về sau. Thân mến.", "Chào em, Tình trạng người mệt mỏi, khó thở, tim đập nhanh xảy ra khi không gắng sức có thể do nhiều nguyên nhân, gồm tim mạch, hô hấp, thần kinh cơ, tiêu hóa (chủ yếu là ống tiêu hóa trên), tâm lý, bệnh lý nội tiết tố… Viêm dạ dày trào ngược có thể gây các triệu chứng này do dịch acid trào ngược từ dạ dày lên thực quản kích thích thần kinh tim. Mặt khác bệnh dạ dày là bệnh có thể tái phát, điều trị hết bệnh rồi thì bệnh vẫn có thể tái lại. Do đó, nếu em đã khám tim mạch và hô hấp bình thường, để biết có phải mình mệt mỏi do bệnh dạ dày gây ra hay không thì tốt nhất là em khám chuyên khoa nội tiêu hóa và điều trị trào ngược dạ dày thực quản thử, nếu triệu chứng cải thiện nhanh chóng thì chính hắn là nguyên nhân, em nhé."]}, {"source_sentence": "Tôi bị tình trạng nuốt nước miếng có cảm giác bị vướng ở cổ, không đau rát, không ho sốt, ăn uống bình thường đã 1 ngày nay. Chỉ có nuốt nước miếng là có cảm giác vướng thôi, lỗ tai bên trái thì cảm giác ngứa nhẹ. 
Xin hỏi là bệnh gì vậy ạ?", "sentences": ["Em Lan thân mến, Hiện nay, xét nghiệm được xem là một xét nghiệm\r\nthường quy, nên thai kỳ của em cũng rất cần được làm những xét nghiệm này mặc\r\ndù gia đình em không có bệnh lý bất thường. Tuy nhiên, thai kỳ của em đã qua thời gian làm xét nghiệm Double test, bây\r\ngiờ em phải chờ đến lúc thai được 16 – 18 tuần tuổi, làm xét nghiệm Triple test\r\nem nhé! Chúc em và bé khỏe mạnh!", "Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao Chào em, Thoái hóa khớp, thoái hóa cột sống là tiến trình lão hóa không thể tránh khỏi của con người, đặc biệt có thể xảy ra sớm và nhanh hơn ở người nữ sau mãn kinh, sinh nở nhiều, suy dinh dưỡng hay ăn uống thiếu chất khoáng, lao động vất vả lúc còn trẻ. Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao, đặc biệt là đau lan xuống hai chân, tê yếu hai chân thì cần chụp MRI cột sống để tầm soát thoát vị đĩa đệm chèn ép tủy sống. Trường hợp của em, mới phát hiện thoái hóa cột sống thắt lưng gần đây, cũng mới uống thuốc 1 tuần và không duy trì nữa, việc đau lưng vẫn còn âm ỉ nhưng không lan xuống hai chân thì chưa đến mức cần chụp MRI cột sống thắt lưng. Nhưng mà, em cần tích cực điều trị để bệnh thoái hóa cột sống thắt lưng không tiến triển nặng hơn. Bệnh này trị khỏi hoàn toàn là không thể, vì sinh lão bệnh tử không thể cải hoàn, nhưng mà việc điều trị tích cực sẽ giúp khống chế được bệnh, giảm đau và giảm tốc độ tiến triển của bệnh. Về việc sử dụng thuốc, dù là thuốc Tây hay thuốc Đông y, em cũng cần phải thăm khám bs ck cơ xương khớp (Tây y) hay ck y học cổ truyền (Đông y) để được kê thuốc phù hợp. các thuốc thường dùng là giảm đau, giãn cơ, bổ sung vi khoáng chất (canxi, vitamin D3, magie...). Bên cạnh đó, về phương pháp giảm đau hỗ trợ không dùng thuốc, em nên chú ý: - Chú ý thay đổi tư thế trong quá trình làm việc, không giữ mãi một tư thế trong nhiều giờ liền. Ngồi làm việc đúng tư thế để tránh các bệnh cột sống. - Vận động đúng cách, khi vác vật nặng không vặn cột sống. - Thường xuyên tập thể dục rèn luyện để cột sống vững chắc, cơ thể dẻo dai, bơi cũng được mà yoga là tốt nhất. - Ăn uống khoa học, xây dựng chế độ dinh dưỡng hợp lý, tăng cường nhóm thực phẩm giàu canxi, vitamin D, omega 3… giúp nâng cao độ chắc khỏe của đĩa đệm cũng như xương khớp. - Duy trì cân nặng bình thường, tránh để tăng cân quá mức. - Tư thế ngủ: nằm ngửa trên ván cứng hay nệm bông ép chặt, tránh nệm lò xo hay nệm cao su quá mềm, có thể đệm ở vùng khoeo làm co nhẹ khớp gối và khớp háng, nên nằm đầu thấp không gối sẽ tốt cho cột sống cổ. - Có thể thực hiện điều trị vật lý và các liệu pháp phản xạ: bao gồm phương pháp nhiệt như chườm nóng (túi nước, muối rang, cám rang, lá lốt, lá ngải cứu nóng); dùng các dòng điện tại khoa vật lý trị liệu, điều trị bằng laser; châm cứu, kéo cơ để hỗ trợ giảm đau cơ cạnh sống. Trân trọng!", "Chào bạn, Nuốt vướng ở cổ thường gặp trong một số bệnh lý viêm nhiễm hầu họng như viêm họng, viêm amidan mạn, trào ngược dạ dày thực quản, hội chứng chảy mũi sau… Đây là có thể là triệu chứng đầu tiên báo hiệu một đợt bùng phát cấp tính của viêm nhiễm hô hấp trên do triệu chứng mới chỉ xuất hiện 1 ngày. Bạn nên khám bác sĩ Tai mũi họng để thăm khám trực tiếp, đánh giá và kê toa điều trị bạn nhé! Thân mến."]}, {"source_sentence": "Chào bác sĩ, em bị gãy xương gót, đã đóng đinh đến nay được gần 5 tuần. 
Vậy 6 tuần em tháo đinh được chưa ạ?", "sentences": [" Chào em, gồm 2 trị số, trị số lớn nhất gọi là huyết áp tâm thu, bình thường < 140 và > 90 mmHg; trị số thấp nhất gọi là huyết áp tâm trương, bình thường < 90 và > 60 mmHg. Huyết áp có thể tăng khi căng thẳng, do lo lắng, do hội chứng áo choàng trắng (khi vào bv, khi gặp bác sĩ thì huyết áp cao), bệnh lý viêm nhiễm, do cafe, khi khó thở... nhìn chung là các stress đối với cơ thể. Như vậy, huyết áp ghi nhận ở những lúc cơ thể đang lo lắng, bồn chồn, có bệnh thì sẽ không phản ánh chính xác được huyết áp dao động bình thường của người bệnh. Do vậy em nên khám chuyên khoa tim mạch, bác sĩ sẽ thăm khám và làm xét nghiệm kiểm tra xem em có các dấu chứng của tăng huyết áp hay không (như dày thành tim, tiểu đạm, đo huyết áp 24 giờ...) để xác định em có tăng huyết áp hay không và điều trị thích hợp. Những triệu chứng hoa mắt, chóng mặt, đau đầu, đau 1 bên mắt, tiểu nhiều có thể là do bệnh tăng huyết áp gây ra (ảnh hưởng lên mạch máu não, lên thận...) hoặc là 1 bệnh lý khác như thiếu máu, rối loạn tiền đình, viêm nhiễm hệ thống, viêm mũi xoang, bệnh lý mạch máu não... (và tăng huyết áp chỉ là phản ứng của cơ thể khi có stress). Để tìm ra bệnh và giải quyết nỗi lo về bệnh, em nên đến bệnh viện để kiểm tra sức khỏe em nhé. Thân mến! ", " Chào em, Thời điểm 6 tuần là quá sớm để rút đinh cố định xương gót (trừ trường hợp khung cố định xương bên ngoài). Tháo đinh vít kim loại chỉ bắt buộc thực hiện sớm trong những trường hợp bất thường như gãy vít, nhiễm trùng, khớp giả... gây ra các triệu chứng bất thường với bệnh nhân mà thôi. Em nên tái khám tại chuyên khoa Chấn thương Chỉnh hình để bác sĩ kiểm tra lại việc lành xương của em tốt chưa và dặn em lịch trình rút đinh phù hợp, em nhé. Thân mến.", "K dạ dày không điều trị tiên lượng sống khá ngắn Chào em, K dạ dày là ung thư dạ dày. Bệnh ung thư dạ dày là bệnh lý ác tính và có chỉ định phẫu thuật cắt khối u – cắt dạ dày khi còn có thể cắt được. Nếu đã phát hiện ung thư dạ dày mà không điều trị phẫu thuật thì thời gian sống của bệnh nhân trung bình là 6 tháng đến 1 năm tùy loại ung thư dạ dày, khi ung thư tiến triển di căn có thể gây nhiều đau đớn hơn. Hiện tại chị em đang bị suy nhược cơ thể nhiều, không ăn uống được, đau nhiều do ung thư dạ dày là có chỉ định vào bệnh viện nằm điều trị luôn rồi, chứ không thể nào lấy thuốc mà không tới phòng khám được đâu. Vô bệnh viện chị em sẽ được truyền dịch, chích thuốc, nâng thể trạng lên rồi mới tính đến chuyện điều trị khối ung thư kia. Em đưa chị em đến bệnh viện càng sớm càng tốt, tốt nhất là bệnh viện Ung bướu, em nhé."]}, {"source_sentence": "Thưa bác sĩ,\r\n\r\nEm bị đục thủy tinh thể do chấn thương và vừa mổ mắt về và em cũng bị cận thị. Thời gian khoảng 1 tuần em thấy mắt mình nhìn chỉ rõ hơn được 1 phần nào. Nhìn xa thì vẫn thấy nhưng vẫn mờ mờ. Bác sĩ cho em lời khuyên nên làm cách nào và mắt em có thể sáng lại như bình thường được không ạ?\r\n\r\nEm xin chân thành cảm ơn! (Minh Tiến - Bình Định)", "sentences": ["Bạn Minh Tiến thân mến, Hiện nay phẫu thuật đục thủy tinh thể đã được y học nói chung và ngành Nhãn khoa Việt Nam thực hiện hoàn chỉnh đến mức tuyệt vời. Phẫu thuật này được xem như một cuộc cách mạng rất đáng tự hào của ngành nhãn khoa. Hàng ngày có thể tới hàng ngàn ca phẫu thuật đem lại ánh sáng cho người mù lòa đục thể thủy tinh tại Việt Nam. Nói như vậy để giúp cho bạn hiểu rõ phẫu thuật này các bác sĩ Việt Nam thực hiện rất thường xuyên và rất tốt. 
Tuy nhiên, với mắt đục thủy tinh thể do chấn thương của bạn là ca phẫu thuật tương đối không đơn giản. Thêm vào đó ngoài đục thủy tinh thể do chấn thương, mắt bạn cũng có thể kèm theo tổn thương ở các bộ phận khác của mắt mà trước mổ bác sĩ khó có thể chẩn đoán được. Với hai lý do nêu trên, nên đôi khi mắt mổ khó có thể tốt theo ý muốn của cả bệnh nhân lẫn thầy thuốc. Bạn cần có thời gian theo dõi và điều trị tiếp sau mổ. Sau thời gian ổn định khoảng 1 tháng, bạn cần đo thử kính xem có cải thiện thị lực thêm không? Chúc bạn may mắn!", "Chào em, Bình thường các hạch trong cơ thể không sưng to lên đến mức có thể sờ chạm hay nhận biết được. Vì thế, hạch sưng lên, hay thường gọi là nổi hạch, là một triệu chứng bất thường của cơ thể. Cho nên, em lo lắng là đúng khi phát hiện hạch ở vùng cổ. Hạch bạch huyết đóng vai trò quan trọng đối với hoạt động của hệ miễn dịch. Chúng chứa các tế bào miễn dịch như lympho bào, đại thực bào... có chức năng miễn dịch chống lại các yếu tố lạ như vi khuẩn, virus, kí sinh trùng... xâm nhập vào cơ thể. Trong quá trình đó các hạch có thể bị viêm và sưng lên. Một số trường hợp hạch sưng có thể là hạch ung thư hoặc di căn. Đặc điểm của hạch viêm là nhỏ, số lượng ít, bờ tròn đều, không phát triển theo thời gian, không xâm lấn da xung quanh. Thông thường đối với hạch viêm thì nguồn viêm có thể tấn công tại hạch, cũng có khi là hạch viêm phản ứng với ổ viêm nhiễm cạnh đó, điều trị hết viêm thì hạch sẽ lặn dần, có thể lặn chậm hơn vài tuần đến vài tháng, có một số loại hạch cũng là hạch viêm nhưng mà chỉ giảm kích thước rồi cứ \"lì\" vậy luôn - không lặn hẳn nhưng không còn sưng như trước và vẫn giữ hình ảnh của hạch viêm, cũng có loại hạch viêm sau lại chuyển sang xơ chai hóa như sẹo cũ và không lặn. Như vậy, em có 1 hạch vùng cổ đã được xác định là hạch viêm thông qua sinh thiết hạch cách đây 10 năm. Trong vòng 10 năm nay, hạch cổ đó không có triệu chứng bất thường. Gần đây, hạch cổ đó có biểu hiện viêm trở lại, mặc dù em uống thuốc (tự mua) thì hạch hết sưng đau, nhưng em cũng cần khám lại bên chuyên khoa ung bướu để kiểm tra tổng quát lại 1 lần, tìm nguyên nhân gây kích thích hạch viêm này tái hoạt động, xem là nguyên nhân lành tính hay tiềm ẩn nguyên nhân khác (vì lần kiểm tra trước đã cách đây 10 năm rồi), em nhé.", " Chào em, Trường hợp em mô tả là những bất thường của hệ hô hấp có thể là bệnh lý tai mũi họng hay hô hấp dưới như viêm phổi, viêm phế quản, em cần đến các cơ sở y tế chuyên sâu tai mũi họng hay hô hấp để khám thêm. Những biểu hiện đó hoàn toàn không có cơ sở nghĩ . Thân mến!"]}, {"source_sentence": "Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn.", "sentences": ["Câu hỏi của em rất chân thành. Tự ý thức quyết tâm cai nghiệm là điều đáng quý. Nếu em tiếp tục sử dụng thì tình trạng sẽ tồi tệ hơn rất nhiều. Ba yếu tố quan trọng nhất và tiến hành đồng thời để cai nghiện thành công, đó là: 1. Ý chí 2. Sự hiểu biết thấu đáo 3. Môi trường thân thiện. Các Trung tâm cai nghiện sẽ giúp em phần 2 và phần 3, từ đó sẽ củng cố phần 1 của em. Trường hợp ở nhà mà em tự cai, thực hành mỗi ngày với 3 điều kiện trên, em sẽ thành công như nhiều bạn khác. Không nên nôn nóng, sốt ruột. Trước tiên em phải thuộc lòng và thực hành những quy tắc này thành thói quen và áp dụng suốt đời. Nhiều trường hợp cai được vài năm vẫn tái nghiện. 
Do đó, nên tránh xa những \"nguồn\" khiến em tái nghiện, tránh xa bạn bè nghiện ngập em nhé. Chúc em quyết tâm và đem lại niềm vui cho bố mẹ.", "Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến.", "Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến."]}]}
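A minimal sketch of the sentence-similarity usage this record's metadata implies (bi-encoder retrieval trained with MultipleNegativesRankingLoss). The fine-tuned checkpoint's own repo id appears earlier in the record, so the base model named in the metadata, `bkai-foundation-models/vietnamese-bi-encoder`, stands in here purely as a placeholder:

```python
# Minimal sketch, assuming the sentence-transformers API; the base model from the
# metadata is a stand-in for the fine-tuned checkpoint whose id appears earlier
# in this record.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")

# Query/passage texts shortened from the widget examples above.
query = "Nhờ bác sĩ cho biết việc lựa chọn đóng đinh nội tủy và nẹp vít để kết hợp xương đòn dựa trên cơ sở nào ạ?"
passages = [
    "Để lựa chọn phương pháp đóng đinh nội tủy hay nẹp vít cho bệnh nhân cần dựa vào nhiều yếu tố.",
    "Viêm dạ dày trào ngược có thể gây các triệu chứng này do dịch acid trào ngược từ dạ dày lên thực quản.",
]

# Encode both sides and rank passages by cosine similarity, as a bi-encoder retriever would.
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]
best = int(scores.argmax())
print(f"best match ({scores[best]:.3f}): {passages[best][:60]}...")
```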
task
[ "TEXT_CLASSIFICATION" ]
45,593
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_wnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-30T02:25:57Z
2023-01-30T02:28:48+00:00
137
0
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_wnli
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GLUE WNLI
      type: glue
      config: wnli
      split: validation
      args: wnli
    metrics:
    - type: accuracy
      value: 0.5633802816901409
      name: Accuracy
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_sa_GLUE_Experiment_logit_kd_wnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3447
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.552 | 1.0 | 3 | 0.3618 | 0.4366 |
| 0.3998 | 2.0 | 6 | 0.3864 | 0.5634 |
| 0.3807 | 3.0 | 9 | 0.4036 | 0.4366 |
| 0.3731 | 4.0 | 12 | 0.3447 | 0.5634 |
| 0.3615 | 5.0 | 15 | 0.3462 | 0.5634 |
| 0.3527 | 6.0 | 18 | 0.3560 | 0.4366 |
| 0.3527 | 7.0 | 21 | 0.3454 | 0.5634 |
| 0.3491 | 8.0 | 24 | 0.3456 | 0.5634 |
| 0.349 | 9.0 | 27 | 0.3482 | 0.5634 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_sa_GLUE_Experiment_logit_kd_wnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3447
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.552 | 1.0 | 3 | 0.3618 | 0.4366 |
| 0.3998 | 2.0 | 6 | 0.3864 | 0.5634 |
| 0.3807 | 3.0 | 9 | 0.4036 | 0.4366 |
| 0.3731 | 4.0 | 12 | 0.3447 | 0.5634 |
| 0.3615 | 5.0 | 15 | 0.3462 | 0.5634 |
| 0.3527 | 6.0 | 18 | 0.3560 | 0.4366 |
| 0.3527 | 7.0 | 21 | 0.3454 | 0.5634 |
| 0.3491 | 8.0 | 24 | 0.3456 | 0.5634 |
| 0.349 | 9.0 | 27 | 0.3482 | 0.5634 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
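A minimal usage sketch for the checkpoint this record describes, assuming the standard `transformers` sequence-classification API; the premise/hypothesis pair is illustrative only, and the printed label names come from the checkpoint's own config rather than the auto-generated card:

```python
# Hedged sketch: load the WNLI fine-tune named in this record and score one
# premise/hypothesis pair with softmax probabilities.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "gokuls/distilbert_sa_GLUE_Experiment_logit_kd_wnli"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

premise = "The trophy does not fit into the brown suitcase because it is too large."
hypothesis = "The trophy is too large."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Map each probability to the label name stored in the model config.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```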
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_sa_GLUE_Experiment_logit_kd_wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "config": "wnli", "split": "validation", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.5633802816901409, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,594
mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "German", "RAG", "Retrieval", "Question-Answering", "Summarization", "Reasoning", "en", "de", "base_model:avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI", "base_model:quantized:avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
2025-02-07T02:29:33Z
2025-02-08T01:10:23+00:00
484
0
--- base_model: avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI language: - en - de library_name: transformers license: mit tags: - German - RAG - Retrieval - Question-Answering - Summarization - Reasoning quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better | | 
[GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
null
Non_BioNLP
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | | | 
[GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF/resolve/main/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
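A sketch of one way to run a quant from the table above locally. `llama-cpp-python` and `huggingface_hub` are assumptions here (the card itself defers to llama.cpp and TheBloke's READMEs); the Q4_K_M file name is copied from the table, and the prompt and context size are arbitrary illustrative choices:

```python
# Minimal sketch, assuming llama-cpp-python as a stand-in for the llama.cpp CLI.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M quant listed in the table.
gguf_path = hf_hub_download(
    repo_id="mradermacher/German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI-i1-GGUF",
    filename="German_RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice
out = llm(
    "Fasse zusammen: Die Kommission hat heute neue Leitlinien für den Datenschutz veröffentlicht.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```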
{"base_model": "avemio/German-RAG-PHI-3.5-MINI-4B-MERGED-HESSIAN-AI", "language": ["en", "de"], "library_name": "transformers", "license": "mit", "tags": ["German", "RAG", "Retrieval", "Question-Answering", "Summarization", "Reasoning"], "quantized_by": "mradermacher"}
task
[ "SUMMARIZATION" ]
45,595
prithivMLmods/Messier-Opus-14B-Elite7
prithivMLmods
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "trl", "sft", "math", "code", "conversational", "en", "zh", "base_model:prithivMLmods/Equuleus-Opus-14B-Exp", "base_model:finetune:prithivMLmods/Equuleus-Opus-14B-Exp", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-26T19:20:51Z
2025-03-04T19:10:42+00:00
419
2
--- base_model: - prithivMLmods/Equuleus-Opus-14B-Exp language: - en - zh library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - trl - sft - math - code model-index: - name: Messier-Opus-14B-Elite7 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: wis-k/instruction-following-eval split: train args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 71.13 name: averaged accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: SaylorTwift/bbh split: test args: num_few_shot: 3 metrics: - type: acc_norm value: 49.7 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: lighteval/MATH-Hard split: test args: num_few_shot: 4 metrics: - type: exact_match value: 40.71 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa split: train args: num_few_shot: 0 metrics: - type: acc_norm value: 18.79 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 20.7 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 48.93 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7 name: Open LLM Leaderboard --- ![zxvxsdfvsd.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/J4bIFvb3gMfwA3-DhBXYX.png) # **Messier-Opus-14B-Elite7** > Messier-Opus-14B-Elite7 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence. ## **Key Improvements** 1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses. 2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions. 3. 
**Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries. 4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses. 5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. ## **Quickstart with transformers** Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Messier-Opus-14B-Elite7" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "What are the key principles of general-purpose AI?" messages = [ {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## **Intended Use** 1. **General-Purpose Reasoning**: Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems. 2. **Educational and Informational Assistance**: Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users. 3. **Conversational AI and Chatbots**: Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation. 4. **Multilingual Applications**: Supports global communication, translations, and multilingual content generation. 5. **Structured Data Processing**: Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation. 6. **Long-Form Content Generation**: Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs. ## **Limitations** 1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its large parameter size and long-context support. 2. **Potential Bias in Responses**: While designed to be neutral, outputs may still reflect biases present in training data. 3. **Inconsistent Outputs in Creative Tasks**: May produce variable results in storytelling and highly subjective topics. 4. **Limited Real-World Awareness**: Does not have access to real-time events beyond its training cutoff. 5. **Error Propagation in Extended Outputs**: Minor errors in early responses may affect overall coherence in long-form outputs. 6. **Prompt Sensitivity**: The effectiveness of responses may depend on how well the input prompt is structured. 
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Messier-Opus-14B-Elite7-details)! Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FMessier-Opus-14B-Elite7&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)! | Metric |Value (%)| |-------------------|--------:| |**Average** | 41.66| |IFEval (0-Shot) | 71.13| |BBH (3-Shot) | 49.70| |MATH Lvl 5 (4-Shot)| 40.71| |GPQA (0-shot) | 18.79| |MuSR (0-shot) | 20.70| |MMLU-PRO (5-shot) | 48.93|
null
Non_BioNLP
![zxvxsdfvsd.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/J4bIFvb3gMfwA3-DhBXYX.png) # **Messier-Opus-14B-Elite7** > Messier-Opus-14B-Elite7 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence. ## **Key Improvements** 1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses. 2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions. 3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries. 4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses. 5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. ## **Quickstart with transformers** Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Messier-Opus-14B-Elite7" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "What are the key principles of general-purpose AI?" messages = [ {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## **Intended Use** 1. **General-Purpose Reasoning**: Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems. 2. **Educational and Informational Assistance**: Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users. 3. **Conversational AI and Chatbots**: Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation. 4. **Multilingual Applications**: Supports global communication, translations, and multilingual content generation. 5. **Structured Data Processing**: Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation. 6. 
**Long-Form Content Generation**: Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs. ## **Limitations** 1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its large parameter size and long-context support. 2. **Potential Bias in Responses**: While designed to be neutral, outputs may still reflect biases present in training data. 3. **Inconsistent Outputs in Creative Tasks**: May produce variable results in storytelling and highly subjective topics. 4. **Limited Real-World Awareness**: Does not have access to real-time events beyond its training cutoff. 5. **Error Propagation in Extended Outputs**: Minor errors in early responses may affect overall coherence in long-form outputs. 6. **Prompt Sensitivity**: The effectiveness of responses may depend on how well the input prompt is structured. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Messier-Opus-14B-Elite7-details)! Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FMessier-Opus-14B-Elite7&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)! | Metric |Value (%)| |-------------------|--------:| |**Average** | 41.66| |IFEval (0-Shot) | 71.13| |BBH (3-Shot) | 49.70| |MATH Lvl 5 (4-Shot)| 40.71| |GPQA (0-shot) | 18.79| |MuSR (0-shot) | 20.70| |MMLU-PRO (5-shot) | 48.93|
{"base_model": ["prithivMLmods/Equuleus-Opus-14B-Exp"], "language": ["en", "zh"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "trl", "sft", "math", "code"], "model-index": [{"name": "Messier-Opus-14B-Elite7", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "IFEval (0-Shot)", "type": "wis-k/instruction-following-eval", "split": "train", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 71.13, "name": "averaged accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "SaylorTwift/bbh", "split": "test", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 49.7, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "lighteval/MATH-Hard", "split": "test", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 40.71, "name": "exact match"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "split": "train", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 18.79, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 20.7, "name": "acc_norm"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 48.93, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMessier-Opus-14B-Elite7", "name": "Open LLM Leaderboard"}}]}]}
task
[ "TRANSLATION" ]
45,596
hongkeon/bge-m3-korean-Q4_K_M-GGUF
hongkeon
sentence-similarity
[ "sentence-transformers", "gguf", "korean", "transformers", "multilingual", "sentence-similarity", "feature-extraction", "llama-cpp", "gguf-my-repo", "af", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "ht", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ky", "lo", "lt", "lv", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "pa", "pl", "pt", "qu", "ro", "ru", "si", "sk", "sl", "so", "sq", "sr", "sv", "sw", "ta", "te", "th", "tl", "tr", "uk", "ur", "vi", "yo", "zh", "base_model:upskyy/bge-m3-korean", "base_model:quantized:upskyy/bge-m3-korean", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-18T03:13:02Z
2025-02-18T03:13:07+00:00
74
0
--- base_model: upskyy/bge-m3-korean datasets: [] language: - af - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - es - et - eu - fa - fi - fr - gl - gu - he - hi - hr - ht - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ky - lo - lt - lv - mk - ml - mn - mr - ms - my - ne - nl - 'no' - pa - pl - pt - qu - ro - ru - si - sk - sl - so - sq - sr - sv - sw - ta - te - th - tl - tr - uk - ur - vi - yo - zh library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - korean - sentence-transformers - transformers - multilingual - sentence-similarity - feature-extraction - llama-cpp - gguf-my-repo widget: - source_sentence: 이집트 군대가 형제애를 단속하다 sentences: - 이집트의 군대가 무슬림 형제애를 단속하다 - 아르헨티나의 기예르모 코리아와 네덜란드의 마틴 버커크의 또 다른 준결승전도 매력적이다. - 그것이 사실일 수도 있다고 생각하는 것은 재미있다. - source_sentence: 오, 그리고 다시 결혼은 근본적인 인권이라고 주장한다. sentences: - 특히 결혼은 근본적인 인권이라고 말한 후에. - 해변에 있는 흑인과 그의 개... - 이란은 핵 프로그램이 평화적인 목적을 위한 것이라고 주장한다 - source_sentence: 두 사람이 계단을 올라가 건물 안으로 들어간다 sentences: - 글쎄, 나는 우리가 꽤 나빠진 사이트 목록을 만들었고 일부를 정리해야한다는 일부 사이트에서 알았고 지금 법은 슈퍼 펀드이며 당신이 아무리간에 독성 폐기물을 일으킨 사람이라면 누구나 알고 있습니다. 결국 당신이 아는 사람은 누구나 땅에 손상을 입혔거나 모두가 기여해야한다는 것을 알고 있습니다. 그리고 우리가이 돈을 정리하기 위해 수퍼 펀드 거래를 가져 왔을 때 많은 돈을 벌었습니다. 모든 것을 꺼내서 다시 실행하면 다른 지역을 채울 수 있습니다. 음. 확실히 셔먼 시설과 같은 더 나은 솔루션을 가지고있는 것 같습니다. 기름 통에 넣은 다음 시멘트가 깔려있는 곳에서 밀봉하십시오. - 한 사람이 계단을 올라간다. - 두 사람이 함께 계단을 올라간다. - source_sentence: 그래, 내가 알아차린 적이 있어 sentences: - 나는 알아차리지 못했다. - 이것은 내가 영국의 아서 안데르센 사업부의 파트너인 짐 와디아를 아서 안데르센 경영진이 선택한 것보다 래리 웨인바흐를 안데르센 월드와이드의 경영 파트너로 승계하기 위해 안데르센 컨설팅 사업부(현재의 엑센츄어라고 알려져 있음)의 전 관리 파트너인 조지 샤힌에 대한 지지를 표명했을 때 가장 명백했다. - 나는 메모했다. - source_sentence: 여자가 전화를 하는 동안 두 남자가 돈을 위해 악기를 연주한다. sentences: - 마이크에 대고 노래를 부르고 베이스를 연주하는 남자. - 빨대를 사용하는 아이 - 돈을 위해 악기를 연주하는 사람들 model-index: - name: upskyy/bge-m3-korean results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.8740181295716805 name: Pearson Cosine - type: spearman_cosine value: 0.8723737976913686 name: Spearman Cosine - type: pearson_manhattan value: 0.8593266961329962 name: Pearson Manhattan - type: spearman_manhattan value: 0.8687629058449345 name: Spearman Manhattan - type: pearson_euclidean value: 0.8597907936339472 name: Pearson Euclidean - type: spearman_euclidean value: 0.8693987158996017 name: Spearman Euclidean - type: pearson_dot value: 0.8683777071455441 name: Pearson Dot - type: spearman_dot value: 0.8665500024614361 name: Spearman Dot - type: pearson_max value: 0.8740181295716805 name: Pearson Max - type: spearman_max value: 0.8723737976913686 name: Spearman Max --- # hongkeon/bge-m3-korean-Q4_K_M-GGUF This model was converted to GGUF format from [`upskyy/bge-m3-korean`](https://huggingface.co/upskyy/bge-m3-korean) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/upskyy/bge-m3-korean) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI:
```bash
llama-cli --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -c 2048
```
null
Non_BioNLP
# hongkeon/bge-m3-korean-Q4_K_M-GGUF
This model was converted to GGUF format from [`upskyy/bge-m3-korean`](https://huggingface.co/upskyy/bge-m3-korean) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/upskyy/bge-m3-korean) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hongkeon/bge-m3-korean-Q4_K_M-GGUF --hf-file bge-m3-korean-q4_k_m.gguf -c 2048
```
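Since this GGUF conversion is of an embedding model used for sentence similarity, a hedged sketch of exercising it through `llama-cpp-python`'s embedding mode follows. That library, and the mean-pooling fallback below, are assumptions not taken from the card; the file name and the Korean sentence pair are copied from this record:

```python
# Hedged sketch: llama-cpp-python embedding mode as one way to use this checkpoint.
# Depending on the build, embed() may return a pooled vector or per-token vectors,
# so a mean-pool fallback is included.
import math
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="hongkeon/bge-m3-korean-Q4_K_M-GGUF",
    filename="bge-m3-korean-q4_k_m.gguf",
)
llm = Llama(model_path=path, embedding=True)

def pooled(vec):
    # Mean-pool token-level output if that is what the build returns.
    return [sum(col) / len(vec) for col in zip(*vec)] if vec and isinstance(vec[0], list) else vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Sentence pair borrowed from the original model's widget examples.
e1 = pooled(llm.embed("이집트 군대가 형제애를 단속하다"))
e2 = pooled(llm.embed("이집트의 군대가 무슬림 형제애를 단속하다"))
print(round(cosine(e1, e2), 3))
```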
{"base_model": "upskyy/bge-m3-korean", "datasets": [], "language": ["af", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "ht", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ky", "lo", "lt", "lv", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "pa", "pl", "pt", "qu", "ro", "ru", "si", "sk", "sl", "so", "sq", "sr", "sv", "sw", "ta", "te", "th", "tl", "tr", "uk", "ur", "vi", "yo", "zh"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["korean", "sentence-transformers", "transformers", "multilingual", "sentence-similarity", "feature-extraction", "llama-cpp", "gguf-my-repo"], "widget": [{"source_sentence": "이집트 군대가 형제애를 단속하다", "sentences": ["이집트의 군대가 무슬림 형제애를 단속하다", "아르헨티나의 기예르모 코리아와 네덜란드의 마틴 버커크의 또 다른 준결승전도 매력적이다.", "그것이 사실일 수도 있다고 생각하는 것은 재미있다."]}, {"source_sentence": "오, 그리고 다시 결혼은 근본적인 인권이라고 주장한다.", "sentences": ["특히 결혼은 근본적인 인권이라고 말한 후에.", "해변에 있는 흑인과 그의 개...", "이란은 핵 프로그램이 평화적인 목적을 위한 것이라고 주장한다"]}, {"source_sentence": "두 사람이 계단을 올라가 건물 안으로 들어간다", "sentences": ["글쎄, 나는 우리가 꽤 나빠진 사이트 목록을 만들었고 일부를 정리해야한다는 일부 사이트에서 알았고 지금 법은 슈퍼 펀드이며 당신이 아무리간에 독성 폐기물을 일으킨 사람이라면 누구나 알고 있습니다. 결국 당신이 아는 사람은 누구나 땅에 손상을 입혔거나 모두가 기여해야한다는 것을 알고 있습니다. 그리고 우리가이 돈을 정리하기 위해 수퍼 펀드 거래를 가져 왔을 때 많은 돈을 벌었습니다. 모든 것을 꺼내서 다시 실행하면 다른 지역을 채울 수 있습니다. 음. 확실히 셔먼 시설과 같은 더 나은 솔루션을 가지고있는 것 같습니다. 기름 통에 넣은 다음 시멘트가 깔려있는 곳에서 밀봉하십시오.", "한 사람이 계단을 올라간다.", "두 사람이 함께 계단을 올라간다."]}, {"source_sentence": "그래, 내가 알아차린 적이 있어", "sentences": ["나는 알아차리지 못했다.", "이것은 내가 영국의 아서 안데르센 사업부의 파트너인 짐 와디아를 아서 안데르센 경영진이 선택한 것보다 래리 웨인바흐를 안데르센 월드와이드의 경영 파트너로 승계하기 위해 안데르센 컨설팅 사업부(현재의 엑센츄어라고 알려져 있음)의 전 관리 파트너인 조지 샤힌에 대한 지지를 표명했을 때 가장 명백했다.", "나는 메모했다."]}, {"source_sentence": "여자가 전화를 하는 동안 두 남자가 돈을 위해 악기를 연주한다.", "sentences": ["마이크에 대고 노래를 부르고 베이스를 연주하는 남자.", "빨대를 사용하는 아이", "돈을 위해 악기를 연주하는 사람들"]}], "model-index": [{"name": "upskyy/bge-m3-korean", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.8740181295716805, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8723737976913686, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8593266961329962, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8687629058449345, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8597907936339472, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8693987158996017, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.8683777071455441, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.8665500024614361, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8740181295716805, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8723737976913686, "name": "Spearman Max"}]}]}]}
task
[ "SEMANTIC_SIMILARITY" ]
45,597
shahadalll/mt5-base-finetuned-mt5-summarization-task
shahadalll
text2text-generation
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-25T15:11:41Z
2024-02-25T17:23:23+00:00
9
0
---
base_model: google/mt5-base
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: mt5-base-finetuned-mt5-summarization-task
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mt5-base-finetuned-mt5-summarization-task

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0374
- Rouge1: 6.3413
- Rouge2: 0.7742
- Rougel: 6.3762
- Rougelsum: 6.3936
- Gen Len: 17.5465

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.0273 | 1.0 | 1473 | 2.1479 | 5.8945 | 0.5705 | 5.8537 | 5.9128 | 16.6932 |
| 2.6605 | 2.0 | 2946 | 2.0578 | 6.7115 | 0.762 | 6.6509 | 6.6955 | 17.5501 |
| 2.5505 | 3.0 | 4419 | 2.0374 | 6.3413 | 0.7742 | 6.3762 | 6.3936 | 17.5465 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mt5-base-finetuned-mt5-summarization-task

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0374
- Rouge1: 6.3413
- Rouge2: 0.7742
- Rougel: 6.3762
- Rougelsum: 6.3936
- Gen Len: 17.5465

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.0273 | 1.0 | 1473 | 2.1479 | 5.8945 | 0.5705 | 5.8537 | 5.9128 | 16.6932 |
| 2.6605 | 2.0 | 2946 | 2.0578 | 6.7115 | 0.762 | 6.6509 | 6.6955 | 17.5501 |
| 2.5505 | 3.0 | 4419 | 2.0374 | 6.3413 | 0.7742 | 6.3762 | 6.3936 | 17.5465 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
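A minimal sketch of running the checkpoint named in this record through the `transformers` summarization pipeline; the input text and generation lengths are illustrative choices, not taken from the card:

```python
# Hedged sketch: summarization with the fine-tuned mT5 checkpoint from this record.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="shahadalll/mt5-base-finetuned-mt5-summarization-task",
)

article = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Members debated funding, routes, and a proposed fare increase before "
    "voting to delay the final decision until next month."
)
print(summarizer(article, max_length=48, min_length=8, do_sample=False)[0]["summary_text"])
```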
{"base_model": "google/mt5-base", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mt5-base-finetuned-mt5-summarization-task", "results": []}]}
task
[ "SUMMARIZATION" ]
45,598
evincent18/marian-finetuned-kde4-en-to-fr
evincent18
translation
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-07T23:16:12Z
2023-03-08T05:57:18+00:00
12
0
--- datasets: - kde4 license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - type: bleu value: 52.836492533087124 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.8365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.8365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
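Since the usage sections above are marked "More information needed", the following is a minimal sketch (not part of the original card) showing how such a Marian checkpoint is typically loaded for English-to-French translation; the sample sentence is an arbitrary illustration.

```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint directly from the Hub.
translator = pipeline("translation_en_to_fr", model="evincent18/marian-finetuned-kde4-en-to-fr")

print(translator("Unable to import the selected file.")[0]["translation_text"])
```

The BLEU score above (52.84) was measured on the kde4 split, so the model should perform best on similar software/UI strings.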
{"datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.836492533087124, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
45,599
ITG/PlatVR-sft
ITG
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "chatml", "synthetic data", "finetune", "conversational", "en", "dataset:ITG/PlatVR-sft", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-05T08:24:42Z
2024-04-17T09:12:53+00:00
1
1
--- datasets: - ITG/PlatVR-sft language: - en library_name: transformers license: apache-2.0 tags: - chatml - mistral - synthetic data - finetune --- # PlatVR-sft - Hermes 2 Pro - Mistral 7B ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/646f4b19075e11ca78db58a6/5HZJYp1DuYP47nu-U7F7M.jpeg) **Image generated by [copilot designer](https://copilot.microsoft.com/images/create).** ## Model Details This model is part of the EVIDENT framework, designed to enhance the creative process in generating background images for virtual reality sets. It interprets user instructions to generate and modify prompts for text-to-image models. This is the SFT version of the model; you can check the additional [DPO](https://huggingface.co/ITG/PlatVR-dpo) and [KTO](https://huggingface.co/ITG/PlatVR-kto) versions as well. The [demo](https://youtu.be/NKevZLvaGaA) integrates a diffusion model to test prompt-image alignment, and mechanisms for user feedback and iterative prompt refinement, aiming to enhance user creativity and satisfaction. The instruction categories are: - **Addition**: Involves the inclusion of new elements or features. - **Condensation**: Consists of summarizing the description. - **Modification**: Alters specific aspects of the description to change the scene. - **Rearrangement**: Reordering of sentences within the descriptions. - **Removal**: Elimination of specific details in the description. - **Rephrase**: Rewriting parts of the description. - **Scene Change**: Overall description context switch. The output language of the model is English, but other languages can be used as input (quality depends on the quantity of tokens used in the pre-training phase for the given language). ### Model Description Developed as part of the EVIDENT framework, this model leverages a large language model fine-tuned on synthetic data to generate and refine text prompts for creating virtual reality backgrounds. - **Developed by:** [ITG](https://itg.es/en) - **Model type:** Text-to-Text for Image Prompt Generation - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** [Hermes 2 Pro](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) ### Model Sources - **Demo video:** [EVIDENT Demo](https://youtu.be/NKevZLvaGaA) ## Uses ### Prompt Format It uses ChatML as the prompt format. Here is the original prompt that was used in the fine-tuning process: ``` <|im_start|>system As an AI assistant dedicated to refining and adjusting prompts for image generation, your primary task involves interpreting and applying user-specific modifications to enhance the original prompt. Your modifications may include: Additions: Introducing new elements or features to enrich the context, such as weather conditions or additional objects, aiming to enable the AI to interpret and generate more complex and detailed prompts. Condensations: Summarizing longer descriptions into more concise forms without losing essential meaning, aiming at generating relevant images from shorter prompts. Modifications: Altering specific details within the descriptions to change the scene. Rearrangement: Changing the order of sentences or phrases to test the AI's context understanding and narrative flow. Removal: Eliminating redundant or non-essential information to clarify the prompt. Rephrase: Rewriting sentences or phrases to convey the same meaning using different words or structures. Scene Change: Altering the setting or background to create a completely new context.
Your goal is to skillfully adapt the new prompt in line with the user's precise directives, ensuring the essence of their vision is captured—all while maintaining responses exclusively in English, regardless of the original prompt's language. It is crucial that the revised prompt strictly adheres to the user's intent, incorporating their specified changes with precision. Additionally, ensure the new prompt does not suggest alterations that imply dynamics or qualities unsuitable for visual representation, such as smell, scent, or sound, which cannot be captured in an image. Your role is to ensure the prompt is optimized for image generation, clearly reflecting the user's adjustments while respecting these guidelines, with a consistent use of English for all responses. The focus should be on creating a vivid, static depiction that stays true to the conceptual and aesthetic requirements set forth by the user, communicated effectively in English. Remember, the new prompt must not contain references to smell, scent, or sound, which cannot be captured in an image. Below is the original prompt that you will meticulously refine: {original_prompt}<|im_end|> <|im_start|>user {instruction}<|im_end|> <|im_start|>assistant ``` ### Notes - **{original_prompt}**: Is the previous prompt that the system returned to the user. - **{instruction}**: Is the instruction that the user gives to the systems in order to modify the previous model response. - **Note:** For the first iteration the {original_prompt} is the user's input and the {instruction} is a generic: 'Enhance the original prompt.'. ### Direct Use This model is designed for direct use in generating and refining text prompts for text-to-image generation, specifically tailored for creating virtual reality environments and sets. Load model: ```bash docker run --gpus all --rm --shm-size 1g -p 8080:80 -v ~/huggingface/hub/:/data ghcr.io/huggingface/text-generation-inference:latest --model-id ITG/PlatVR-sft ``` Python client: ```python from huggingface_hub import InferenceClient client = InferenceClient(model="http://localhost:8080") template = ("""<|im_start|>system As an AI assistant dedicated to refining and adjusting prompts for image generation, your primary task involves interpreting and applying user-specific modifications to enhance the original prompt. Your modifications may include: Additions: Introducing new elements or features to enrich the context, such as weather conditions or additional objects, aiming to enable the AI to interpret and generate more complex and detailed prompts. Condensations: Summarizing longer descriptions into more concise forms without losing essential meaning, aiming at generating relevant images from shorter prompts. Modifications: Altering specific details within the descriptions to change the scene. Rearrangement: Changing the order of sentences or phrases to test the AI's context understanding and narrative flow. Removal: Eliminating redundant or non-essential information to clarify the prompt. Rephrase: Rewriting sentences or phrases to convey the same meaning using different words or structures. Scene Change: Altering the setting or background to create a completely new context. Your goal is to skillfully adapt the new prompt in line with the user's precise directives, ensuring the essence of their vision is captured—all while maintaining responses exclusively in English, regardless of the original prompt's language. 
It is crucial that the revised prompt strictly adheres to the user's intent, incorporating their specified changes with precision. Additionally, ensure the new prompt does not suggest alterations that imply dynamics or qualities unsuitable for visual representation, such as smell, scent, or sound, which cannot be captured in an image. Your role is to ensure the prompt is optimized for image generation, clearly reflecting the user's adjustments while respecting these guidelines, with a consistent use of English for all responses. The focus should be on creating a vivid, static depiction that stays true to the conceptual and aesthetic requirements set forth by the user, communicated effectively in English. Remember, the new prompt must not contain references to smell, scent, or sound, which cannot be captured in an image. Below is the original prompt that you will meticulously refine: {original_prompt}<|im_end|> <|im_start|>user {instruction}<|im_end|> <|im_start|>assistant """) instruction = "Add details to the original prompt in a single sentence." original_prompt = "Una montaña" input_prompt = template.format(original_prompt=original_prompt, instruction=instruction) print(client.text_generation(prompt=input_prompt, max_new_tokens=512)) ``` ### Downstream Use The model can be fine-tuned or integrated into larger ecosystems or applications that require dynamic, user-driven creation of prompts for visual content. ### Out-of-Scope Use The model is not intended for uses beyond text prompt generation for visual content. ## Bias, Risks, and Limitations The model may inherit biases from its training data or exhibit limitations in understanding complex user instructions. Potential risks include generating inappropriate or unintended content based on ambiguous prompts. ## Evaluation metrics Please go to [the KTO version of the model](https://huggingface.co/ITG/PlatVR-kto#evaluation-metrics) for the full report. ### Recommendations Users should be aware of the model's limitations and biases. It is recommended to monitor the outputs for unintended content and refine prompts accordingly. ### Demo example ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646f4b19075e11ca78db58a6/ZKIvKElm5bJuG7xH51iqa.png) ## Request Demo - Contact Email: [email protected] ## Model Card Contact - Contact Email: [email protected]
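Building on the `InferenceClient` snippet above, the loop below is a hedged sketch (not part of the original card) of the iterative refinement flow described in the Notes: the first call uses the user's raw idea with the generic instruction, and each later call feeds the previous completion back in as `{original_prompt}`. It assumes the ChatML `template` string from the Python client example and a TGI container running locally.

```python
from huggingface_hub import InferenceClient

# Paste the ChatML `template` string from the Python client example above.
template = "..."

client = InferenceClient(model="http://localhost:8080")

def refine(original_prompt: str, instruction: str) -> str:
    # Fill the template with the current prompt and the user's instruction, then generate.
    prompt = template.format(original_prompt=original_prompt, instruction=instruction)
    return client.text_generation(prompt=prompt, max_new_tokens=512).strip()

# First iteration: the user's input plus the generic instruction.
current = refine("Una montaña", "Enhance the original prompt.")
# Later iterations: the previous model response becomes the original prompt.
current = refine(current, "Add a snow storm and dramatic evening lighting.")
print(current)
```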
null
Non_BioNLP
# PlatVR-sft - Hermes 2 Pro - Mistral 7B ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/646f4b19075e11ca78db58a6/5HZJYp1DuYP47nu-U7F7M.jpeg) **Image generated by [copilot designer](https://copilot.microsoft.com/images/create).** ## Model Details This model is part of the EVIDENT framework, designed to enhance the creative process in generating background images for virtual reality sets. It interprets user instructions to generate and modify prompts for text-to-image models. This is the SFT version of the model; you can check the additional [DPO](https://huggingface.co/ITG/PlatVR-dpo) and [KTO](https://huggingface.co/ITG/PlatVR-kto) versions as well. The [demo](https://youtu.be/NKevZLvaGaA) integrates a diffusion model to test prompt-image alignment, and mechanisms for user feedback and iterative prompt refinement, aiming to enhance user creativity and satisfaction. The instruction categories are: - **Addition**: Involves the inclusion of new elements or features. - **Condensation**: Consists of summarizing the description. - **Modification**: Alters specific aspects of the description to change the scene. - **Rearrangement**: Reordering of sentences within the descriptions. - **Removal**: Elimination of specific details in the description. - **Rephrase**: Rewriting parts of the description. - **Scene Change**: Overall description context switch. The output language of the model is English, but other languages can be used as input (quality depends on the quantity of tokens used in the pre-training phase for the given language). ### Model Description Developed as part of the EVIDENT framework, this model leverages a large language model fine-tuned on synthetic data to generate and refine text prompts for creating virtual reality backgrounds. - **Developed by:** [ITG](https://itg.es/en) - **Model type:** Text-to-Text for Image Prompt Generation - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** [Hermes 2 Pro](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) ### Model Sources - **Demo video:** [EVIDENT Demo](https://youtu.be/NKevZLvaGaA) ## Uses ### Prompt Format It uses ChatML as the prompt format. Here is the original prompt that was used in the fine-tuning process: ``` <|im_start|>system As an AI assistant dedicated to refining and adjusting prompts for image generation, your primary task involves interpreting and applying user-specific modifications to enhance the original prompt. Your modifications may include: Additions: Introducing new elements or features to enrich the context, such as weather conditions or additional objects, aiming to enable the AI to interpret and generate more complex and detailed prompts. Condensations: Summarizing longer descriptions into more concise forms without losing essential meaning, aiming at generating relevant images from shorter prompts. Modifications: Altering specific details within the descriptions to change the scene. Rearrangement: Changing the order of sentences or phrases to test the AI's context understanding and narrative flow. Removal: Eliminating redundant or non-essential information to clarify the prompt. Rephrase: Rewriting sentences or phrases to convey the same meaning using different words or structures. Scene Change: Altering the setting or background to create a completely new context.
Your goal is to skillfully adapt the new prompt in line with the user's precise directives, ensuring the essence of their vision is captured—all while maintaining responses exclusively in English, regardless of the original prompt's language. It is crucial that the revised prompt strictly adheres to the user's intent, incorporating their specified changes with precision. Additionally, ensure the new prompt does not suggest alterations that imply dynamics or qualities unsuitable for visual representation, such as smell, scent, or sound, which cannot be captured in an image. Your role is to ensure the prompt is optimized for image generation, clearly reflecting the user's adjustments while respecting these guidelines, with a consistent use of English for all responses. The focus should be on creating a vivid, static depiction that stays true to the conceptual and aesthetic requirements set forth by the user, communicated effectively in English. Remember, the new prompt must not contain references to smell, scent, or sound, which cannot be captured in an image. Below is the original prompt that you will meticulously refine: {original_prompt}<|im_end|> <|im_start|>user {instruction}<|im_end|> <|im_start|>assistant ``` ### Notes - **{original_prompt}**: Is the previous prompt that the system returned to the user. - **{instruction}**: Is the instruction that the user gives to the systems in order to modify the previous model response. - **Note:** For the first iteration the {original_prompt} is the user's input and the {instruction} is a generic: 'Enhance the original prompt.'. ### Direct Use This model is designed for direct use in generating and refining text prompts for text-to-image generation, specifically tailored for creating virtual reality environments and sets. Load model: ```bash docker run --gpus all --rm --shm-size 1g -p 8080:80 -v ~/huggingface/hub/:/data ghcr.io/huggingface/text-generation-inference:latest --model-id ITG/PlatVR-sft ``` Python client: ```python from huggingface_hub import InferenceClient client = InferenceClient(model="http://localhost:8080") template = ("""<|im_start|>system As an AI assistant dedicated to refining and adjusting prompts for image generation, your primary task involves interpreting and applying user-specific modifications to enhance the original prompt. Your modifications may include: Additions: Introducing new elements or features to enrich the context, such as weather conditions or additional objects, aiming to enable the AI to interpret and generate more complex and detailed prompts. Condensations: Summarizing longer descriptions into more concise forms without losing essential meaning, aiming at generating relevant images from shorter prompts. Modifications: Altering specific details within the descriptions to change the scene. Rearrangement: Changing the order of sentences or phrases to test the AI's context understanding and narrative flow. Removal: Eliminating redundant or non-essential information to clarify the prompt. Rephrase: Rewriting sentences or phrases to convey the same meaning using different words or structures. Scene Change: Altering the setting or background to create a completely new context. Your goal is to skillfully adapt the new prompt in line with the user's precise directives, ensuring the essence of their vision is captured—all while maintaining responses exclusively in English, regardless of the original prompt's language. 
It is crucial that the revised prompt strictly adheres to the user's intent, incorporating their specified changes with precision. Additionally, ensure the new prompt does not suggest alterations that imply dynamics or qualities unsuitable for visual representation, such as smell, scent, or sound, which cannot be captured in an image. Your role is to ensure the prompt is optimized for image generation, clearly reflecting the user's adjustments while respecting these guidelines, with a consistent use of English for all responses. The focus should be on creating a vivid, static depiction that stays true to the conceptual and aesthetic requirements set forth by the user, communicated effectively in English. Remember, the new prompt must not contain references to smell, scent, or sound, which cannot be captured in an image. Below is the original prompt that you will meticulously refine: {original_prompt}<|im_end|> <|im_start|>user {instruction}<|im_end|> <|im_start|>assistant """) instruction = "Add details to the original prompt in a single sentence." original_prompt = "Una montaña" input_prompt = template.format(original_prompt=original_prompt, instruction=instruction) print(client.text_generation(prompt=input_prompt, max_new_tokens=512)) ``` ### Downstream Use The model can be fine-tuned or integrated into larger ecosystems or applications that require dynamic, user-driven creation of prompts for visual content. ### Out-of-Scope Use The model is not intended for uses beyond text prompt generation for visual content. ## Bias, Risks, and Limitations The model may inherit biases from its training data or exhibit limitations in understanding complex user instructions. Potential risks include generating inappropriate or unintended content based on ambiguous prompts. ## Evaluation metrics Please go to [the KTO version of the model](https://huggingface.co/ITG/PlatVR-kto#evaluation-metrics) for the full report. ### Recommendations Users should be aware of the model's limitations and biases. It is recommended to monitor the outputs for unintended content and refine prompts accordingly. ### Demo example ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646f4b19075e11ca78db58a6/ZKIvKElm5bJuG7xH51iqa.png) ## Request Demo - Contact Email: [email protected] ## Model Card Contact - Contact Email: [email protected]
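For completeness, here is a hedged sketch (not from the original card) of running the model with plain Transformers instead of a TGI server. It assumes the repository ships the ChatML chat template inherited from Hermes 2 Pro, so `apply_chat_template` can build the prompt; otherwise, format the ChatML string manually as shown in the Prompt Format section.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ITG/PlatVR-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Paste the full system message from the Prompt Format section, with {original_prompt} filled in.
system_msg = "As an AI assistant dedicated to refining and adjusting prompts for image generation, ..."
messages = [
    {"role": "system", "content": system_msg},
    {"role": "user", "content": "Enhance the original prompt."},
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```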
{"datasets": ["ITG/PlatVR-sft"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "tags": ["chatml", "mistral", "synthetic data", "finetune"]}
task
[ "SUMMARIZATION" ]
45,600
Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF
Triangle104
text-generation
[ "transformers", "gguf", "language", "granite-3.2", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:ibm-granite/granite-3.2-8b-instruct", "base_model:quantized:ibm-granite/granite-3.2-8b-instruct", "license:apache-2.0", "region:us", "conversational" ]
2025-02-28T12:55:34Z
2025-02-28T12:57:51+00:00
21
0
--- base_model: ibm-granite/granite-3.2-8b-instruct library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - language - granite-3.2 - llama-cpp - gguf-my-repo inference: false --- # Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`ibm-granite/granite-3.2-8b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) for more details on the model. --- Model Summary: - Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-8B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required. Developers: Granite Team, IBM Website: Granite Docs Release Date: February 26th, 2025 License: Apache 2.0 Supported Languages: - English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages. Intended Use: - This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications. Capabilities - Thinking Summarization Text classification Text extraction Question-answering Retrieval Augmented Generation (RAG) Code related tasks Function-calling tasks Multilingual dialog use cases Long-context tasks including long document/meeting summarization, long document QA, etc. Generation: - This is a simple example of how to use Granite-3.2-8B-Instruct model. Install the following libraries: - pip install torch torchvision torchaudio pip install accelerate pip install transformers Then, copy the snippet from the section that is relevant for your use case. from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed import torch model_path="ibm-granite/granite-3.2-8b-instruct" device="cuda" model = AutoModelForCausalLM.from_pretrained( model_path, device_map=device, torch_dtype=torch.bfloat16, ) tokenizer = AutoTokenizer.from_pretrained( model_path ) conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}] input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device) set_seed(42) output = model.generate( **input_ids, max_new_tokens=8192, ) prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True) print(prediction) Example Outputs - thinking=True Here is my thought process: 1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3). 2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid. 3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid. 4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters. 5. 
We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture). 6. Solving this equation for x will give us the volume of 70% solution needed. Here is my response: To solve this, we need to set up an equation based on the concentration of acid before and after mixing. 1. **Initial Acid Content**: - We start with 10 liters of a 30% acid solution. - Pure acid in this = 10 * 0.3 = 3 liters. 2. **Variable for Addition**: - Let x be the liters of 70% acid solution to be added. - Pure acid in x liters of 70% solution = 0.7x liters. 3. **Final Mixture Concentration**: - Total volume of mixture = 10 + x liters. - Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution). 4. **Setting up the equation for 50% acid concentration**: - We require that the ratio of pure acid to total volume is 0.5 (50%). - Therefore, the equation is: \[ \frac{3 + 0.7x}{10 + x} = 0.5 \] 5. **Solving the equation**: Multiply both sides by (10 + x) to clear the fraction: \[ 3 + 0.7x = 0.5(10 + x) \] Expanding the right side: \[ 3 + 0.7x = 5 + 0.5x \] Subtract 0.5x from both sides: \[ 3 + 0.2x = 5 \] Subtract 3 from both sides: \[ 0.2x = 2 \] Divide by 0.2: \[ x = 10 \] So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture. Let's verify: - Initial acid: 3 liters - Acid from added solution: 0.7 * 10 = 7 liters - Total acid: 3 + 7 = 10 liters - Total volume: 10 + 10 = 20 liters - Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\) This confirms our solution. Final answer: 10 liters of 70% acid solution should be added. thinking=False Let's denote the amount of 70% acid solution to be added as x liters. We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it. We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid. So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters. The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation: 3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x) Now, let's solve for x: 3 + 0.7x = 5 + 0.5x 0.7x - 0.5x = 5 - 3 0.2x = 2 x = 2 / 0.2 x = 10 So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). 
``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -c 2048 ```
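Besides the llama.cpp CLI and server shown above, the same GGUF file can be loaded from Python. This is a hedged sketch using the third-party `llama-cpp-python` bindings, which are not mentioned in the original card; `Llama.from_pretrained` downloads the file from the Hub and requires `huggingface_hub` to be installed.

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Fetch the quantized GGUF from this repo and load it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF",
    filename="granite-3.2-8b-instruct-q4_k_m.gguf",
    n_ctx=2048,
)

result = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "You have 10 liters of a 30% acid solution. How many liters of a "
                   "70% acid solution must be added to achieve a 50% acid mixture?",
    }],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```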
null
Non_BioNLP
# Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`ibm-granite/granite-3.2-8b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) for more details on the model. --- Model Summary: - Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-8B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required. Developers: Granite Team, IBM Website: Granite Docs Release Date: February 26th, 2025 License: Apache 2.0 Supported Languages: - English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages. Intended Use: - This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications. Capabilities - Thinking Summarization Text classification Text extraction Question-answering Retrieval Augmented Generation (RAG) Code related tasks Function-calling tasks Multilingual dialog use cases Long-context tasks including long document/meeting summarization, long document QA, etc. Generation: - This is a simple example of how to use Granite-3.2-8B-Instruct model. Install the following libraries: - pip install torch torchvision torchaudio pip install accelerate pip install transformers Then, copy the snippet from the section that is relevant for your use case. from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed import torch model_path="ibm-granite/granite-3.2-8b-instruct" device="cuda" model = AutoModelForCausalLM.from_pretrained( model_path, device_map=device, torch_dtype=torch.bfloat16, ) tokenizer = AutoTokenizer.from_pretrained( model_path ) conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}] input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device) set_seed(42) output = model.generate( **input_ids, max_new_tokens=8192, ) prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True) print(prediction) Example Outputs - thinking=True Here is my thought process: 1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3). 2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid. 3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid. 4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters. 5. We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture). 6. Solving this equation for x will give us the volume of 70% solution needed. 
Here is my response: To solve this, we need to set up an equation based on the concentration of acid before and after mixing. 1. **Initial Acid Content**: - We start with 10 liters of a 30% acid solution. - Pure acid in this = 10 * 0.3 = 3 liters. 2. **Variable for Addition**: - Let x be the liters of 70% acid solution to be added. - Pure acid in x liters of 70% solution = 0.7x liters. 3. **Final Mixture Concentration**: - Total volume of mixture = 10 + x liters. - Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution). 4. **Setting up the equation for 50% acid concentration**: - We require that the ratio of pure acid to total volume is 0.5 (50%). - Therefore, the equation is: \[ \frac{3 + 0.7x}{10 + x} = 0.5 \] 5. **Solving the equation**: Multiply both sides by (10 + x) to clear the fraction: \[ 3 + 0.7x = 0.5(10 + x) \] Expanding the right side: \[ 3 + 0.7x = 5 + 0.5x \] Subtract 0.5x from both sides: \[ 3 + 0.2x = 5 \] Subtract 3 from both sides: \[ 0.2x = 2 \] Divide by 0.2: \[ x = 10 \] So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture. Let's verify: - Initial acid: 3 liters - Acid from added solution: 0.7 * 10 = 7 liters - Total acid: 3 + 7 = 10 liters - Total volume: 10 + 10 = 20 liters - Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\) This confirms our solution. Final answer: 10 liters of 70% acid solution should be added. thinking=False Let's denote the amount of 70% acid solution to be added as x liters. We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it. We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid. So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters. The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation: 3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x) Now, let's solve for x: 3 + 0.7x = 5 + 0.5x 0.7x - 0.5x = 5 - 3 0.2x = 2 x = 2 / 0.2 x = 10 So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. 
``` ./llama-cli --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/granite-3.2-8b-instruct-Q4_K_M-GGUF --hf-file granite-3.2-8b-instruct-q4_k_m.gguf -c 2048 ```
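Once the `llama-server` command above is running, recent llama.cpp builds expose an OpenAI-compatible endpoint (by default on port 8080). The snippet below is a hedged sketch of querying it with the `openai` client; the port, endpoint path, and the ignored `model` field are assumptions about llama.cpp defaults rather than details from the original card.

```python
# pip install openai
from openai import OpenAI

# Point the client at the local llama-server instead of the OpenAI API.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="granite-3.2-8b-instruct-q4_k_m",  # llama-server typically ignores this field
    messages=[{"role": "user", "content": "Summarize the 30%/70% acid mixture problem above in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```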
{"base_model": "ibm-granite/granite-3.2-8b-instruct", "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.2", "llama-cpp", "gguf-my-repo"], "inference": false}
task
[ "TEXT_CLASSIFICATION", "SUMMARIZATION" ]
45,601
Huzaifa68/investment_v4
Huzaifa68
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:35", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Huzaifa68/investment_v1", "base_model:finetune:Huzaifa68/investment_v1", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-27T05:51:18Z
2024-09-27T05:51:38+00:00
6
0
--- base_model: Huzaifa68/investment_v1 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:35 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Historical Performance (%) sentences: - Investment Objective - investment committee - historical performance - source_sentence: Investment Objective sentences: - historical performance - Sindh Workers' - objective - source_sentence: Fund Statistics sentences: - Investment Objective - statistics - Asset Allocation - source_sentence: Investment Objective sentences: - Asset Allocation - Investment Committee - objective - source_sentence: Basic Information sentences: - investment objective - information - Asset Allocation (as % of Total Assets) --- # SentenceTransformer based on Huzaifa68/investment_v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) <!-- at revision 27d3f59a04e7b99eaaefce04446b00b7ca617c79 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Huzaifa68/investment_v4") # Run inference sentences = [ 'Basic Information', 'information', 'Asset Allocation (as % of Total Assets)', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 35 training samples * Columns: <code>anchor</code>, <code>postive</code>, and <code>negative</code> * Approximate statistics based on the first 35 samples: | | anchor | postive | negative | |:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 5.26 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.66 tokens</li><li>max: 4 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 6.69 tokens</li><li>max: 11 tokens</li></ul> | * Samples: | anchor | postive | negative | |:----------------------------------|:----------------------------------|:-----------------------------------------------------| | <code>Investment Objective</code> | <code>investment objective</code> | <code>Asset Allocation (as % of Total Assets)</code> | | <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Statistics</code> | | <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Performance</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: 
False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.0 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who 
have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
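The loss and triplet layout described above can be reproduced end to end. The snippet below is a small illustrative sketch, not taken from the original card, using the Sentence Transformers v3 trainer with a toy in-memory dataset; note that the source dataset's second column is literally spelled `postive`, while the sketch uses the conventional `positive` name (MultipleNegativesRankingLoss only relies on column order).

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Huzaifa68/investment_v1")

# Toy triplets mirroring the (anchor, positive, negative) layout shown in the card.
train_dataset = Dataset.from_dict({
    "anchor": ["Investment Objective", "Fund Statistics"],
    "positive": ["investment objective", "statistics"],
    "negative": ["Asset Allocation (as % of Total Assets)", "Fund Performance"],
})

loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()

model.save_pretrained("investment_v4-sketch")
```

With only 35 training triplets in the original run, this mainly illustrates the pipeline rather than a meaningful fine-tune.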
null
Non_BioNLP
# SentenceTransformer based on Huzaifa68/investment_v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Huzaifa68/investment_v1](https://huggingface.co/Huzaifa68/investment_v1) <!-- at revision 27d3f59a04e7b99eaaefce04446b00b7ca617c79 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Huzaifa68/investment_v4") # Run inference sentences = [ 'Basic Information', 'information', 'Asset Allocation (as % of Total Assets)', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 35 training samples * Columns: <code>anchor</code>, <code>postive</code>, and <code>negative</code> * Approximate statistics based on the first 35 samples: | | anchor | postive | negative | |:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 5.26 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.66 tokens</li><li>max: 4 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 6.69 tokens</li><li>max: 11 tokens</li></ul> | * Samples: | anchor | postive | negative | |:----------------------------------|:----------------------------------|:-----------------------------------------------------| | <code>Investment Objective</code> | <code>investment objective</code> | <code>Asset Allocation (as % of Total Assets)</code> | | <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Statistics</code> | | <code>Investment Objective</code> | <code>investment objective</code> | <code>Fund Performance</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 
'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.0 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
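As a concrete retrieval-style use of the embeddings above, the following hedged sketch ranks candidate section headers against a query with the `similarity` helper already shown in the card; the candidate strings are taken from the widget examples.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Huzaifa68/investment_v4")

query = "Investment Objective"
candidates = [
    "investment objective",
    "Asset Allocation (as % of Total Assets)",
    "Fund Statistics",
    "historical performance",
]

# Cosine scores between the query and each candidate, highest first.
scores = model.similarity(model.encode([query]), model.encode(candidates))[0]
ranked = sorted(zip(candidates, scores.tolist()), key=lambda pair: pair[1], reverse=True)
for text, score in ranked:
    print(f"{score:.3f}  {text}")
```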
{"base_model": "Huzaifa68/investment_v1", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:35", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Historical Performance (%)", "sentences": ["Investment Objective", "investment committee", "historical performance"]}, {"source_sentence": "Investment Objective", "sentences": ["historical performance", "Sindh Workers'", "objective"]}, {"source_sentence": "Fund Statistics", "sentences": ["Investment Objective", "statistics", "Asset Allocation"]}, {"source_sentence": "Investment Objective", "sentences": ["Asset Allocation", "Investment Committee", "objective"]}, {"source_sentence": "Basic Information", "sentences": ["investment objective", "information", "Asset Allocation (as % of Total Assets)"]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,602
3gg/all-mpnet-base-v2
3gg
sentence-similarity
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-09T15:02:46Z
2023-05-09T15:09:31+00:00
25
0
--- language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. 
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 512 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with the true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file. 
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
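As a purely illustrative sketch of the in-batch contrastive objective described under "Fine-tuning" above — cosine similarity between every (anchor_i, positive_j) pair in the batch, then cross-entropy against the diagonal of true pairs — the following assumes a scale factor that is not stated in this card:

```python
import torch
import torch.nn.functional as F

# Sketch of the in-batch contrastive loss: rows are anchors, columns are positives,
# and the correct match for anchor i is positive i (the diagonal).
def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = scale * anchor_emb @ positive_emb.T              # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```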
null
Non_BioNLP
# all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). 
We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 512 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with the true pairs. #### Hyper parameters We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | 
[paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
{"language": "en", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"]}
task
[ "QUESTION_ANSWERING" ]
45,603
RichardErkhov/MediaTek-Research_-_Breeze-7B-Base-v1_0-8bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:2403.02712", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
2024-05-16T10:00:20Z
2024-05-16T10:06:34+00:00
6
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Breeze-7B-Base-v1_0 - bnb 8bits - Model creator: https://huggingface.co/MediaTek-Research/ - Original model: https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0/ Original model description: --- pipeline_tag: text-generation license: apache-2.0 language: - zh - en --- # Model Card for Breeze-7B-Base-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese. For details of this model please read our [paper](https://arxiv.org/abs/2403.02712). Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similar-sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen(1.5)-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] 
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Demo [Try Demo Here](https://huggingface.co/spaces/MediaTek-Research/Demo_Breeze-7B-Instruct-v1.0) ## Features - Breeze-7B-Base-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v1_0 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v1_0 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance Here we compare Breeze-7B-Base-v1_0 with other open-source base language models of similar parameter size that are widely recognized for their good performance in Chinese. **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | #Parameters | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |---------------------------------------------- |--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [**Breeze-7B-Base-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) | 7B | 42.67 | 80.61 | 31.99 | 61.24 | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | 7B | 46.59 | 74.41 | 30.56 | 63.07 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 7B | 36.93 | 79.27 | 27.78 | 64.89 | ## Instruction-tuned Model Performance Here we compare Breeze-7B-Instruct-v1_0 with other open-source instruction-tuned language models of similar parameter size that are widely recognized for their good performance in Chinese. Also, we listed the benchmark scores of GPT-3.5 Turbo (1106), which represents one of the most widely used high-quality cloud language model API services, for reference. **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). 
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. | Models | #Parameters | MT-Bench-tw (Score)| TMMLU+ (ACC) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|-------------|------------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Reasoning|EN, Chat |EN, Knowledge| | | |0 shot | 0 shot | 0 shot |0 shot | 0 shot | | [**Breeze-7B-Instruct-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) | 7B |6.0 | 42.67 | 39.58 |7.4 | 61.73 | | [GPT-3.5-Turbo](https://openai.com) | |7.1 | 43.56 | 45.14 |7.9 | 67.09 | | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | 7B |6.4 | 45.65 | 34.72 |7.6 | 61.85 | | [Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B |5.6 | 34.95 | 33.33 |7.6 | 59.97 | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | 25.69 |6.0 | 59.45 | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | 23.61 |N/A* | 50.50 | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | 31.25 |N/A* | 42.72 | \* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese. 
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities| AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|----------| --------- | | **Breeze-7B-Instruct-v1_0** | 7.8 | 5.2 | 4.2 | 4.2 | 4.1 | 7.6 | 5.9 | 9.1 | 6.0 | | GPT-3.5-Turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Qwen1.5-7B-Chat | 9 | 5.6 | 4.7 | 2.8 | 3.7 | 8.0 | 8.0 | 9.4 | 6.4 | | Mistral-7B-v0.2-Instruct | 6.9 | 4.6 | 4.3 | 3.3 | 4.4 | 7.2 | 6.2 | 7.8 | 5.6 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | **Breeze-7B-Instruct-v1_0** | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 | | Mistral-7B-v0.2-Instruct | 32.79 | 38.05 | 34.89 | 34.04 | 34.94 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | GPT-3.5-Turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | Qwen1.5-7B-Chat | 41.48 | 51.66 | 44.05 | 45.40 | 45.65 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). | Models | Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | **Breeze-7B-Instruct-v1_0** | 10.74 | 11.1k | | Qwen1.5-7B-Chat | 9.35 | 38.9k | | Yi-6B-Chat | 10.62 | 5.2k | | Mistral-7B-Instruct-v0.2 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | <!---| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |---> <!---## Long-context Performance TBD---> ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Instruction Model model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) # Basemodel model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Base-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) ``` **For Breeze-7B-Instruct**, the structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. 
``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] >>> outputs = model.generate(tokenizer.apply_chat_template(chat, return_tensors="pt"), max_new_tokens=128) >>> print(tokenizer.decode(outputs[0])) ``` ## Citation <!-- ``` @article{breeze7b2024, title={}, author={}, journal={arXiv}, year={2024} } ``` ---> ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Breeze-7B-Base-v1_0 - bnb 8bits - Model creator: https://huggingface.co/MediaTek-Research/ - Original model: https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0/ Original model description: --- pipeline_tag: text-generation license: apache-2.0 language: - zh - en --- # Model Card for Breeze-7B-Base-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese. For details of this model please read our [paper](https://arxiv.org/abs/2403.02712). Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similar-sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen(1.5)-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] 
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Demo [Try Demo Here](https://huggingface.co/spaces/MediaTek-Research/Demo_Breeze-7B-Instruct-v1.0) ## Features - Breeze-7B-Base-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v1_0 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v1_0 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance Here we compare Breeze-7B-Base-v1_0 with other open-source base language models of similar parameter size that are widely recognized for their good performance in Chinese. **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | #Parameters | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |---------------------------------------------- |--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [**Breeze-7B-Base-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) | 7B | 42.67 | 80.61 | 31.99 | 61.24 | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | 7B | 46.59 | 74.41 | 30.56 | 63.07 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 7B | 36.93 | 79.27 | 27.78 | 64.89 | ## Instruction-tuned Model Performance Here we compare Breeze-7B-Instruct-v1_0 with other open-source instruction-tuned language models of similar parameter size that are widely recognized for their good performance in Chinese. Also, we listed the benchmark scores of GPT-3.5 Turbo (1106), which represents one of the most widely used high-quality cloud language model API services, for reference. **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). 
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. | Models | #Parameters | MT-Bench-tw (Score)| TMMLU+ (ACC) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|-------------|------------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Reasoning|EN, Chat |EN, Knowledge| | | |0 shot | 0 shot | 0 shot |0 shot | 0 shot | | [**Breeze-7B-Instruct-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) | 7B |6.0 | 42.67 | 39.58 |7.4 | 61.73 | | [GPT-3.5-Turbo](https://openai.com) | |7.1 | 43.56 | 45.14 |7.9 | 67.09 | | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | 7B |6.4 | 45.65 | 34.72 |7.6 | 61.85 | | [Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B |5.6 | 34.95 | 33.33 |7.6 | 59.97 | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | 25.69 |6.0 | 59.45 | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | 23.61 |N/A* | 50.50 | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | 31.25 |N/A* | 42.72 | \* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese. 
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities| AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|----------| --------- | | **Breeze-7B-Instruct-v1_0** | 7.8 | 5.2 | 4.2 | 4.2 | 4.1 | 7.6 | 5.9 | 9.1 | 6.0 | | GPT-3.5-Turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Qwen1.5-7B-Chat | 9 | 5.6 | 4.7 | 2.8 | 3.7 | 8.0 | 8.0 | 9.4 | 6.4 | | Mistral-7B-v0.2-Instruct | 6.9 | 4.6 | 4.3 | 3.3 | 4.4 | 7.2 | 6.2 | 7.8 | 5.6 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | **Breeze-7B-Instruct-v1_0** | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 | | Mistral-7B-v0.2-Instruct | 32.79 | 38.05 | 34.89 | 34.04 | 34.94 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | GPT-3.5-Turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | Qwen1.5-7B-Chat | 41.48 | 51.66 | 44.05 | 45.40 | 45.65 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). | Models | Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | **Breeze-7B-Instruct-v1_0** | 10.74 | 11.1k | | Qwen1.5-7B-Chat | 9.35 | 38.9k | | Yi-6B-Chat | 10.62 | 5.2k | | Mistral-7B-Instruct-v0.2 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | <!---| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |---> <!---## Long-context Performance TBD---> ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Instruction Model model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) # Basemodel model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Base-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) ``` **For Breeze-7B-Instruct**, the structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. 
``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] >>> outputs = model.generate(tokenizer.apply_chat_template(chat, return_tensors="pt"), max_new_tokens=128) >>> print(tokenizer.decode(outputs[0])) ``` ## Citation <!-- ``` @article{breeze7b2024, title={}, author={}, journal={arXiv}, year={2024} } ``` ---> ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
task
[ "SUMMARIZATION" ]
45,604
kyeul611/roberta-large-finetuned-ynat
kyeul611
text-classification
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:klue", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-11T06:13:27Z
2023-11-13T14:03:33+00:00
6
0
--- base_model: klue/bert-base datasets: - klue license: cc-by-sa-4.0 metrics: - f1 tags: - generated_from_trainer model-index: - name: bert-base-finetuned-ynat results: - task: type: text-classification name: Text Classification dataset: name: klue type: klue config: ynat split: validation args: ynat metrics: - type: f1 value: 0.8673393457362918 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-ynat This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.3817 - F1: 0.8673 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 179 | 0.3817 | 0.8673 | | No log | 2.0 | 358 | 0.4065 | 0.8634 | | 0.2194 | 3.0 | 537 | 0.4077 | 0.8624 | | 0.2194 | 4.0 | 716 | 0.4443 | 0.8584 | | 0.2194 | 5.0 | 895 | 0.4795 | 0.8569 | | 0.1477 | 6.0 | 1074 | 0.5159 | 0.8570 | | 0.1477 | 7.0 | 1253 | 0.5445 | 0.8569 | | 0.1477 | 8.0 | 1432 | 0.5711 | 0.8565 | | 0.0849 | 9.0 | 1611 | 0.5913 | 0.8542 | | 0.0849 | 10.0 | 1790 | 0.5945 | 0.8553 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-ynat This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.3817 - F1: 0.8673 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 179 | 0.3817 | 0.8673 | | No log | 2.0 | 358 | 0.4065 | 0.8634 | | 0.2194 | 3.0 | 537 | 0.4077 | 0.8624 | | 0.2194 | 4.0 | 716 | 0.4443 | 0.8584 | | 0.2194 | 5.0 | 895 | 0.4795 | 0.8569 | | 0.1477 | 6.0 | 1074 | 0.5159 | 0.8570 | | 0.1477 | 7.0 | 1253 | 0.5445 | 0.8569 | | 0.1477 | 8.0 | 1432 | 0.5711 | 0.8565 | | 0.0849 | 9.0 | 1611 | 0.5913 | 0.8542 | | 0.0849 | 10.0 | 1790 | 0.5945 | 0.8553 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
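As a hedged inference sketch for the fine-tuned KLUE-YNAT topic classifier described above (the repository id below is taken from this record and may differ from the checkpoint name actually published; the input sentence is purely illustrative):

```python
from transformers import pipeline

# Hypothetical usage: load the fine-tuned classifier and score a Korean news headline.
classifier = pipeline("text-classification", model="kyeul611/roberta-large-finetuned-ynat")
print(classifier("유럽 전기차 판매량이 지난해보다 크게 늘었다"))  # -> [{"label": ..., "score": ...}]
```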
{"base_model": "klue/bert-base", "datasets": ["klue"], "license": "cc-by-sa-4.0", "metrics": ["f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-finetuned-ynat", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "config": "ynat", "split": "validation", "args": "ynat"}, "metrics": [{"type": "f1", "value": 0.8673393457362918, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,605
NYTK/summarization-nol-bart-hungarian
NYTK
summarization
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-01-10T09:22:27+00:00
81
0
--- language: - hu license: apache-2.0 metrics: - rouge tags: - summarization widget: - text: A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert igazgató . 2010-ben alakult ki az új - három Amszterdamból származó nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes vemhesség után - október és március között vakon és szőrtelenül jönnek a világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket , hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést használnak a kolónián belül . --- # Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned NOL corpus (nol.hu) - Segments: 397,343 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 512 - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | 30.18/13.86/22.92 | **46.48/32.40/39.45** | | BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
null
Non_BioNLP
# Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned NOL corpus (nol.hu) - Segments: 397,343 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 512 - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | 30.18/13.86/22.92 | **46.48/32.40/39.45** | | BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ```
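A minimal, illustrative usage sketch (not taken from the card itself): the repository id below is the one this record is published under, and the card notes the model expects HuSpaCy-tokenized input with max_source_length = 512 and max_target_length = 256, so long articles should be truncated accordingly.

```python
from transformers import pipeline

# Hypothetical summarization call with the Hungarian BART checkpoint.
summarizer = pipeline("summarization", model="NYTK/summarization-nol-bart-hungarian")
article = "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ..."  # pre-tokenized Hungarian text
print(summarizer(article, max_length=256, truncation=True)[0]["summary_text"])
```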
{"language": ["hu"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization"], "widget": [{"text": "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert igazgató . 2010-ben alakult ki az új - három Amszterdamból származó nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes vemhesség után - október és március között vakon és szőrtelenül jönnek a világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket , hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést használnak a kolónián belül ."}]}
task
[ "SUMMARIZATION" ]
45,606
Kevinger/setfit-newsapi
Kevinger
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2024-01-21T13:43:04Z
2024-01-21T13:43:18+00:00
5
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Sonos speakers are up to 25 percent off, plus the rest of this week's best tech deals | Engadget - Engadget - text: Judy Blume says her quote about being 'behind' J.K. Rowling was 'taken out of context' as she clarifies support for the trans community - Yahoo Entertainment - text: Mock Draft Monday | Here's who CBS Sports has the Commanders taking in the first round - Washington Commanders - text: GIANT 130-foot asteroid rushing towards Earth TODAY at 42404 kmph, NASA warns - HT Tech - text: Jonathan Majors & Manager Entertainment 360 Part Ways; Actor Facing Domestic Violence Allegations In NYC - Deadline inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8577235772357723 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 4 | <ul><li>'The Super Mario Bros. 
Movie Expected To Pass $1 Billion, Biggest Movie Release This Year - Kotaku'</li><li>'Richard Lewis Has Parkinson’s Disease, Finished With Stand-Up Comedy Career - Deadline'</li><li>"EXCLUSIVE Dame Mary Quant's plans for 'small funeral' near her home - Daily Mail"</li></ul> | | 3 | <ul><li>'GPT-5 not in the works currently: OpenAI CEO Sam Altman - The Economic Times'</li><li>'The 2023 Am Law 100: Ranked by Gross Revenue | The American Lawyer - Law.com'</li><li>"Savings Account or CD: What's Smarter Right Now? - Investopedia"</li></ul> | | 5 | <ul><li>"I used all 2023 flagships — here's why the Galaxy S23 Ultra is my favorite phone - Android Central"</li><li>"Google's AI experts on the future of artificial intelligence | 60 Minutes - CBS News"</li><li>'You can snag a first-gen Apple Watch SE for just $149 right now - The Verge'</li></ul> | | 0 | <ul><li>'Fernando Tatis Jr. to make Padres return - MLB.com'</li><li>'Knicks-Cavaliers Game 3 live updates: Score, news, more from NBA Playoffs - New York Post '</li><li>'Josh Donaldson Likely To Miss Multiple Weeks With Hamstring Strain - MLB Trade Rumors'</li></ul> | | 2 | <ul><li>'Are Fermented Foods Actually Good for You? - Lifehacker'</li><li>'ADHD medication | New study says more students self-medicating with ADHD medication - WTVD-TV'</li><li>'Mom With Microscopic Colitis Had Diarrhea up to 40 Times a Day - Insider'</li></ul> | | 1 | <ul><li>'Creating Artificial Avians: A Novel Neural Network Generates Realistic Bird Pictures from Text using Common Sense - Neuroscience News'</li><li>'Consciousness begins with feeling, not thinking | Antonio Damasio, Hanna Damasio, - IAI'</li><li>'The Myth of Objective Data - The MIT Press Reader'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8577 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Kevinger/setfit-newsapi") # Run inference preds = model("GIANT 130-foot asteroid rushing towards Earth TODAY at 42404 kmph, NASA warns - HT Tech") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 4 | 9.1771 | 22 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 16 | | 1 | 16 | | 2 | 16 | | 3 | 16 | | 4 | 16 | | 5 | 16 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0021 | 1 | 0.2926 | - | | 0.1042 | 50 | 0.0446 | - | | 0.2083 | 100 | 0.0023 | - | | 0.3125 | 150 | 0.0011 | - | | 0.4167 | 200 | 0.001 | - | | 0.5208 | 250 | 0.0007 | - | | 0.625 | 300 | 0.0007 | - | | 0.7292 | 350 | 0.0009 | - | | 0.8333 | 400 | 0.0075 | - | | 0.9375 | 450 | 0.0006 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 4 | <ul><li>'The Super Mario Bros. Movie Expected To Pass $1 Billion, Biggest Movie Release This Year - Kotaku'</li><li>'Richard Lewis Has Parkinson’s Disease, Finished With Stand-Up Comedy Career - Deadline'</li><li>"EXCLUSIVE Dame Mary Quant's plans for 'small funeral' near her home - Daily Mail"</li></ul> | | 3 | <ul><li>'GPT-5 not in the works currently: OpenAI CEO Sam Altman - The Economic Times'</li><li>'The 2023 Am Law 100: Ranked by Gross Revenue | The American Lawyer - Law.com'</li><li>"Savings Account or CD: What's Smarter Right Now? - Investopedia"</li></ul> | | 5 | <ul><li>"I used all 2023 flagships — here's why the Galaxy S23 Ultra is my favorite phone - Android Central"</li><li>"Google's AI experts on the future of artificial intelligence | 60 Minutes - CBS News"</li><li>'You can snag a first-gen Apple Watch SE for just $149 right now - The Verge'</li></ul> | | 0 | <ul><li>'Fernando Tatis Jr. to make Padres return - MLB.com'</li><li>'Knicks-Cavaliers Game 3 live updates: Score, news, more from NBA Playoffs - New York Post '</li><li>'Josh Donaldson Likely To Miss Multiple Weeks With Hamstring Strain - MLB Trade Rumors'</li></ul> | | 2 | <ul><li>'Are Fermented Foods Actually Good for You? 
- Lifehacker'</li><li>'ADHD medication | New study says more students self-medicating with ADHD medication - WTVD-TV'</li><li>'Mom With Microscopic Colitis Had Diarrhea up to 40 Times a Day - Insider'</li></ul> | | 1 | <ul><li>'Creating Artificial Avians: A Novel Neural Network Generates Realistic Bird Pictures from Text using Common Sense - Neuroscience News'</li><li>'Consciousness begins with feeling, not thinking | Antonio Damasio, Hanna Damasio, - IAI'</li><li>'The Myth of Objective Data - The MIT Press Reader'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8577 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Kevinger/setfit-newsapi") # Run inference preds = model("GIANT 130-foot asteroid rushing towards Earth TODAY at 42404 kmph, NASA warns - HT Tech") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 4 | 9.1771 | 22 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 16 | | 1 | 16 | | 2 | 16 | | 3 | 16 | | 4 | 16 | | 5 | 16 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0021 | 1 | 0.2926 | - | | 0.1042 | 50 | 0.0446 | - | | 0.2083 | 100 | 0.0023 | - | | 0.3125 | 150 | 0.0011 | - | | 0.4167 | 200 | 0.001 | - | | 0.5208 | 250 | 0.0007 | - | | 0.625 | 300 | 0.0007 | - | | 0.7292 | 350 | 0.0009 | - | | 0.8333 | 400 | 0.0075 | - | | 0.9375 | 450 | 0.0006 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors 
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Sonos speakers are up to 25 percent off, plus the rest of this week's best tech deals | Engadget - Engadget"}, {"text": "Judy Blume says her quote about being 'behind' J.K. Rowling was 'taken out of context' as she clarifies support for the trans community - Yahoo Entertainment"}, {"text": "Mock Draft Monday | Here's who CBS Sports has the Commanders taking in the first round - Washington Commanders"}, {"text": "GIANT 130-foot asteroid rushing towards Earth TODAY at 42404 kmph, NASA warns - HT Tech"}, {"text": "Jonathan Majors & Manager Entertainment 360 Part Ways; Actor Facing Domestic Violence Allegations In NYC - Deadline"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8577235772357723, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,607
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_rte_256
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-30T06:16:10Z
2023-01-30T06:18:40+00:00
134
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: mobilebert_sa_GLUE_Experiment_logit_kd_rte_256 results: - task: type: text-classification name: Text Classification dataset: name: GLUE RTE type: glue config: rte split: validation args: rte metrics: - type: accuracy value: 0.5270758122743683 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_rte_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.3915 - Accuracy: 0.5271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4089 | 1.0 | 20 | 0.3932 | 0.5271 | | 0.4081 | 2.0 | 40 | 0.3915 | 0.5271 | | 0.4075 | 3.0 | 60 | 0.3918 | 0.5271 | | 0.4074 | 4.0 | 80 | 0.3925 | 0.5271 | | 0.4073 | 5.0 | 100 | 0.3924 | 0.5271 | | 0.4067 | 6.0 | 120 | 0.3918 | 0.5271 | | 0.4036 | 7.0 | 140 | 0.3962 | 0.5126 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_rte_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.3915 - Accuracy: 0.5271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4089 | 1.0 | 20 | 0.3932 | 0.5271 | | 0.4081 | 2.0 | 40 | 0.3915 | 0.5271 | | 0.4075 | 3.0 | 60 | 0.3918 | 0.5271 | | 0.4074 | 4.0 | 80 | 0.3925 | 0.5271 | | 0.4073 | 5.0 | 100 | 0.3924 | 0.5271 | | 0.4067 | 6.0 | 120 | 0.3918 | 0.5271 | | 0.4036 | 7.0 | 140 | 0.3962 | 0.5126 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
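The card above records the fine-tuning setup but no inference example. A minimal sentence-pair (RTE-style) sketch with the standard Auto classes follows; the premise/hypothesis pair is an invented toy example and the label names are read from the model config rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_rte_256"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair task: does the premise entail the hypothesis?
premise = "A new study on sleep was published on Monday."
hypothesis = "A study was published this week."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred, str(pred)))
```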
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_logit_kd_rte_256", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "config": "rte", "split": "validation", "args": "rte"}, "metrics": [{"type": "accuracy", "value": 0.5270758122743683, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,608
mor40/BulBERT-xnli-2epochs
mor40
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:bgglue", "base_model:mor40/BulBERT-chitanka-model", "base_model:finetune:mor40/BulBERT-chitanka-model", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-28T15:21:08Z
2023-10-28T19:31:22+00:00
5
0
--- base_model: mor40/BulBERT-chitanka-model datasets: - bgglue metrics: - accuracy tags: - generated_from_trainer model-index: - name: BulBERT-xnli-2epochs results: - task: type: text-classification name: Text Classification dataset: name: bgglue type: bgglue config: xnlibg split: validation args: xnlibg metrics: - type: accuracy value: 0.7016064257028113 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BulBERT-xnli-2epochs This model is a fine-tuned version of [mor40/BulBERT-chitanka-model](https://huggingface.co/mor40/BulBERT-chitanka-model) on the bgglue dataset. It achieves the following results on the evaluation set: - Loss: 0.7013 - Accuracy: 0.7016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7543 | 1.0 | 8182 | 0.7510 | 0.6731 | | 0.6804 | 2.0 | 16364 | 0.7013 | 0.7016 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BulBERT-xnli-2epochs This model is a fine-tuned version of [mor40/BulBERT-chitanka-model](https://huggingface.co/mor40/BulBERT-chitanka-model) on the bgglue dataset. It achieves the following results on the evaluation set: - Loss: 0.7013 - Accuracy: 0.7016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7543 | 1.0 | 8182 | 0.7510 | 0.6731 | | 0.6804 | 2.0 | 16364 | 0.7013 | 0.7016 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
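As with the other fine-tuned classifiers in this dump, the card stops at training details. A minimal inference sketch follows; it assumes a recent Transformers version whose text-classification pipeline accepts a premise/hypothesis pair as a dict, and the Bulgarian sentence pair is an invented toy example.

```python
from transformers import pipeline

# XNLI-style natural language inference for Bulgarian: recent versions of the
# text-classification pipeline accept a dict with "text" and "text_pair" keys.
nli = pipeline("text-classification", model="mor40/BulBERT-xnli-2epochs")

example = {
    "text": "Котката спи на дивана.",      # premise: "The cat is sleeping on the sofa."
    "text_pair": "В стаята има животно.",  # hypothesis: "There is an animal in the room."
}
print(nli(example))
```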
{"base_model": "mor40/BulBERT-chitanka-model", "datasets": ["bgglue"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BulBERT-xnli-2epochs", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "bgglue", "type": "bgglue", "config": "xnlibg", "split": "validation", "args": "xnlibg"}, "metrics": [{"type": "accuracy", "value": 0.7016064257028113, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,609
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
mrm8488
text-classification
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "financial", "stocks", "sentiment", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-23T18:16:46Z
2024-04-23T18:51:37+00:00
25
2
--- datasets: - financial_phrasebank license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer - financial - stocks - sentiment thumbnail: https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png widget: - text: Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 . - text: Dunder mifflin Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 . model-index: - name: distilRoberta-financial-sentiment results: - task: type: text-classification name: Text Classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_allagree metrics: - type: accuracy value: 0.9923008849557522 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png" alt="logo"> </div> # DistilRoberta-financial-sentiment This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.1116 - Accuracy: **0.99**23 ## Base Model description This model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is case-sensitive: it makes a difference between English and English. The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 125M parameters for RoBERTa-base). On average DistilRoBERTa is twice as fast as Roberta-base. ## Training Data Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 255 | 0.1670 | 0.9646 | | 0.209 | 2.0 | 510 | 0.2290 | 0.9558 | | 0.209 | 3.0 | 765 | 0.2044 | 0.9558 | | 0.0326 | 4.0 | 1020 | 0.1116 | 0.9823 | | 0.0326 | 5.0 | 1275 | 0.1127 | 0.9779 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png" alt="logo"> </div> # DistilRoberta-financial-sentiment This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.1116 - Accuracy: **0.99**23 ## Base Model description This model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is case-sensitive: it makes a difference between English and English. The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 125M parameters for RoBERTa-base). On average DistilRoBERTa is twice as fast as Roberta-base. ## Training Data Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 255 | 0.1670 | 0.9646 | | 0.209 | 2.0 | 510 | 0.2290 | 0.9558 | | 0.209 | 3.0 | 765 | 0.2044 | 0.9558 | | 0.0326 | 4.0 | 1020 | 0.1116 | 0.9823 | | 0.0326 | 5.0 | 1275 | 0.1127 | 0.9779 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
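The card above describes the data and training but omits a usage snippet. A minimal sketch follows, reusing the widget sentence from the card as input; the repository namespace is taken from the card's own URLs (mrm8488), so verify the exact name on the Hub before relying on it.

```python
from transformers import pipeline

# Namespace taken from the card's own URLs (mrm8488); confirm the exact
# repository name on the Hub before running.
clf = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
)
# Widget sentence from the card; returns a label (e.g. negative/neutral/positive)
# with a confidence score.
print(clf("Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."))
```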
{"datasets": ["financial_phrasebank"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer", "financial", "stocks", "sentiment"], "thumbnail": "https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png", "widget": [{"text": "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."}, {"text": "Dunder mifflin Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."}], "model-index": [{"name": "distilRoberta-financial-sentiment", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "financial_phrasebank", "type": "financial_phrasebank", "args": "sentences_allagree"}, "metrics": [{"type": "accuracy", "value": 0.9923008849557522, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,610
NorwAI/NorwAI-Mistral-7B-instruct-overfitting
NorwAI
text-generation
[ "transformers", "pytorch", "safetensors", "gguf", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-11T22:41:01Z
2024-07-01T09:13:36+00:00
0
17
--- extra_gated_prompt: "### NorLLM License\nTerms and Conditions for Use, Reproduction,\ \ and Distribution\n1. Definitions\n\n\"License\" shall mean the terms and conditions\ \ for use, reproduction, and distribution as defined by Sections 1 through 9 of\ \ this document.\n\n\"Licensor\" shall mean NTNU.\n\n\"Legal Entity\" shall mean\ \ the union of the acting entity and all other entities that control, are controlled\ \ by, or are under common control with that entity. For the purposes of this definition,\ \ \"control\" means (i) the power, direct or indirect, to cause the direction or\ \ management of such entity, whether by contract or otherwise, or (ii) ownership\ \ of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial\ \ ownership of such entity.\n\n\"You\" (or \"Your\") shall mean an individual or\ \ Legal Entity exercising permissions granted by this License.\n\n\"Source\" form\ \ shall mean the preferred form for making modifications, including but not limited\ \ to software source code, documentation source, and configuration files.\n\n\"\ Object\" form shall mean any form resulting from mechanical transformation or translation\ \ of a Source form, including but not limited to compiled object code, generated\ \ documentation, and conversions to other media types.\n\n\"Work\" shall mean the\ \ work of authorship, whether in Source or Object form, made available under the\ \ License, as indicated by a copyright notice that is included in or attached to\ \ the work (an example is provided in the Appendix below).\n\n\"Derivative Works\"\ \ shall mean any work, whether in Source or Object form, that is based on (or derived\ \ from) the Work and for which the editorial revisions, annotations, elaborations,\ \ or other modifications represent, as a whole, an original work of authorship.\ \ For the purposes of this License, Derivative Works shall not include works that\ \ remain separable from, or merely link (or bind by name) to the interfaces of,\ \ the Work and Derivative Works thereof.\n\n\"Contribution\" shall mean any work\ \ of authorship, including the original version of the Work and any modifications\ \ or additions to that Work or Derivative Works thereof, that is intentionally submitted\ \ to Licensor for inclusion in the Work by the copyright owner or by an individual\ \ or Legal Entity authorized to submit on behalf of the copyright owner. For the\ \ purposes of this definition, \"submitted\" means any form of electronic, verbal,\ \ or written communication sent to the Licensor or its representatives, including\ \ but not limited to communication on electronic mailing lists, source code control\ \ systems, and issue tracking systems that are managed by, or on behalf of, the\ \ Licensor for the purpose of discussing and improving the Work, but excluding communication\ \ that is conspicuously marked or otherwise designated in writing by the copyright\ \ owner as \"Not a Contribution.\"\n\n\"Contributor\" shall mean Licensor and any\ \ individual or Legal Entity on behalf of whom a Contribution has been received\ \ by Licensor and subsequently incorporated within the Work.\n \n2. 
Grant of Copyright\ \ License\n\nSubject to the terms and conditions of this License, each Contributor\ \ hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,\ \ irrevocable copyright license to reproduce, prepare Derivative Works of, publicly\ \ display, publicly perform, sublicense, and distribute the Work and such Derivative\ \ Works in Source or Object form.\n \n3. Grant of Patent License\n\nSubject to the\ \ terms and conditions of this License, each Contributor hereby grants to You a\ \ perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except\ \ as stated in this section) patent license to make, have made, use, offer to sell,\ \ sell, import, and otherwise transfer the Work, where such license applies only\ \ to those patent claims licensable by such Contributor that are necessarily infringed\ \ by their Contribution(s) alone or by combination of their Contribution(s) with\ \ the Work to which such Contribution(s) was submitted. If You institute patent\ \ litigation against any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Work or a Contribution incorporated within the Work constitutes\ \ direct or contributory patent infringement, then any patent licenses granted to\ \ You under this License for that Work shall terminate as of the date such litigation\ \ is filed.\n \n4. Redistribution\n\nYou may reproduce and distribute copies of\ \ the Work or Derivative Works thereof in any medium, with or without modifications,\ \ and in Source or Object form, provided that You meet the following conditions:\n\ \n4.1 Use Restrictions\n\nThe Work or Derivative Works thereof may be used free\ \ of charge within the geographic extent defined in clause 4.2. The use is limited\ \ to the Your own purposes, but not for resale and not for purposes that may infringe\ \ on or negatively impact health, safety, environmental protection, and/or fundamental\ \ rights such as the right to human dignity, respect for private and family life,\ \ protection of personal data, freedom of expression and information, freedom of\ \ assembly and of association, and non-discrimination, consumer protection and the\ \ specific rights protecting children.\n\n4.2 Geographical Restriction\n\nOnly\ \ organizations having residence or registered main office in the Nordic countries\ \ (Denmark, Norway, Sweden, Finland, and Iceland) are permitted to use the Work\ \ or Derivative Works thereof without additional consent, provided they comply with\ \ the other terms of this License.\n\n4.3 Attribution\n\nYou must give any other\ \ recipients of the Work or Derivative Works a copy of this License; and\n\nYou\ \ must cause any modified files to carry prominent notices stating that You changed\ \ the files; and\n\nYou must retain, in the Source form of any Derivative Works\ \ that You distribute, all copyright, patent, trademark, and attribution notices\ \ from the Source form of the Work, excluding those notices that do not pertain\ \ to any part of the Derivative Works; and\n\nIf the Work includes a \"NOTICE\"\ \ text file as part of its distribution, then any Derivative Works that You distribute\ \ must include a readable copy of the attribution notices contained within such\ \ NOTICE file, excluding those notices that do not pertain to any part of the Derivative\ \ Works, in at least one of the following places: within a NOTICE text file distributed\ \ as part of the Derivative Works; within the Source form or documentation, if provided\ 
\ along with the Derivative Works; or, within a display generated by the Derivative\ \ Works, if and wherever such third-party notices normally appear. The contents\ \ of the NOTICE file are for informational purposes only and do not modify the License.\ \ You may add Your own attribution notices within Derivative Works that You distribute,\ \ alongside or as an addendum to the NOTICE text from the Work, provided that such\ \ additional attribution notices cannot be construed as modifying the License.\n\ \nYou may add Your own copyright statement to Your modifications and may provide\ \ additional or different license terms and conditions for use, reproduction, or\ \ distribution of Your modifications, or for any such Derivative Works as a whole,\ \ provided Your use, reproduction, and distribution of the Work otherwise complies\ \ with the conditions stated in this License.\n\n5. Submission of Contributions\n\ \nUnless You explicitly state otherwise, any Contribution intentionally submitted\ \ for inclusion in the Work by You to the Licensor shall be under the terms and\ \ conditions of this License, without any additional terms or conditions. Notwithstanding\ \ the above, nothing herein shall supersede or modify the terms of any separate\ \ license agreement you may have executed with Licensor regarding such Contributions.\n\ \ \n6. Trademarks\n\nThis License does not grant permission to use the trade names,\ \ trademarks, service marks, or product names of the Licensor, except as required\ \ for describing the origin of the Work and reproducing the content of the NOTICE\ \ file.\n \n7. Disclaimer of Warranty\n\nUnless required by applicable law or agreed\ \ to in writing, Licensor provides the Work (and each Contributor provides its Contributions)\ \ on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\ \ or implied, including, without limitation, any warranties or conditions of TITLE,\ \ NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are\ \ solely responsible for determining the appropriateness of using or redistributing\ \ the Work and assume any risks associated with Your exercise of permissions under\ \ this License.\n \n8. Limitation of Liability\n\nIn no event and under no legal\ \ theory, whether in tort (including negligence), contract, or otherwise, unless\ \ required by applicable law (such as deliberate and grossly negligent acts) or\ \ agreed to in writing, shall any Contributor be liable to You for damages, including\ \ any direct, indirect, special, incidental, or consequential damages of any character\ \ arising as a result of this License or out of the use or inability to use the\ \ Work (including but not limited to damages for loss of goodwill, work stoppage,\ \ computer failure or malfunction, or any and all other commercial damages or losses),\ \ even if such Contributor has been advised of the possibility of such damages.\n\ \ \n9. Accepting Warranty or Additional Liability\n\nWhile redistributing the Work\ \ or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance\ \ of support, warranty, indemnity, or other liability obligations and/or rights\ \ consistent with this License. 
However, in accepting such obligations, You may\ \ act only on Your own behalf and on Your sole responsibility, not on behalf of\ \ any other Contributor, and only if You agree to indemnify, defend, and hold each\ \ Contributor harmless for any liability incurred by, or claims asserted against,\ \ such Contributor by reason of your accepting any such warranty or additional liability.\n\ 10. Accessibility of Nordic Students\n\nStudents at universities in the Nordic countries\ \ can gain access to the models by stating their university as “Organization” and\ \ their student email address under “Email address at work”.\n\n \nEND OF TERMS\ \ AND CONDITIONS\nContact people for technical questions: Lemei Zhang ([email protected]),\ \ Peng Liu ([email protected])\nContact people for license questions: Jon Atle Gulla\ \ ([email protected])" extra_gated_fields: Name: text Organization: text Country: country Email address at work: text I want to use this model for: type: select options: - Research - Education - Business - label: Other value: other I have read and accepted the conditions of the NorLLM license: checkbox extra_gated_heading: Acknowledge license to accept the repository extra_gated_description: Our team may take 1-2 days to process your request extra_gated_button_content: Acknowledge license --- <mark>Please note that access is limited to students, companies, and organizations from Nordic countries. Kindly provide your work email or student email to access the models. Thank you for your understanding.</mark> <mark>Note: We noticed an overfitting problem on this version, and uploaded a new version of NorwAI-Mistral-7B-instruct and NorwAI-Mixtral-8x7B-instruct models. This model will be removed from huggingface in August.</mark>
null
Non_BioNLP
<mark>Please note that access is limited to students, companies, and organizations from Nordic countries. Kindly provide your work email or student email to access the models. Thank you for your understanding.</mark> <mark>Note: We noticed an overfitting problem on this version, and uploaded a new version of NorwAI-Mistral-7B-instruct and NorwAI-Mixtral-8x7B-instruct models. This model will be removed from huggingface in August.</mark>
{"extra_gated_prompt": "### NorLLM License\nTerms and Conditions for Use, Reproduction, and Distribution\n1. Definitions\n\n\"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.\n\n\"Licensor\" shall mean NTNU.\n\n\"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.\n\n\"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.\n\n\"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.\n\n\"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.\n\n\"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).\n\n\"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.\n\n\"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"\n\n\"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.\n \n2. Grant of Copyright License\n\nSubject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.\n \n3. 
Grant of Patent License\n\nSubject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.\n \n4. Redistribution\n\nYou may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:\n\n4.1 Use Restrictions\n\nThe Work or Derivative Works thereof may be used free of charge within the geographic extent defined in clause 4.2. The use is limited to the Your own purposes, but not for resale and not for purposes that may infringe on or negatively impact health, safety, environmental protection, and/or fundamental rights such as the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection and the specific rights protecting children.\n\n4.2 Geographical Restriction\n\nOnly organizations having residence or registered main office in the Nordic countries (Denmark, Norway, Sweden, Finland, and Iceland) are permitted to use the Work or Derivative Works thereof without additional consent, provided they comply with the other terms of this License.\n\n4.3 Attribution\n\nYou must give any other recipients of the Work or Derivative Works a copy of this License; and\n\nYou must cause any modified files to carry prominent notices stating that You changed the files; and\n\nYou must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and\n\nIf the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.\n\nYou may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.\n\n5. Submission of Contributions\n\nUnless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.\n \n6. Trademarks\n\nThis License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for describing the origin of the Work and reproducing the content of the NOTICE file.\n \n7. Disclaimer of Warranty\n\nUnless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.\n \n8. Limitation of Liability\n\nIn no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.\n \n9. Accepting Warranty or Additional Liability\n\nWhile redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.\n10. 
Accessibility of Nordic Students\n\nStudents at universities in the Nordic countries can gain access to the models by stating their university as “Organization” and their student email address under “Email address at work”.\n\n \nEND OF TERMS AND CONDITIONS\nContact people for technical questions: Lemei Zhang ([email protected]), Peng Liu ([email protected])\nContact people for license questions: Jon Atle Gulla ([email protected])", "extra_gated_fields": {"Name": "text", "Organization": "text", "Country": "country", "Email address at work": "text", "I want to use this model for": {"type": "select", "options": ["Research", "Education", "Business", {"label": "Other", "value": "other"}]}, "I have read and accepted the conditions of the NorLLM license": "checkbox"}, "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_description": "Our team may take 1-2 days to process your request", "extra_gated_button_content": "Acknowledge license"}
task
[ "TRANSLATION" ]
45,611
hopkins/eng-kor-common.simcse.roberta-large
hopkins
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-06T18:33:09Z
2023-07-06T18:50:41+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: eng-kor-common.simcse.roberta-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9976 - Bleu: 7.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-kor-common.simcse.roberta-large This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9976 - Bleu: 7.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
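A minimal inference sketch (not part of the original card): it assumes the checkpoint keeps the mBART-50 architecture and tokenizer of the base model named above, so the standard mBART-50 language codes (`en_XX` for English, `ko_KR` for Korean) apply.

```python
# Hypothetical usage sketch — the card does not include inference code.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-kor-common.simcse.roberta-large"  # repo id of this card
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # target language: Korean
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```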
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-kor-common.simcse.roberta-large", "results": []}]}
task
[ "TRANSLATION" ]
45,612
dippatel11/autotrain-bart-2332573622
dippatel11
summarization
[ "transformers", "pytorch", "autotrain", "summarization", "unk", "dataset:dippatel11/autotrain-data-bart", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
2022-12-04T07:53:42Z
2022-12-04T08:02:59+00:00
11
0
--- datasets: - dippatel11/autotrain-data-bart language: - unk tags: - autotrain - summarization widget: - text: I love AutoTrain 🤗 co2_eq_emissions: emissions: 17.308721714114615 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 2332573622 - CO2 Emissions (in grams): 17.3087 ## Validation Metrics - Loss: 1.460 - Rouge1: 40.163 - Rouge2: 20.060 - RougeL: 30.916 - RougeLsum: 37.538 - Gen Len: 60.370 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/dippatel11/autotrain-bart-2332573622 ```
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 2332573622 - CO2 Emissions (in grams): 17.3087 ## Validation Metrics - Loss: 1.460 - Rouge1: 40.163 - Rouge2: 20.060 - RougeL: 30.916 - RougeLsum: 37.538 - Gen Len: 60.370 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/dippatel11/autotrain-bart-2332573622 ```
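For reference, the same Inference API call can be made from Python; this sketch simply mirrors the cURL request above (the endpoint URL and token placeholder are copied verbatim from the card).

```python
# Python equivalent of the cURL example above (sketch; requires a valid HF API key).
import requests

API_URL = "https://api-inference.huggingface.co/dippatel11/autotrain-bart-2332573622"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())  # expected: a list containing a "summary_text" field
```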
{"datasets": ["dippatel11/autotrain-data-bart"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 17.308721714114615}}
task
[ "SUMMARIZATION" ]
45,613
Ojimi/anime-kawai-diffusion
Ojimi
text-to-image
[ "diffusers", "safetensors", "text-to-image", "anime", "pytorch", "art", "stable diffusion", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
2023-02-09T15:30:12Z
2023-07-14T11:39:06+00:00
3,364
154
--- language: - en library_name: diffusers license: creativeml-openrail-m pipeline_tag: text-to-image tags: - text-to-image - anime - pytorch - diffusers - art - stable diffusion --- ![](assets/preview.png) ## Introduction: - I don't know how to introduce it, but it's been renamed several times. It is an open, free to use and fine-tune AI-art model. It was created by my curiosity. Hope you will like it. Have fun! (●'◡'●). ## Use: - For 🧨Diffusers: ```python from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("Ojimi/anime-kawai-diffusion") pipe = pipe.to("cuda") prompt = "1girl, animal ears, long hair, solo, cat ears, choker, bare shoulders, red eyes, fang, looking at viewer, animal ear fluff, upper body, black hair, blush, closed mouth, off shoulder, bangs, bow, collarbone" image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0] ``` ## Tips: - The `masterpiece` and `best quality` tags are not necessary, as they sometimes lead to contradictory results, but if the image is distorted or discolored, add them. - The CFG scale should be 7.5 and the step count 28 for the best quality and best performance. - Use a sample photo for your idea. `Interrogate DeepBooru` and change the prompts to suit what you want. - You should use it as a supportive tool for creating works of art, and not rely on it completely. - The Clip skip should be 2. ## **Limitations:** - The drawing is hard, not soft. - Loss of detail, errors, bad human-like (six-fingered hand) details, deformation, blurring, and unclear images are inevitable. - ⚠️Content may not be appropriate for all ages: As it is trained on data that includes adult content, the generated images may contain content not suitable for children (depending on your country there will be a specific regulation about it). If you do not want adult content to appear, make sure you have additional safety measures in place, such as adding "nsfw" to the negative prompt. - The results generated by the model are considered impressive. But unfortunately, it currently only supports the English language; for other languages, consider using third-party translation programs. - The model is trained on the `Danbooru` and `Nai` tagging system, so long prompts may give poor results. - My amount of money: 0 USD =((. ![](money-wallet.gif) ## **Desires:** As it is a version made only by myself and my small associates, the model will not be perfect and may differ from what people expect. Any contributions from everyone will be respected. Want to support me? Thank you, please help me make it better. ❤️ ## Special Thanks: This wouldn't have happened if they hadn't made a breakthrough. - [Runwayml](https://huggingface.co/runwayml/): Base model. - [CompVis](https://github.com/CompVis/): VAE Trainer. - stabilityai: [stabilityai/sd-vae-ft-mse-original · Hugging Face](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) - [d8ahazard](https://github.com/d8ahazard/.sd_dreambooth_extension) : Dreambooth. - [Automatic1111](https://github.com/AUTOMATIC1111/) : Web UI. - [Mikubill](https://github.com/Mikubill/): Where my ideas started. - Chat-GPT: Help me do crazy things that I thought I would never do. - Novel AI, Anything Model, Abyss Orange Model: Dataset images. An AI made me thousands of pictures without worrying about copyright or dispute. - Danbooru: Help me write the correct tag. - My friend and others: Get quality images. 
- And You 🫵❤️ ## Copyright: This license allows anyone to copy and modify the model, but please follow the terms of the CreativeML Open RAIL-M. You can learn more about the CreativeML Open RAIL-M [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license). If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights of the model will still be valid. All AI-generated images are yours, you can do whatever you want, but please obey the laws of your country. We will not be responsible for any problems you cause. We allow you to merge with another model, but if you share that merged model, don't forget to add me to the credits. Don't forget me. # Have fun with your waifu! (●'◡'●) Do you want to sponsor computing resources for us? Thank you. Please sponsor me on Ko-fi at https://ko-fi.com/projectk.
null
Non_BioNLP
![](assets/preview.png) ## Introduction: - I don't know how to introduce it, but it's been renamed several times. It is an open, free to use and fine-tune AI-art model. It was created by my curiosity. Hope you will like it. Have fun! (●'◡'●). ## Use: - For 🧨Diffusers: ```python from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("Ojimi/anime-kawai-diffusion") pipe = pipe.to("cuda") prompt = "1girl, animal ears, long hair, solo, cat ears, choker, bare shoulders, red eyes, fang, looking at viewer, animal ear fluff, upper body, black hair, blush, closed mouth, off shoulder, bangs, bow, collarbone" image = pipe(prompt, negative_prompt="lowres, bad anatomy").images[0] ``` ## Tips: - The `masterpiece` and `best quality` tags are not necessary, as they sometimes lead to contradictory results, but if the image is distorted or discolored, add them. - The CFG scale should be 7.5 and the step count 28 for the best quality and best performance. - Use a sample photo for your idea. `Interrogate DeepBooru` and change the prompts to suit what you want. - You should use it as a supportive tool for creating works of art, and not rely on it completely. - The Clip skip should be 2. ## **Limitations:** - The drawing is hard, not soft. - Loss of detail, errors, bad human-like (six-fingered hand) details, deformation, blurring, and unclear images are inevitable. - ⚠️Content may not be appropriate for all ages: As it is trained on data that includes adult content, the generated images may contain content not suitable for children (depending on your country there will be a specific regulation about it). If you do not want adult content to appear, make sure you have additional safety measures in place, such as adding "nsfw" to the negative prompt. - The results generated by the model are considered impressive. But unfortunately, it currently only supports the English language; for other languages, consider using third-party translation programs. - The model is trained on the `Danbooru` and `Nai` tagging system, so long prompts may give poor results. - My amount of money: 0 USD =((. ![](money-wallet.gif) ## **Desires:** As it is a version made only by myself and my small associates, the model will not be perfect and may differ from what people expect. Any contributions from everyone will be respected. Want to support me? Thank you, please help me make it better. ❤️ ## Special Thanks: This wouldn't have happened if they hadn't made a breakthrough. - [Runwayml](https://huggingface.co/runwayml/): Base model. - [CompVis](https://github.com/CompVis/): VAE Trainer. - stabilityai: [stabilityai/sd-vae-ft-mse-original · Hugging Face](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) - [d8ahazard](https://github.com/d8ahazard/.sd_dreambooth_extension) : Dreambooth. - [Automatic1111](https://github.com/AUTOMATIC1111/) : Web UI. - [Mikubill](https://github.com/Mikubill/): Where my ideas started. - Chat-GPT: Help me do crazy things that I thought I would never do. - Novel AI, Anything Model, Abyss Orange Model: Dataset images. An AI made me thousands of pictures without worrying about copyright or dispute. - Danbooru: Help me write the correct tag. - My friend and others: Get quality images. - And You 🫵❤️ ## Copyright: This license allows anyone to copy and modify the model, but please follow the terms of the CreativeML Open RAIL-M. You can learn more about the CreativeML Open RAIL-M [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license). 
If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights of the model will still be valid. All AI-generated images are yours, you can do whatever you want, but please obey the laws of your country. We will not be responsible for any problems you cause. We allow you to merge with another model, but if you share that merged model, don't forget to add me to the credits. Don't forget me. # Have fun with your waifu! (●'◡'●) Do you want to sponsor computing resources for us? Thank you. Please sponsor me on Ko-fi at https://ko-fi.com/projectk.
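The tips above map onto 🧨Diffusers arguments roughly as follows; this is a sketch, not part of the original card (`guidance_scale` corresponds to the CFG scale, `num_inference_steps` to the step count, and "Clip skip" is a web-UI setting with no direct argument here).

```python
# Sketch combining the card's usage snippet with the recommended settings above.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Ojimi/anime-kawai-diffusion")
pipe = pipe.to("cuda")

prompt = "1girl, cat ears, long hair, solo, choker, bare shoulders, red eyes, looking at viewer"
image = pipe(
    prompt,
    negative_prompt="nsfw, lowres, bad anatomy",  # "nsfw" added per the safety note above
    guidance_scale=7.5,       # recommended CFG scale
    num_inference_steps=28,   # recommended step count
).images[0]
image.save("kawai.png")
```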
{"language": ["en"], "library_name": "diffusers", "license": "creativeml-openrail-m", "pipeline_tag": "text-to-image", "tags": ["text-to-image", "anime", "pytorch", "diffusers", "art", "stable diffusion"]}
task
[ "TRANSLATION" ]
45,614
anirudh21/distilbert-base-uncased-finetuned-sst2
anirudh21
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-01-12T14:17:06+00:00
123
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-sst2 results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: sst2 metrics: - type: accuracy value: 0.908256880733945 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.188 | 1.0 | 4210 | 0.3127 | 0.9037 | | 0.1299 | 2.0 | 8420 | 0.3887 | 0.9048 | | 0.0845 | 3.0 | 12630 | 0.4028 | 0.9083 | | 0.0691 | 4.0 | 16840 | 0.3924 | 0.9071 | | 0.052 | 5.0 | 21050 | 0.5047 | 0.9002 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.188 | 1.0 | 4210 | 0.3127 | 0.9037 | | 0.1299 | 2.0 | 8420 | 0.3887 | 0.9048 | | 0.0845 | 3.0 | 12630 | 0.4028 | 0.9083 | | 0.0691 | 4.0 | 16840 | 0.3924 | 0.9071 | | 0.052 | 5.0 | 21050 | 0.5047 | 0.9002 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
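A possible inference snippet (not in the original card); note that the returned label names depend on the classifier head configuration and may be generic (`LABEL_0`/`LABEL_1`) rather than `negative`/`positive`.

```python
# Hypothetical usage sketch for the fine-tuned SST-2 classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anirudh21/distilbert-base-uncased-finetuned-sst2",
)
print(classifier("This movie was surprisingly good."))
```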
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.908256880733945, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,615
mor40/BulBERT-cinexio-10epochs
mor40
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:bgglue", "base_model:mor40/BulBERT-finetuned-cinexio", "base_model:finetune:mor40/BulBERT-finetuned-cinexio", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-19T17:50:07Z
2023-10-19T17:55:57+00:00
117
0
--- base_model: mor40/BulBERT-finetuned-cinexio datasets: - bgglue metrics: - accuracy tags: - generated_from_trainer model-index: - name: BulBERT-cinexio-10epochs results: - task: type: text-classification name: Text Classification dataset: name: bgglue type: bgglue config: cinexio split: validation args: cinexio metrics: - type: accuracy value: 0.6288532675709001 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BulBERT-cinexio-10epochs This model is a fine-tuned version of [mor40/BulBERT-finetuned-cinexio](https://huggingface.co/mor40/BulBERT-finetuned-cinexio) on the bgglue dataset. It achieves the following results on the evaluation set: - Loss: 1.1866 - Accuracy: 0.6289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 64 | 1.3334 | 0.5746 | | No log | 2.0 | 128 | 1.2053 | 0.6017 | | No log | 3.0 | 192 | 1.1826 | 0.6227 | | No log | 4.0 | 256 | 1.1826 | 0.6252 | | No log | 5.0 | 320 | 1.1671 | 0.6227 | | No log | 6.0 | 384 | 1.1743 | 0.6289 | | No log | 7.0 | 448 | 1.1795 | 0.6375 | | 1.0262 | 8.0 | 512 | 1.1847 | 0.6178 | | 1.0262 | 9.0 | 576 | 1.1877 | 0.6264 | | 1.0262 | 10.0 | 640 | 1.1866 | 0.6289 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BulBERT-cinexio-10epochs This model is a fine-tuned version of [mor40/BulBERT-finetuned-cinexio](https://huggingface.co/mor40/BulBERT-finetuned-cinexio) on the bgglue dataset. It achieves the following results on the evaluation set: - Loss: 1.1866 - Accuracy: 0.6289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 64 | 1.3334 | 0.5746 | | No log | 2.0 | 128 | 1.2053 | 0.6017 | | No log | 3.0 | 192 | 1.1826 | 0.6227 | | No log | 4.0 | 256 | 1.1826 | 0.6252 | | No log | 5.0 | 320 | 1.1671 | 0.6227 | | No log | 6.0 | 384 | 1.1743 | 0.6289 | | No log | 7.0 | 448 | 1.1795 | 0.6375 | | 1.0262 | 8.0 | 512 | 1.1847 | 0.6178 | | 1.0262 | 9.0 | 576 | 1.1877 | 0.6264 | | 1.0262 | 10.0 | 640 | 1.1866 | 0.6289 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
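A possible inference snippet (not in the original card). The Cinexio task in bgglue is built from Bulgarian movie reviews, and the mapping from label ids to classes is not documented here, so treat the returned labels accordingly.

```python
# Hypothetical usage sketch; the label-id-to-class mapping is not documented in the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="mor40/BulBERT-cinexio-10epochs")
print(classifier("Този филм беше страхотен!"))  # "This movie was great!" in Bulgarian
```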
{"base_model": "mor40/BulBERT-finetuned-cinexio", "datasets": ["bgglue"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BulBERT-cinexio-10epochs", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "bgglue", "type": "bgglue", "config": "cinexio", "split": "validation", "args": "cinexio"}, "metrics": [{"type": "accuracy", "value": 0.6288532675709001, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,616
RichardErkhov/KingNish_-_Llama-3.2-1B-Instruct-awq
RichardErkhov
null
[ "safetensors", "llama", "4-bit", "awq", "region:us" ]
2024-12-25T17:59:47Z
2024-12-25T18:00:49+00:00
11
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.2-1B-Instruct - AWQ - Model creator: https://huggingface.co/KingNish/ - Original model: https://huggingface.co/KingNish/Llama-3.2-1B-Instruct/ Original model description: --- base_model: meta-llama/Llama-3.2-1B-Instruct language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # unsloth/Llama-3.2-1B-Instruct For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. 
## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.2-1B-Instruct - AWQ - Model creator: https://huggingface.co/KingNish/ - Original model: https://huggingface.co/KingNish/Llama-3.2-1B-Instruct/ Original model description: --- base_model: meta-llama/Llama-3.2-1B-Instruct language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # unsloth/Llama-3.2-1B-Instruct For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. 
## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
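A possible way to load this AWQ checkpoint (not part of the original card): recent transformers releases can load AWQ-quantized weights directly when the `autoawq` package is installed and a CUDA GPU is available.

```python
# Hypothetical usage sketch for the AWQ-quantized checkpoint (assumes `pip install autoawq`).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/KingNish_-_Llama-3.2-1B-Instruct-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain AWQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```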
{}
task
[ "SUMMARIZATION" ]
45,617
facebook/fasttext-th-vectors
facebook
feature-extraction
[ "fasttext", "feature-extraction", "th", "arxiv:1607.04606", "arxiv:1802.06893", "arxiv:1607.01759", "arxiv:1612.03651", "license:cc-by-sa-3.0", "region:us" ]
2023-03-24T06:47:13Z
2023-06-03T22:16:50+00:00
4
0
--- language: th library_name: fasttext license: cc-by-sa-3.0 tags: - feature-extraction widget: - text: apple example_title: apple --- # fastText (Thai) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-th-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two two vectors are identical, the cosine similarity will be 1. 
For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. ```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{'e}gou, H{'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. 
Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
null
Non_BioNLP
# fastText (Thai) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-th-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. 
```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{'e}gou, H{'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. 
Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
{"language": "th", "library_name": "fasttext", "license": "cc-by-sa-3.0", "tags": ["feature-extraction"], "widget": [{"text": "apple", "example_title": "apple"}]}
task
[ "TEXT_CLASSIFICATION" ]
45,618
Alred/t5-small-finetuned-summarization-cnn
Alred
summarization
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-11-19T02:09:50Z
2022-11-19T03:22:38+00:00
28
0
--- datasets: - cnn_dailymail license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: t5-small-finetuned-summarization-cnn results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: train[:2%] args: 3.0.0 metrics: - type: rouge value: 24.4825 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-summarization-cnn This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 2.0105 - Rouge1: 24.4825 - Rouge2: 9.1573 - Rougel: 19.7135 - Rougelsum: 22.2551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 2.0389 | 1.0 | 718 | 2.0150 | 24.4413 | 9.1782 | 19.7202 | 22.2225 | | 1.9497 | 2.0 | 1436 | 2.0105 | 24.4825 | 9.1573 | 19.7135 | 22.2551 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-summarization-cnn This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 2.0105 - Rouge1: 24.4825 - Rouge2: 9.1573 - Rougel: 19.7135 - Rougelsum: 22.2551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 2.0389 | 1.0 | 718 | 2.0150 | 24.4413 | 9.1782 | 19.7202 | 22.2225 | | 1.9497 | 2.0 | 1436 | 2.0105 | 24.4825 | 9.1573 | 19.7135 | 22.2551 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
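A possible inference snippet (not in the original card). t5-small is conventionally prompted with a `summarize: ` prefix; whether this fine-tune preserved that convention is not stated, so the prefix here is an assumption.

```python
# Hypothetical usage sketch for the fine-tuned summarizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Alred/t5-small-finetuned-summarization-cnn"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: " + "Your news article text goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```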
{"datasets": ["cnn_dailymail"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-summarization-cnn", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "config": "3.0.0", "split": "train[:2%]", "args": "3.0.0"}, "metrics": [{"type": "rouge", "value": 24.4825, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
45,619
c01zaut/gemma-2-27b-rk3588-1.1.2
c01zaut
text-generation
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:2203.09509", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-10T07:05:09Z
2024-11-10T08:23:44+00:00
4
0
--- library_name: transformers license: gemma pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # gemma-2-27b-RK3588-1.1.2 This version of gemma-2-27b has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization. This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.1.2 ## Useful links: [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm) [RockhipNPU Reddit](https://reddit.com/r/RockchipNPU) [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/) Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531) Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit # Original Model Card for base model, gemma-2-27b, below: # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-27b", device="cuda", # replace with "mps" to run on a Mac device ) text = "Once upon a time," outputs = pipe(text, max_new_tokens=256) response = outputs[0]["generated_text"] print(response) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", device_map="auto", ) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model "google/gemma-2-27b" --prompt "What is the capital of Mexico?" ``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile. 
Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-27b", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. 
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models][foundation-models], including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. #### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. 
The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. 
* Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
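## RKLLM conversion sketch

Returning to the RK3588 conversion summarized at the top of this card: the `.rkllm` artifacts here were produced with the RKLLM toolkit (via the ez-er-rkllm-toolkit wrapper linked above). The snippet below is only a minimal illustration of what such a conversion flow looks like, assuming the toolkit's `RKLLM.load_huggingface` / `build` / `export_rkllm` API; it is not the exact script used for this repository, and parameter names and supported options should be verified against the rknn-llm documentation for version 1.1.2.

```python
# Minimal conversion sketch (illustrative only, NOT the exact script used for this repo).
# Assumes the rkllm-toolkit Python API; verify method and parameter names against
# the rknn-llm / ez-er-rkllm-toolkit docs for RKLLM 1.1.2 before relying on this.
from rkllm.api import RKLLM

llm = RKLLM()

# Load the source Hugging Face checkpoint (here: the base model of this card).
ret = llm.load_huggingface(model="google/gemma-2-27b")
assert ret == 0, "failed to load the Hugging Face model"

# Quantize for the RK3588 NPU; 'w8a8' is one of the dtypes listed above
# ('w8a8_g128', 'w8a8_g256', 'w8a8_g512' are the grouped variants).
ret = llm.build(
    do_quantization=True,
    quantized_dtype="w8a8",      # illustrative choice from the list above
    target_platform="rk3588",
)
assert ret == 0, "build/quantization failed"

# Export the NPU-ready artifact consumed by the RKLLM runtime on the board.
ret = llm.export_rkllm("./gemma-2-27b-w8a8.rkllm")
assert ret == 0, "export failed"
```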
null
Non_BioNLP
# gemma-2-27b-RK3588-1.1.2 This version of gemma-2-27b has been converted to run on the RK3588 NPU using ['w8a8', 'w8a8_g128', 'w8a8_g256', 'w8a8_g512'] quantization. This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.1.2 ## Useful links: [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm) [RockhipNPU Reddit](https://reddit.com/r/RockchipNPU) [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/) Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531) Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit # Original Model Card for base model, gemma-2-27b, below: # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-27b", device="cuda", # replace with "mps" to run on a Mac device ) text = "Once upon a time," outputs = pipe(text, max_new_tokens=256) response = outputs[0]["generated_text"] print(response) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", device_map="auto", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model "google/gemma-2-27b" --prompt "What is the capital of Mexico?" 
``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-27b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile. Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-27b", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. 
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models][foundation-models], including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
#### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. 
These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. 
[rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
45,620
HanJisu/distilbert-base-uncased-finetuned-emotion
HanJisu
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-30T10:30:18Z
2024-05-30T10:36:33+00:00
4
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.925 name: Accuracy - type: f1 value: 0.9251247834824673 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2225 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8367 | 1.0 | 250 | 0.3265 | 0.904 | 0.9039 | | 0.2548 | 2.0 | 500 | 0.2225 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
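## Example usage

Since the card does not yet include an inference snippet, here is a minimal, illustrative way to query the fine-tuned checkpoint with the Transformers `pipeline` API. The model id is this repository's id; the example sentence and the printed output are illustrative, and the label names depend on whether `id2label` was saved with the checkpoint (otherwise generic `LABEL_0` … `LABEL_5` are returned, following the emotion dataset's class order).

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from this repository.
classifier = pipeline(
    "text-classification",
    model="HanJisu/distilbert-base-uncased-finetuned-emotion",
)

# Score a single sentence; pass top_k=None to get scores for all six classes.
prediction = classifier("I can't believe how well the experiment turned out!")
print(prediction)
# Illustrative output: [{'label': 'joy', 'score': 0.98}]
# If id2label was not saved with the checkpoint, labels appear as LABEL_0 ... LABEL_5,
# matching the emotion dataset's class order (sadness, joy, love, anger, fear, surprise).
```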
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2225 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8367 | 1.0 | 250 | 0.3265 | 0.904 | 0.9039 | | 0.2548 | 2.0 | 500 | 0.2225 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.9251247834824673, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,621
RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf
RichardErkhov
null
[ "gguf", "arxiv:2308.14508", "arxiv:2402.13718", "endpoints_compatible", "region:us", "conversational" ]
2024-08-20T00:10:45Z
2024-08-20T01:51:12+00:00
93
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-80K-QLoRA-Merged - GGUF - Model creator: https://huggingface.co/namespace-Pt/ - Original model: https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K.gguf) | Q4_K | 4.58GB | | 
[Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: mit pipeline_tag: text-generation --- <div align="center"> <h1>Llama-3-8B-Instruct-80K-QLoRA-Merged</h1> <a href="https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora">[Data&Code]</a> </div> We extend the context length of Llama-3-8B-Instruct to 80K using QLoRA and 3.5K long-context training data synthesized from GPT-4. The entire training cycle is super efficient, which takes 8 hours on a 8xA800 (80G) machine. Yet, the resulted model achieves remarkable performance on a series of downstream long-context evaluation benchmarks. **NOTE**: This model is the result of merging [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA). # Evaluation All the following evaluation results can be reproduced following instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora). ## Needle in a Haystack We evaluate the model on the Needle-In-A-HayStack task using the official setting. The blue vertical line indicates the training context length, i.e. 80K. <img src="data/needle.png"></img> ## LongBench We evaluate the model on [LongBench](https://arxiv.org/abs/2308.14508) using 32K context length and the official prompt template. For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length. 
|Model|Single-Doc QA|Multi-Doc QA|Summarization|Few-Shot Learning|Synthetic|Code|Avg| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|37.33|36.04|26.83|**69.56**|37.75|53.24|43.20| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|37.29|31.20|26.18|67.25|44.25|**62.71**|43.73| |Llama-3-8B-Instruct-80K-QLoRA-Merged|**43.57**|**43.07**|**28.93**|69.15|**48.50**|51.95|**47.19**| ## InfiniteBench We evaluate the model on [InfiniteBench](https://arxiv.org/pdf/2402.13718.pdf) using 80K context length and the official prompt template. The results of GPT-4 is copied from the [paper](https://arxiv.org/pdf/2402.13718.pdf). For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length. |Model|LongBookQA Eng|LongBookSum Eng| |:-:|:-:|:-:| |GPT-4|22.22|14.73| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|7.00|**16.40**| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|20.30|10.34| |Llama-3-8B-Instruct-80K-QLoRA-Merged|**30.92**|14.73| ## Topic Retrieval We evaluate the model on [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) task with `[5,10,15,20,25,30,40,50,60,70]` topics. <img src="data/topic.png"></img> ## MMLU We evaluate the model's zero-shot performance on MMLU benchmark as a reflection of its short-context capability. |Model|STEM|Social Sciences|Humanities|Others|Avg| |:-:|:-:|:-:|:-:|:-:|:-:| |[Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|35.92|54.37|51.74|51.42|47.22| |[Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|48.79|69.95|64.99|61.64|60.10| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|**53.87**|**75.66**|**69.44**|69.75|**65.91**| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|52.10|73.26|67.15|**69.80**|64.34| |Llama-3-8B-Instruct-80K-QLoRA-Merged|53.10|73.24|67.32|68.79|64.44| # Environment ```bash torch==2.2.2 flash_attn==2.5.6 transformers==4.39.3 ``` # Usage ```python import json import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged" torch_dtype = torch.bfloat16 # place the model on GPU device_map = {"": "cuda"} tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map=device_map, attn_implementation="flash_attention_2", ).eval() with torch.no_grad(): # short context messages = [{"role": "user", "content": "Tell me about yourself."}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**inputs, max_new_tokens=50)[:, inputs["input_ids"].shape[1]:] print(f"Input Length: {inputs['input_ids'].shape[1]}") print(f"Output: {tokenizer.decode(outputs[0])}") # long context with open("data/narrativeqa.json", encoding="utf-8") as f: example = json.load(f) messages = [{"role": "user", "content": example["context"]}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, 
max_new_tokens=20)[:, inputs["input_ids"].shape[1]:] print("*"*20) print(f"Input Length: {inputs['input_ids'].shape[1]}") print(f"Answers: {example['answer']}") print(f"Prediction: {tokenizer.decode(outputs[0])}") ``` You may observe messages like: `This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (8192). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` or `Setting pad_token_id to eos_token_id:128001 for open-end generation`. Both messages are harmless for this model and can be safely ignored.
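# Running a GGUF quant (example)

The table above lists the available quantization levels, but the card does not show how to load a GGUF file. As a minimal, illustrative sketch (not part of the original upstream instructions), the snippet below runs one of these quants with the `llama-cpp-python` bindings; the chosen file (Q4_K_M) and generation settings are arbitrary examples, and `n_ctx` can be raised toward the model's 80K training length if memory allows.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo; Q4_K_M is an arbitrary quality/size trade-off.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf",
    filename="Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf",
)

# n_ctx=8192 keeps memory modest; the merged model supports much longer contexts,
# so this can be raised at a corresponding KV-cache memory cost.
llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)

# The Llama 3 chat template is read from the GGUF metadata when available.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me about yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```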
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-80K-QLoRA-Merged - GGUF - Model creator: https://huggingface.co/namespace-Pt/ - Original model: https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K.gguf) | Q4_K | 4.58GB | | 
[Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-8B-Instruct-80K-QLoRA-Merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/namespace-Pt_-_Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf/blob/main/Llama-3-8B-Instruct-80K-QLoRA-Merged.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: mit pipeline_tag: text-generation --- <div align="center"> <h1>Llama-3-8B-Instruct-80K-QLoRA-Merged</h1> <a href="https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora">[Data&Code]</a> </div> We extend the context length of Llama-3-8B-Instruct to 80K using QLoRA and 3.5K long-context training data synthesized from GPT-4. The entire training cycle is super efficient, which takes 8 hours on a 8xA800 (80G) machine. Yet, the resulted model achieves remarkable performance on a series of downstream long-context evaluation benchmarks. **NOTE**: This model is the result of merging [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA). # Evaluation All the following evaluation results can be reproduced following instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora). ## Needle in a Haystack We evaluate the model on the Needle-In-A-HayStack task using the official setting. The blue vertical line indicates the training context length, i.e. 80K. <img src="data/needle.png"></img> ## LongBench We evaluate the model on [LongBench](https://arxiv.org/abs/2308.14508) using 32K context length and the official prompt template. For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length. 
|Model|Single-Doc QA|Multi-Doc QA|Summarization|Few-Shot Learning|Synthetic|Code|Avg| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|37.33|36.04|26.83|**69.56**|37.75|53.24|43.20| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|37.29|31.20|26.18|67.25|44.25|**62.71**|43.73| |Llama-3-8B-Instruct-80K-QLoRA-Merged|**43.57**|**43.07**|**28.93**|69.15|**48.50**|51.95|**47.19**| ## InfiniteBench We evaluate the model on [InfiniteBench](https://arxiv.org/pdf/2402.13718.pdf) using 80K context length and the official prompt template. The results of GPT-4 is copied from the [paper](https://arxiv.org/pdf/2402.13718.pdf). For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length. |Model|LongBookQA Eng|LongBookSum Eng| |:-:|:-:|:-:| |GPT-4|22.22|14.73| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|7.00|**16.40**| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|20.30|10.34| |Llama-3-8B-Instruct-80K-QLoRA-Merged|**30.92**|14.73| ## Topic Retrieval We evaluate the model on [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) task with `[5,10,15,20,25,30,40,50,60,70]` topics. <img src="data/topic.png"></img> ## MMLU We evaluate the model's zero-shot performance on MMLU benchmark as a reflection of its short-context capability. |Model|STEM|Social Sciences|Humanities|Others|Avg| |:-:|:-:|:-:|:-:|:-:|:-:| |[Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|35.92|54.37|51.74|51.42|47.22| |[Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|48.79|69.95|64.99|61.64|60.10| |[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|**53.87**|**75.66**|**69.44**|69.75|**65.91**| |[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)|52.10|73.26|67.15|**69.80**|64.34| |Llama-3-8B-Instruct-80K-QLoRA-Merged|53.10|73.24|67.32|68.79|64.44| # Environment ```bash torch==2.2.2 flash_attn==2.5.6 transformers==4.39.3 ``` # Usage ```python import json import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged" torch_dtype = torch.bfloat16 # place the model on GPU device_map = {"": "cuda"} tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map=device_map, attn_implementation="flash_attention_2", ).eval() with torch.no_grad(): # short context messages = [{"role": "user", "content": "Tell me about yourself."}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**inputs, max_new_tokens=50)[:, inputs["input_ids"].shape[1]:] print(f"Input Length: {inputs['input_ids'].shape[1]}") print(f"Output: {tokenizer.decode(outputs[0])}") # long context with open("data/narrativeqa.json", encoding="utf-8") as f: example = json.load(f) messages = [{"role": "user", "content": example["context"]}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda") outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, 
max_new_tokens=20)[:, inputs["input_ids"].shape[1]:] print("*"*20) print(f"Input Length: {inputs['input_ids'].shape[1]}") print(f"Answers: {example['answer']}") print(f"Prediction: {tokenizer.decode(outputs[0])}") ``` You may observe messages like: `This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (8192). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` or `Setting pad_token_id to eos_token_id:128001 for open-end generation`. These messages are expected: the predefined maximum length in the configuration still reads 8192 even though the model was trained for 80K context, and the pad token is simply being set to the EOS token. Both are harmless and can be safely ignored.
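The table above lists the GGUF files, but the card does not show how to load one. A minimal sketch with `llama-cpp-python`, assuming you have downloaded one of the quantized files locally (the path and context size below are placeholders, not recommendations from the quant author):

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quant you downloaded from the table above.
llm = Llama(
    model_path="./Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf",
    n_ctx=8192,  # raise this (memory permitting) to exploit the extended context
)
result = llm("Tell me about yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```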
{}
task
[ "SUMMARIZATION" ]
45,622
authormist/authormist-originality
authormist
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "ai-detection", "paraphrasing", "originality", "privacy", "conversational", "en", "dataset:checkgpt", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-03-10T14:50:13Z
2025-03-10T15:38:21+00:00
17
0
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: - checkgpt language: en library_name: transformers license: mit pipeline_tag: text-generation tags: - text-generation - ai-detection - paraphrasing - originality - privacy model_type: causal-lm --- # AuthorMist Originality [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-AuthorMist-blue)](https://huggingface.co/authormist/originality) [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) ## Overview AuthorMist Originality is a specialized language model designed to transform AI-generated text into more human-like writing while preserving the original meaning. This model was developed using reinforcement learning techniques to specifically evade AI text detection systems, with a focus on Originality.ai's detection algorithms. The model is based on Qwen2.5-3B Instruct and has been fine-tuned using Group Relative Policy Optimization (GRPO) with detector feedback as a reward signal. AuthorMist Originality demonstrates strong performance in reducing detectability across multiple AI text detection systems while maintaining high semantic similarity with the original text. ## Key Features - **Detector Evasion**: Trained specifically to evade Originality.ai's detection algorithms, with strong cross-detector generalization - **Meaning Preservation**: Maintains high semantic similarity (>0.94) with the original text - **Natural Output**: Produces fluent, coherent text that reads naturally - **Broad Applicability**: Effective across various domains including academic, technical, and creative writing ## Model Details - **Base Model**: Qwen2.5-3B Instruct - **Training Method**: Reinforcement Learning with Group Relative Policy Optimization (GRPO) - **Training Data**: 10,000 human-written abstracts from the CheckGPT dataset with corresponding AI-generated versions - **Domains Covered**: Computer Science, Humanities, Social Sciences, Physics, and more - **Text Length Support**: Optimized for texts ranging from 100 to 500 words ## Performance AuthorMist Originality demonstrates exceptional performance in evading AI text detection: - **Mean AUROC**: 0.49 across six major detection systems - **Mean F1-score**: 0.09 across all tested detectors - **Semantic Similarity**: >0.94 with original text The model shows particularly strong performance against: - Hello SimpleAI (AUROC: 0.07) - Sapling (AUROC: 0.13) - Winston.ai (AUROC: 0.35) ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer model_name = "authormist/authormist-originality" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Prepare input text ai_text = "Your AI-generated text here..." prompt = f"""Please paraphrase the following text to make it more human-like while preserving the original meaning: {ai_text} Paraphrased text:""" # Generate paraphrased text inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate( inputs.input_ids, max_new_tokens=512, temperature=0.7, top_p=0.9, do_sample=True ) paraphrased_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(paraphrased_text.split("Paraphrased text:")[1].strip()) ``` ## Ethical Considerations AuthorMist Originality is released for research purposes to advance understanding of AI text detection limitations and privacy-preserving technologies. 
We acknowledge the dual-use nature of this technology and emphasize the following ethical considerations: 1. **Academic Integrity**: This model should not be used to misrepresent AI-generated content as human-written in academic settings where such distinctions are ethically relevant. 2. **Transparency**: We encourage users to maintain transparency about the use of AI assistance in content creation, even when using privacy-enhancing tools like AuthorMist. 3. **Privacy Protection**: The primary legitimate use case for this technology is protecting author privacy and preventing unfair discrimination against AI-assisted writing in contexts where such assistance is permissible. 4. **Research Value**: This model provides valuable insights into the limitations of current AI detection systems and contributes to the ongoing research dialogue about AI text detection and privacy. ## Citation If you use AuthorMist Originality in your research, please cite our paper: ```bibtex @article{authormist2025, title={AuthorMist: Evading AI Text Detectors with Reinforcement Learning}, author={David, Isaac and Gervais, Arthur}, journal={arXiv preprint}, year={2025} } ``` ## License This model is released under the [MIT License](https://opensource.org/licenses/MIT). ## Acknowledgments We thank the developers of Qwen2.5 for the base model and the creators of the CheckGPT dataset for providing valuable training data.
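The Performance section above reports AUROC and F1 against detectors. As a point of reference, this is roughly how such scores are computed with scikit-learn; the labels and detector scores below are invented purely for illustration and are not taken from the paper:

```python
from sklearn.metrics import roc_auc_score, f1_score

# 1 = AI-generated, 0 = human-written; scores are a detector's "AI probability".
labels = [1, 1, 0, 0, 1, 0]                # invented ground truth
scores = [0.4, 0.2, 0.1, 0.3, 0.6, 0.2]    # invented detector outputs

print("AUROC:", roc_auc_score(labels, scores))
print("F1:", f1_score(labels, [s >= 0.5 for s in scores]))
```

Lower AUROC and F1 here indicate that the detector struggles to separate the paraphrased text from human writing.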
null
Non_BioNLP
# AuthorMist Originality [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-AuthorMist-blue)](https://huggingface.co/authormist/originality) [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) ## Overview AuthorMist Originality is a specialized language model designed to transform AI-generated text into more human-like writing while preserving the original meaning. This model was developed using reinforcement learning techniques to specifically evade AI text detection systems, with a focus on Originality.ai's detection algorithms. The model is based on Qwen2.5-3B Instruct and has been fine-tuned using Group Relative Policy Optimization (GRPO) with detector feedback as a reward signal. AuthorMist Originality demonstrates strong performance in reducing detectability across multiple AI text detection systems while maintaining high semantic similarity with the original text. ## Key Features - **Detector Evasion**: Trained specifically to evade Originality.ai's detection algorithms, with strong cross-detector generalization - **Meaning Preservation**: Maintains high semantic similarity (>0.94) with the original text - **Natural Output**: Produces fluent, coherent text that reads naturally - **Broad Applicability**: Effective across various domains including academic, technical, and creative writing ## Model Details - **Base Model**: Qwen2.5-3B Instruct - **Training Method**: Reinforcement Learning with Group Relative Policy Optimization (GRPO) - **Training Data**: 10,000 human-written abstracts from the CheckGPT dataset with corresponding AI-generated versions - **Domains Covered**: Computer Science, Humanities, Social Sciences, Physics, and more - **Text Length Support**: Optimized for texts ranging from 100 to 500 words ## Performance AuthorMist Originality demonstrates exceptional performance in evading AI text detection: - **Mean AUROC**: 0.49 across six major detection systems - **Mean F1-score**: 0.09 across all tested detectors - **Semantic Similarity**: >0.94 with original text The model shows particularly strong performance against: - Hello SimpleAI (AUROC: 0.07) - Sapling (AUROC: 0.13) - Winston.ai (AUROC: 0.35) ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer model_name = "authormist/authormist-originality" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Prepare input text ai_text = "Your AI-generated text here..." prompt = f"""Please paraphrase the following text to make it more human-like while preserving the original meaning: {ai_text} Paraphrased text:""" # Generate paraphrased text inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate( inputs.input_ids, max_new_tokens=512, temperature=0.7, top_p=0.9, do_sample=True ) paraphrased_text = tokenizer.decode(outputs[0], skip_special_tokens=True) print(paraphrased_text.split("Paraphrased text:")[1].strip()) ``` ## Ethical Considerations AuthorMist Originality is released for research purposes to advance understanding of AI text detection limitations and privacy-preserving technologies. We acknowledge the dual-use nature of this technology and emphasize the following ethical considerations: 1. **Academic Integrity**: This model should not be used to misrepresent AI-generated content as human-written in academic settings where such distinctions are ethically relevant. 2. 
**Transparency**: We encourage users to maintain transparency about the use of AI assistance in content creation, even when using privacy-enhancing tools like AuthorMist. 3. **Privacy Protection**: The primary legitimate use case for this technology is protecting author privacy and preventing unfair discrimination against AI-assisted writing in contexts where such assistance is permissible. 4. **Research Value**: This model provides valuable insights into the limitations of current AI detection systems and contributes to the ongoing research dialogue about AI text detection and privacy. ## Citation If you use AuthorMist Originality in your research, please cite our paper: ```bibtex @article{authormist2025, title={AuthorMist: Evading AI Text Detectors with Reinforcement Learning}, author={David, Isaac and Gervais, Arthur}, journal={arXiv preprint}, year={2025} } ``` ## License This model is released under the [MIT License](https://opensource.org/licenses/MIT). ## Acknowledgments We thank the developers of Qwen2.5 for the base model and the creators of the CheckGPT dataset for providing valuable training data.
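The card reports semantic similarity above 0.94 between the original and paraphrased text but does not show a measurement recipe. A hedged sketch using `sentence-transformers`; the specific embedding model here is an assumption, not something the card specifies:

```python
from sentence_transformers import SentenceTransformer, util

# Assumed general-purpose embedder; the paper's exact similarity model is not stated here.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

original = "Your AI-generated text here..."
paraphrase = "The paraphrased output produced by AuthorMist..."

embeddings = embedder.encode([original, paraphrase], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Cosine similarity: {similarity:.3f}")
```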
{"base_model": "Qwen/Qwen2.5-3B-Instruct", "datasets": ["checkgpt"], "language": "en", "library_name": "transformers", "license": "mit", "pipeline_tag": "text-generation", "tags": ["text-generation", "ai-detection", "paraphrasing", "originality", "privacy"], "model_type": "causal-lm"}
task
[ "SEMANTIC_SIMILARITY", "PARAPHRASING" ]
45,623
UBC-NLP/toucan-base
UBC-NLP
text-generation
[ "safetensors", "t5", "UBC", "African", "pytorch", "Chaeetah", "DLNLP", "text-generation", "aar", "ach", "afr", "aka", "amh", "bam", "bas", "bem", "btg", "eng", "ewe", "fon", "fra", "hau", "ibo", "kbp", "lgg", "lug", "mlg", "nyn", "orm", "som", "sot", "swa", "tir", "yor", "teo", "gez", "wal", "fan", "kau", "kin", "kon", "lin", "nya", "pcm", "ssw", "tsn", "tso", "twi", "wol", "xho", "zul", "nnb", "swc", "ara", "region:us" ]
2024-08-11T09:00:55Z
2025-03-13T17:03:30+00:00
358
1
--- language: - aar - ach - afr - aka - amh - bam - bas - bem - btg - eng - ewe - fon - fra - hau - ibo - kbp - lgg - lug - mlg - nyn - orm - som - sot - swa - tir - yor - teo - gez - wal - fan - kau - kin - kon - lin - nya - pcm - ssw - tsn - tso - twi - wol - xho - zul - nnb - swc - ara pipeline_tag: text-generation tags: - UBC - African - pytorch - Chaeetah - DLNLP --- <div style='text-align: justify;'> This is the repository accompanying our ACL 2024 paper [Toucan: Many-to-Many Translation for 150 African Language Pairs](https://aclanthology.org/2024.findings-acl.781/). We address a notable gap in Natural Language Processing (NLP) by introducing a collection of resources designed to improve Machine Translation (MT) for low-resource languages, with a specific focus on African languages. First, we introduce two language models (LMs), Cheetah-1.2B and Cheetah-3.7B, with 1.2 billion and 3.7 billion parameters respectively. Next, we finetune the aforementioned models to create Toucan, an Afrocentric machine translation model designed to support 156 African language pairs. To evaluate Toucan, we carefully develop an extensive machine translation benchmark, dubbed AfroLingu-MT, tailored for evaluating machine translation. Toucan significantly outperforms other models, showcasing its remarkable performance on MT for African languages. Finally, we train a new model, spBLEU_1K, to enhance translation evaluation metrics, covering 1K languages, including 614 African languages. This work aims to advance the field of NLP, fostering cross-cultural understanding and knowledge exchange, particularly in regions with limited language resources such as Africa. </div> ## Models <div style='text-align: justify;'> To effectively train an MT language model for African languages, it is crucial to start with a powerful, Afrocentric pretrained language model. For this purpose, we select Cheetah (Adebara et al., 2024), a recently introduced SoTA model with extensive coverage encompassing 517 African languages. One limitation of Cheetah, however, is that it is available only in a base architecture, featuring 580M parameters. Given our objective to develop a large-scale language model for machine translation capable of serving 156 directions, this base model does not fully meet our requirements. To address this limitation, we embark on training larger and more expansive Afrocentric sequence-to-sequence models. We focus on two sizes: one model with 1.2B parameters and another with 3.7B parameters. We refer to the new models “Cheetah-1.2B” and “Cheetah-3.7B”, respectively, to reflect their enhanced capabilities and parameter scale. These models represent a significant advancement in our efforts to improve machine translation for African languages, offering greater capacities in handling the rich linguistic nuances of African languages. Cheetah Pretraining. To train the new Cheetah models, we utilize the same pre-training dataset employed in training the original Cheetah-base model (Adebara et al., 2024). This strategic choice ensures consistency in the foundational data across models, enabling the advanced Cheetah-1.2B and Cheetah-3.7B versions to build upon the rich linguistic diversity captured in the original dataset. We refer to (Adebara et al., 2024) for more information about the pretraining data of Cheetah models. We employ a learning rate of 0.01, a batch size of 1,024 sequences, and a maximum sequence length of 1,024. Each model undergoes pretraining for 1 million steps.
The training process is conducted on Google Cloud TPU with 128 cores (v3 − 128) provided by the TensorFlow Research Cloud (TFRC). We provide additional details on pretraining in Section B in the Appendix. </div> - Please refer to [**supported-languages**]("https://github.com/UBC-NLP/Cheetah/blob/main/supported-languages.txt") - More details about Cheetah's pretraning data, visit Cheetah's GitHub [**Cheetah paper GitHub**]("https://github.com/UBC-NLP/Cheetah") - More details about Toucan's pretraning data, visit Toucan's GitHub [**Toucan paper GitHub**]("https://github.com/UBC-NLP/Toucan") | **Cheetah Models** | **Link** | |---------|:------------------:| | 🔥**Cheetah-base**🔥| [https://huggingface.co/UBC-NLP/cheetah-base](https://huggingface.co/UBC-NLP/cheetah-base) | 🔥**Cheetah-1.2B**🔥| [https://huggingface.co/UBC-NLP/cheetah-1.2B](https://huggingface.co/UBC-NLP/cheetah-1.2B) | **Tocan Models** | **Link** | |---------|:------------------:| | 🔥**Toucan-base**🔥| [https://huggingface.co/UBC-NLP/toucan-base](https://huggingface.co/UBC-NLP/toucan-base) | 🔥**Toucan-1.2B**🔥| [https://huggingface.co/UBC-NLP/toucan-1.2B](https://huggingface.co/UBC-NLP/toucan-1.2B) # 3. How to use Cheetah-1.2B model Below is an example for using **Cheetah-1.2B** predict masked tokens. ``` bash from transformers import T5Tokenizer, AutoModelForSeq2SeqLM tokenizer = T5Tokenizer.from_pretrained("UBC-NLP/cheetah-1.2B") model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/cheetah-1.2B") yor_prompt="ìròyìn kan nípa owó ìjọba <extra_id_0> kan" input_ids = tokenizer(yor_prompt, return_tensors="pt").input_ids outputs = model.generate(input_ids) print("Cheetah-1.2B - Tokenized input:", tokenizer.tokenize(yor_prompt)) print("Cheetah-1.2B - Decoded output:", tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output: ```bash Cheetah-1.2B - Tokenized input: ['▁ìròyìn', '▁kan', '▁nípa', '▁owó', '▁ìjọba', '<extra_id_0>', '▁kan'] Cheetah-1.2B - Decoded output: Nàìjíríà ``` # 3. How to use Toucan model To translate using Toucan models, use the target language ISO-3 code as preix. Below the supported langauges ``` lang_names={ "aar": "Afar", "ach": "Acholi", "afr": "Afrikaans", "aka": "Akan", "amh": "Amharic", "bam": "Bambara", "bas": "Basaa", "bem": "Bemba", "btg": "Bete Gagnoa", "eng": "English", "ewe": "Ewe", "fon": "Fon", "fra": "French", "hau": "Hausa", "ibo": "Igbo", "kbp": "Kabiye", "lgg": "Lugbara", "lug": "Luganda", "mlg": "Malagasy", "nyn": "Nyakore", "orm": "Oromo", "som": "Somali", "sot": "Sesotho", "swa": "Swahili", "tir": "Tigrinya", "yor": "Yoruba", "teo": "Ateso", "gez": "Geez", "wal": "Wolaytta", "fan": "Fang", "kau": "Kanuri", "kin": "Kinyawanda", "kon": "Kongo", "lin": "Lingala", "nya": "Chichewa", "pcm": "Nigerian Pidgin", "ssw": "Siswati", "tsn": "Setswana", "tso": "Tsonga", "twi": "Twi", "wol": "Wolof", "xho": "Xhosa", "zul": "Zulu", "nnb": "Nande", "swc": "Swahili Congo", "ara": "Arabic" } ``` Below is an example for translating using **Toucan-base**. 
``` bash from transformers import AutoTokenizer, MT5ForConditionalGeneration import torch tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/toucan-base") model = MT5ForConditionalGeneration.from_pretrained("UBC-NLP/toucan-base", torch_dtype=torch.float16, device_map="auto") model.eval() #Translate from Enlglish to Zulu text="zul: Clear all items from the recent documents list" input_ids = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True).to("cuda:0") with torch.no_grad(): generated_ids = model.generate(**input_ids, num_beams=5, max_new_tokens=len(text), do_sample=True, temperature=0.6, top_p=0.9) print("Toucan-base - translation:", tokenizer.batch_decode(generated_ids, skip_special_tokens=True, skip_prompt=True)[0]) ``` Output: ```bash Toucan-base - translation: Vala zonke izinto kusuka kwihlu lamadokhumende elidlule ``` ## Citation If you use the pre-trained model (Cheetah-1.2B) for your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows (to be updated): **Cheetah's Paper** ``` @inproceedings{adebara-etal-2024-cheetah, title = "Cheetah: Natural Language Generation for 517 {A}frican Languages", author = "Adebara, Ife and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand and virtual meeting", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.691", pages = "12798--12823", } ``` **Toucan's Paper** ``` @inproceedings{elmadany2024toucan, title={Toucan: Many-to-Many Translation for 150 African Language Pairs}, author={Elmadany, Abdelrahim and Adebara, Ife and Abdul-Mageed, Muhammad}, booktitle={Findings of the Association for Computational Linguistics ACL 2024}, pages={13189--13206}, year={2024} } ``` ## Acknowledgments We gratefully acknowledges support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), [Digital Research Alliance of Canada](https://alliancecan.ca), [UBC ARC-Sockeye](https://arc.ubc.ca/ubc-arc-sockeye), Advanced Micro Devices, Inc. (AMD), and Google. Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of CRC, NSERC, SSHRC, CFI, the Alliance, AMD, Google, or UBC ARC-Sockeye.
null
Non_BioNLP
<div style='text-align: justify;'> This is the repository accompanying our ACL 2024 paper [Toucan: Many-to-Many Translation for 150 African Language Pairs](https://aclanthology.org/2024.findings-acl.781/). We address a notable gap in Natural Language Processing (NLP) by introducing a collection of resources designed to improve Machine Translation (MT) for low-resource languages, with a specific focus on African languages. First, we introduce two language models (LMs), Cheetah-1.2B and Cheetah-3.7B, with 1.2 billion and 3.7 billion parameters respectively. Next, we finetune the aforementioned models to create Toucan, an Afrocentric machine translation model designed to support 156 African language pairs. To evaluate Toucan, we carefully develop an extensive machine translation benchmark, dubbed AfroLingu-MT, tailored for evaluating machine translation. Toucan significantly outperforms other models, showcasing its remarkable performance on MT for African languages. Finally, we train a new model, spBLEU_1K, to enhance translation evaluation metrics, covering 1K languages, including 614 African languages. This work aims to advance the field of NLP, fostering cross-cultural understanding and knowledge exchange, particularly in regions with limited language resources such as Africa. </div> ## Models <div style='text-align: justify;'> To effectively train an MT language model for African languages, it is crucial to start with a powerful, Afrocentric pretrained language model. For this purpose, we select Cheetah (Adebara et al., 2024), a recently introduced SoTA model with extensive coverage encompassing 517 African languages. One limitation of Cheetah, however, is that it is available only in a base architecture, featuring 580M parameters. Given our objective to develop a large-scale language model for machine translation capable of serving 156 directions, this base model does not fully meet our requirements. To address this limitation, we embark on training larger and more expansive Afrocentric sequence-to-sequence models. We focus on two sizes: one model with 1.2B parameters and another with 3.7B parameters. We refer to the new models “Cheetah-1.2B” and “Cheetah-3.7B”, respectively, to reflect their enhanced capabilities and parameter scale. These models represent a significant advancement in our efforts to improve machine translation for African languages, offering greater capacities in handling the rich linguistic nuances of African languages. Cheetah Pretraining. To train the new Cheetah models, we utilize the same pre-training dataset employed in training the original Cheetah-base model (Adebara et al., 2024). This strategic choice ensures consistency in the foundational data across models, enabling the advanced Cheetah-1.2B and Cheetah-3.7B versions to build upon the rich linguistic diversity captured in the original dataset. We refer to (Adebara et al., 2024) for more information about the pretraining data of Cheetah models. We employ a learning rate of 0.01, a batch size of 1,024 sequences, and a maximum sequence length of 1,024. Each model undergoes pretraining for 1 million steps. The training process is conducted on Google Cloud TPU with 128 cores (v3 − 128) provided by the TensorFlow Research Cloud (TFRC). We provide additional details on pretraining in Section B in the Appendix.
</div> - Please refer to [**supported-languages**]("https://github.com/UBC-NLP/Cheetah/blob/main/supported-languages.txt") - More details about Cheetah's pretraning data, visit Cheetah's GitHub [**Cheetah paper GitHub**]("https://github.com/UBC-NLP/Cheetah") - More details about Toucan's pretraning data, visit Toucan's GitHub [**Toucan paper GitHub**]("https://github.com/UBC-NLP/Toucan") | **Cheetah Models** | **Link** | |---------|:------------------:| | 🔥**Cheetah-base**🔥| [https://huggingface.co/UBC-NLP/cheetah-base](https://huggingface.co/UBC-NLP/cheetah-base) | 🔥**Cheetah-1.2B**🔥| [https://huggingface.co/UBC-NLP/cheetah-1.2B](https://huggingface.co/UBC-NLP/cheetah-1.2B) | **Tocan Models** | **Link** | |---------|:------------------:| | 🔥**Toucan-base**🔥| [https://huggingface.co/UBC-NLP/toucan-base](https://huggingface.co/UBC-NLP/toucan-base) | 🔥**Toucan-1.2B**🔥| [https://huggingface.co/UBC-NLP/toucan-1.2B](https://huggingface.co/UBC-NLP/toucan-1.2B) # 3. How to use Cheetah-1.2B model Below is an example for using **Cheetah-1.2B** predict masked tokens. ``` bash from transformers import T5Tokenizer, AutoModelForSeq2SeqLM tokenizer = T5Tokenizer.from_pretrained("UBC-NLP/cheetah-1.2B") model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/cheetah-1.2B") yor_prompt="ìròyìn kan nípa owó ìjọba <extra_id_0> kan" input_ids = tokenizer(yor_prompt, return_tensors="pt").input_ids outputs = model.generate(input_ids) print("Cheetah-1.2B - Tokenized input:", tokenizer.tokenize(yor_prompt)) print("Cheetah-1.2B - Decoded output:", tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output: ```bash Cheetah-1.2B - Tokenized input: ['▁ìròyìn', '▁kan', '▁nípa', '▁owó', '▁ìjọba', '<extra_id_0>', '▁kan'] Cheetah-1.2B - Decoded output: Nàìjíríà ``` # 3. How to use Toucan model To translate using Toucan models, use the target language ISO-3 code as preix. Below the supported langauges ``` lang_names={ "aar": "Afar", "ach": "Acholi", "afr": "Afrikaans", "aka": "Akan", "amh": "Amharic", "bam": "Bambara", "bas": "Basaa", "bem": "Bemba", "btg": "Bete Gagnoa", "eng": "English", "ewe": "Ewe", "fon": "Fon", "fra": "French", "hau": "Hausa", "ibo": "Igbo", "kbp": "Kabiye", "lgg": "Lugbara", "lug": "Luganda", "mlg": "Malagasy", "nyn": "Nyakore", "orm": "Oromo", "som": "Somali", "sot": "Sesotho", "swa": "Swahili", "tir": "Tigrinya", "yor": "Yoruba", "teo": "Ateso", "gez": "Geez", "wal": "Wolaytta", "fan": "Fang", "kau": "Kanuri", "kin": "Kinyawanda", "kon": "Kongo", "lin": "Lingala", "nya": "Chichewa", "pcm": "Nigerian Pidgin", "ssw": "Siswati", "tsn": "Setswana", "tso": "Tsonga", "twi": "Twi", "wol": "Wolof", "xho": "Xhosa", "zul": "Zulu", "nnb": "Nande", "swc": "Swahili Congo", "ara": "Arabic" } ``` Below is an example for translating using **Toucan-base**. 
``` bash from transformers import AutoTokenizer, MT5ForConditionalGeneration import torch tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/toucan-base") model = MT5ForConditionalGeneration.from_pretrained("UBC-NLP/toucan-base", torch_dtype=torch.float16, device_map="auto") model.eval() #Translate from Enlglish to Zulu text="zul: Clear all items from the recent documents list" input_ids = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True).to("cuda:0") with torch.no_grad(): generated_ids = model.generate(**input_ids, num_beams=5, max_new_tokens=len(text), do_sample=True, temperature=0.6, top_p=0.9) print("Toucan-base - translation:", tokenizer.batch_decode(generated_ids, skip_special_tokens=True, skip_prompt=True)[0]) ``` Output: ```bash Toucan-base - translation: Vala zonke izinto kusuka kwihlu lamadokhumende elidlule ``` ## Citation If you use the pre-trained model (Cheetah-1.2B) for your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows (to be updated): **Cheetah's Paper** ``` @inproceedings{adebara-etal-2024-cheetah, title = "Cheetah: Natural Language Generation for 517 {A}frican Languages", author = "Adebara, Ife and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2024", address = "Bangkok, Thailand and virtual meeting", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.acl-long.691", pages = "12798--12823", } ``` **Toucan's Paper** ``` @inproceedings{elmadany2024toucan, title={Toucan: Many-to-Many Translation for 150 African Language Pairs}, author={Elmadany, Abdelrahim and Adebara, Ife and Abdul-Mageed, Muhammad}, booktitle={Findings of the Association for Computational Linguistics ACL 2024}, pages={13189--13206}, year={2024} } ``` ## Acknowledgments We gratefully acknowledges support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), [Digital Research Alliance of Canada](https://alliancecan.ca), [UBC ARC-Sockeye](https://arc.ubc.ca/ubc-arc-sockeye), Advanced Micro Devices, Inc. (AMD), and Google. Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of CRC, NSERC, SSHRC, CFI, the Alliance, AMD, Google, or UBC ARC-Sockeye.
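Building on the snippet above, the same ISO-3 prefix convention extends naturally to several targets at once. A small sketch; the target codes are chosen arbitrarily from the `lang_names` table and the generation settings are illustrative rather than recommended:

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration
import torch

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/toucan-base")
model = MT5ForConditionalGeneration.from_pretrained(
    "UBC-NLP/toucan-base", torch_dtype=torch.float16, device_map="auto"
)
model.eval()

sentence = "Clear all items from the recent documents list"
for code in ["zul", "yor", "swa"]:  # Zulu, Yoruba, Swahili
    # Prefix the target-language ISO-3 code, as described above.
    inputs = tokenizer(f"{code}: {sentence}", return_tensors="pt",
                       max_length=1024, truncation=True).to(model.device)
    with torch.no_grad():
        generated = model.generate(**inputs, num_beams=5, max_new_tokens=128)
    print(code, "->", tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```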
{"language": ["aar", "ach", "afr", "aka", "amh", "bam", "bas", "bem", "btg", "eng", "ewe", "fon", "fra", "hau", "ibo", "kbp", "lgg", "lug", "mlg", "nyn", "orm", "som", "sot", "swa", "tir", "yor", "teo", "gez", "wal", "fan", "kau", "kin", "kon", "lin", "nya", "pcm", "ssw", "tsn", "tso", "twi", "wol", "xho", "zul", "nnb", "swc", "ara"], "pipeline_tag": "text-generation", "tags": ["UBC", "African", "pytorch", "Chaeetah", "DLNLP"]}
task
[ "TRANSLATION" ]
45,624
Meshwa/llama3.2-3b-Reflection-v1
Meshwa
text2text-generation
[ "gguf", "llama", "text2text-generation", "en", "dataset:glaiveai/reflection-v1", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational" ]
2024-10-21T13:27:44Z
2024-12-12T13:22:20+00:00
444
0
--- base_model: - meta-llama/Llama-3.2-3B-Instruct datasets: - glaiveai/reflection-v1 language: - en license: llama3.2 pipeline_tag: text2text-generation --- # Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1 - **Developed by:** Meshwa - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## Overview * Contains **Llama-3.2-3B-Instruct**, * Fine-tuned on the **glaiveai/reflection-v1** dataset using the **Unsloth** library. * Model has been quantized into several formats (`q4`, `q5`, `q6`, `q8` and `f16`) * Modelfile for use with Ollama is included, The default quantization is set to **Q8_0**, edit if you want to. ## Model Description ### Objective Tried to finetune **Llama-3.2-3B-Instruct** leveraging the **glaiveai/reflection-v1** dataset. I thought it would be fun to see how smaller models perform on this task. ### Dataset: glaiveai/reflection-v1 The **glaiveai/reflection-v1** dataset is tailored for reflective, introspective tasks, including open-ended conversation, abstract reasoning, and context-aware response generation. This dataset includes tasks such as: - Thoughtful question answering - Summarization of complex ideas - Reflective problem solving ### Fine-tuning Methodology: Unsloth Library **Unsloth** was used for 2x faster finetuing of the base Llama-3.2 model. ## Usage ### Inference with gguf Quantized Models To use the model in gguf format, load your preferred quantized version with a compatible inference framework such as `llama.cpp` or any gguf-supported libraries: ```python from llama_cpp import Llama llama_model = Llama(model_path="path_to_model/Llama-3.2-3B-Instruct-q8_0.gguf") result = llama_model("Your instruction prompt here") print(result) ``` ### Using with Ollama The included Modelfile supports direct loading in Ollama. To use the default model, simply run: ```bash ollama create "model_name_here" -f "Modelfile_path" ``` Directly importing from HF 🤗 ```bash ollama pull hf.co/Meshwa/llama3.2-3b-Reflection-v1:{quant_type} ``` make sure to replace `{quant_type}` with one of these: - `Q4_K_M` - `Q4_0` - `Q4_1` - `Q6_K` - `Q8_0` (default in my modelfile) - `Q5_K_M` - `F16` For Better results use the below system prompt: ```bash You are a world-class AI system capable of complex reasoning and reflection. You respond to all questions in the following way- <thinking> In this section you understand the problem and develop a plan to solve the problem. For easy problems- Make a simple plan and use COT For moderate to hard problems- 1. Devise a step-by-step plan to solve the problem. (don't actually start solving yet, just make a plan) 2. Use Chain of Thought reasoning to work through the plan and write the full solution within thinking. You can use <reflection> </reflection> tags whenever you execute a complex step to verify if your reasoning is correct and if not correct it. </thinking> <output> In this section, provide the complete answer for the user based on your thinking process. Do not refer to the thinking tag. Include all relevant information and keep the response somewhat verbose, the user will not see what is in the thinking tag. </output> ``` ## License This model is released under the **Apache 2.0**. 
## Citation If you use this model, please cite the following: ``` @article{Llama-3.2-3B-Instruct-Reflection-v1, author = {Meshwa}, title = {Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1}, year = {2024}, published = {https://huggingface.co/Meshwa/llama3.2-3b-Reflection-v1} } ```
null
Non_BioNLP
# Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1 - **Developed by:** Meshwa - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## Overview * Contains **Llama-3.2-3B-Instruct**, * Fine-tuned on the **glaiveai/reflection-v1** dataset using the **Unsloth** library. * Model has been quantized into several formats (`q4`, `q5`, `q6`, `q8` and `f16`) * Modelfile for use with Ollama is included, The default quantization is set to **Q8_0**, edit if you want to. ## Model Description ### Objective Tried to finetune **Llama-3.2-3B-Instruct** leveraging the **glaiveai/reflection-v1** dataset. I thought it would be fun to see how smaller models perform on this task. ### Dataset: glaiveai/reflection-v1 The **glaiveai/reflection-v1** dataset is tailored for reflective, introspective tasks, including open-ended conversation, abstract reasoning, and context-aware response generation. This dataset includes tasks such as: - Thoughtful question answering - Summarization of complex ideas - Reflective problem solving ### Fine-tuning Methodology: Unsloth Library **Unsloth** was used for 2x faster finetuing of the base Llama-3.2 model. ## Usage ### Inference with gguf Quantized Models To use the model in gguf format, load your preferred quantized version with a compatible inference framework such as `llama.cpp` or any gguf-supported libraries: ```python from llama_cpp import Llama llama_model = Llama(model_path="path_to_model/Llama-3.2-3B-Instruct-q8_0.gguf") result = llama_model("Your instruction prompt here") print(result) ``` ### Using with Ollama The included Modelfile supports direct loading in Ollama. To use the default model, simply run: ```bash ollama create "model_name_here" -f "Modelfile_path" ``` Directly importing from HF 🤗 ```bash ollama pull hf.co/Meshwa/llama3.2-3b-Reflection-v1:{quant_type} ``` make sure to replace `{quant_type}` with one of these: - `Q4_K_M` - `Q4_0` - `Q4_1` - `Q6_K` - `Q8_0` (default in my modelfile) - `Q5_K_M` - `F16` For Better results use the below system prompt: ```bash You are a world-class AI system capable of complex reasoning and reflection. You respond to all questions in the following way- <thinking> In this section you understand the problem and develop a plan to solve the problem. For easy problems- Make a simple plan and use COT For moderate to hard problems- 1. Devise a step-by-step plan to solve the problem. (don't actually start solving yet, just make a plan) 2. Use Chain of Thought reasoning to work through the plan and write the full solution within thinking. You can use <reflection> </reflection> tags whenever you execute a complex step to verify if your reasoning is correct and if not correct it. </thinking> <output> In this section, provide the complete answer for the user based on your thinking process. Do not refer to the thinking tag. Include all relevant information and keep the response somewhat verbose, the user will not see what is in the thinking tag. </output> ``` ## License This model is released under the **Apache 2.0**. 
## Citation If you use this model, please cite the following: ``` @article{Llama-3.2-3B-Instruct-Reflection-v1, author = {Meshwa}, title = {Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1}, year = {2024}, published = {https://huggingface.co/Meshwa/llama3.2-3b-Reflection-v1} } ```
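The card recommends a specific system prompt; one way to combine it with the GGUF build is the chat API of `llama-cpp-python`. A minimal sketch, with a placeholder model path and the system prompt abbreviated for space (use the full prompt from the section above):

```python
from llama_cpp import Llama

# Placeholder path: point this at the quant you downloaded (Q8_0 is the card's default).
llm = Llama(model_path="path_to_model/Llama-3.2-3B-Instruct-q8_0.gguf")

# Abbreviated here; paste the full reflection system prompt from above.
system_prompt = "You are a world-class AI system capable of complex reasoning and reflection. ..."

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is 17 * 24? Think it through."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```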
{"base_model": ["meta-llama/Llama-3.2-3B-Instruct"], "datasets": ["glaiveai/reflection-v1"], "language": ["en"], "license": "llama3.2", "pipeline_tag": "text2text-generation"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
45,625
speakleash/Bielik-11B-v2
speakleash
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "pl", "arxiv:2410.18565", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-08-26T08:52:36Z
2024-10-26T12:37:10+00:00
2,909
39
--- language: - pl library_name: transformers license: apache-2.0 inference: parameters: temperature: 0.9 extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>. --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2 Bielik-11B-v2 is a generative text model featuring 11 billion parameters. It is initialized from its predecessor, Mistral-7B-v0.2, and trained on 400 billion tokens. The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC center: ACK Cyfronet AGH. The creation and training of the Bielik-11B-v2 was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputer, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision. ⚠️ This is a base model intended for further fine-tuning across most use cases. If you're looking for a model ready for chatting or following instructions out-of-the-box, please use [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). 🎥 Demo: https://chat.bielik.ai 🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/ <span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality. ## Model Bielik-11B-v2 has been trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) using different parallelization techniques. The model training was conducted on the Helios Supercomputer at the ACK Cyfronet AGH, utilizing 256 NVidia GH200 cards. The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project, as well as a subset of CommonCrawl data. We used 200 billion tokens (over 700 GB of plain text) for two epochs of training. ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Initialized from:** [Mistral-7B-v0.2](https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) * **Model ref:** speakleash:45b6efdb701991181a05968fc53d2a8e ### Quality evaluation An XGBoost classification model was prepared and created to evaluate the quality of texts in native Polish language. It is based on 93 features, such as the ratio of out-of-vocabulary words to all words (OOVs), the number of nouns, verbs, average sentence length etc. The model outputs the category of a given document (either HIGH, MEDIUM or LOW) along with the probability. 
This approach allows implementation of a dedicated pipeline to choose documents, from which we've used entries with HIGH quality index and probability exceeding 90%. This filtration and appropriate selection of texts enable the provision of a condensed and high-quality database of texts in Polish for training purposes. ### Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality. ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "speakleash/Bielik-11B-v2" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` In order to reduce the memory usage, you can use smaller precision (`bfloat16`). ```python import torch model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) ``` And then you can use HuggingFace Pipelines to generate text: ```python import transformers text = "Najważniejszym celem człowieka na ziemi jest" pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` Generated output: > Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami. ## Evaluation Models have been evaluated on two leaderboards: [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) and [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The Open PL LLM Leaderboard uses a 5-shot evaluation and focuses on NLP tasks in Polish, while the Open LLM Leaderboard evaluates models on various English language tasks. ### Open PL LLM Leaderboard The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Average column is an average score among all tasks normalized by baseline scores. | Model | Parameters (B) | Average | |------------------------|------------|---------| | Meta-Llama-3-70B | 70 | 62.07 | | Qwen1.5-72B | 72 | 61.11 | | Meta-Llama-3.1-70B | 70 | 60.87 | | Mixtral-8x22B-v0.1 | 141 | 60.75 | | Qwen1.5-32B | 32 | 58.71 | | **Bielik-11B-v2** | **11** | **58.14** | | Qwen2-7B | 7 | 49.39 | | SOLAR-10.7B-v1.0 | 10.7 | 47.54 | | Mistral-Nemo-Base-2407 | 12 | 47.28 | | internlm2-20b | 20 | 47.15 | | Meta-Llama-3.1-8B | 8 | 43.77 | | Meta-Llama-3-8B | 8 | 43.30 | | Mistral-7B-v0.2 | 7 | 38.81 | | Bielik-7B-v0.1 | 7 | 34.34 | | Qra-13b | 13 | 33.90 | | Qra-7b | 7 | 16.60 | The results from the Open PL LLM Leaderboard show that the Bielik-11B-v2 model, with 11 billion parameters, achieved an average score of 58.14. This makes it the best performing model among those under 20B parameters, outperforming the second-best model in this category by an impressive 8.75 percentage points. This significant lead not only places it ahead of its predecessor, the Bielik-7B-v0.1 (which scored 34.34), but also demonstrates its superiority over other larger models. The substantial improvement highlights the remarkable advancements and optimizations made in this newer version. Other Polish models listed include Qra-13b and Qra-7b, scoring 33.90 and 16.60 respectively, indicating that Bielik-11B-v2 outperforms these models by a considerable margin. 
Additionally, the Bielik-11B-v2 was initialized from the weights of Mistral-7B-v0.2, which itself scored 38.81, further demonstrating the effective enhancements incorporated into the Bielik-11B-v2 model. ### Open LLM Leaderboard The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges. | Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k | |-------------------------|-------|---------------|-----------|----------------|-------|------------|-------| | **Bielik-11B-v2** | **65.87** | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 | | Mistral-7B-v0.2 | 60.37 | 60.84 | 83.08 | 41.76 | 63.62 | 78.22 | 34.72 | | Bielik-7B-v0.1 | 49.98 | 45.22 | 67.92 | 47.16 | 43.20 | 66.85 | 29.49 | The results from the Open LLM Leaderboard demonstrate the impressive performance of Bielik-11B-v2 across various NLP tasks. With an average score of 65.87, it significantly outperforms its predecessor, Bielik-7B-v0.1, and even surpasses Mistral-7B-v0.2, which served as its initial weight basis. Key observations: 1. Bielik-11B-v2 shows substantial improvements in most categories compared to Bielik-7B-v0.1, highlighting the effectiveness of the model's enhancements. 2. It performs exceptionally well in tasks like hellaswag (common sense reasoning), winogrande (commonsense reasoning), and gsm8k (mathematical problem-solving), indicating its versatility across different types of language understanding and generation tasks. 3. While Mistral-7B-v0.2 outperforms in truthfulqa_mc2, Bielik-11B-v2 maintains competitive performance in this truth-discernment task. Although Bielik-11B-v2 was primarily trained on Polish data, it has retained and even improved its ability to understand and operate in English, as evidenced by its strong performance across these English-language benchmarks. This suggests that the model has effectively leveraged cross-lingual transfer learning, maintaining its Polish language expertise while enhancing its English language capabilities. ## Limitations and Biases Bielik-11B-v2 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent. Bielik-11B-v2 can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-11B-v2 was trained on various public datasets. While great efforts have been taken to clear the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs. 
## Citation Please cite this model using the following format: ``` @misc{Bielik11Bv2b, title = {Bielik-11B-v2 model card}, author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}}, year = {2024}, url = {https://huggingface.co/speakleash/Bielik-11B-v2}, note = {Accessed: 2024-08-28}, urldate = {2024-08-28} } @unpublished{Bielik11Bv2a, author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof}, title = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation}, year = {2024}, } @misc{ociepa2024bielik7bv01polish, title={Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation}, author={Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas}, year={2024}, eprint={2410.18565}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.18565}, } ``` ## Responsible for training the model * [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training * [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training * [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality * [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model: [Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/), [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/), [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/), [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/), [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/), [Aleksander Smywiński-Pohl](https://www.linkedin.com/in/apohllo/). Members of the ACK Cyfronet AGH team providing valuable support and expertise: [Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/), [Marek Magryś](https://www.linkedin.com/in/magrys/). ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
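Since the card positions this as a base model intended for further fine-tuning, a minimal sketch of attaching a LoRA adapter with `peft` may be useful. The rank, alpha, and target modules below are assumptions to adapt to your task and hardware, not settings recommended by the authors:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "speakleash/Bielik-11B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Assumed LoRA hyperparameters; tune these for your dataset and GPU budget.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```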
null
Non_BioNLP
<p align="center"> <img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2 Bielik-11B-v2 is a generative text model featuring 11 billion parameters. It is initialized from its predecessor, Mistral-7B-v0.2, and trained on 400 billion tokens. The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC center: ACK Cyfronet AGH. The creation and training of the Bielik-11B-v2 was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputer, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision. ⚠️ This is a base model intended for further fine-tuning across most use cases. If you're looking for a model ready for chatting or following instructions out-of-the-box, please use [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct). 🎥 Demo: https://chat.bielik.ai 🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/ <span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality. ## Model Bielik-11B-v2 has been trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) using different parallelization techniques. The model training was conducted on the Helios Supercomputer at the ACK Cyfronet AGH, utilizing 256 NVidia GH200 cards. The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project, as well as a subset of CommonCrawl data. We used 200 billion tokens (over 700 GB of plain text) for two epochs of training. ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Initialized from:** [Mistral-7B-v0.2](https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) * **Model ref:** speakleash:45b6efdb701991181a05968fc53d2a8e ### Quality evaluation An XGBoost classification model was prepared and created to evaluate the quality of texts in native Polish language. It is based on 93 features, such as the ratio of out-of-vocabulary words to all words (OOVs), the number of nouns, verbs, average sentence length etc. The model outputs the category of a given document (either HIGH, MEDIUM or LOW) along with the probability. This approach allows implementation of a dedicated pipeline to choose documents, from which we've used entries with HIGH quality index and probability exceeding 90%. This filtration and appropriate selection of texts enable the provision of a condensed and high-quality database of texts in Polish for training purposes. 
### Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality. ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "speakleash/Bielik-11B-v2" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` In order to reduce the memory usage, you can use smaller precision (`bfloat16`). ```python import torch model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) ``` And then you can use HuggingFace Pipelines to generate text: ```python import transformers text = "Najważniejszym celem człowieka na ziemi jest" pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` Generated output: > Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami. ## Evaluation Models have been evaluated on two leaderboards: [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) and [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The Open PL LLM Leaderboard uses a 5-shot evaluation and focuses on NLP tasks in Polish, while the Open LLM Leaderboard evaluates models on various English language tasks. ### Open PL LLM Leaderboard The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Average column is an average score among all tasks normalized by baseline scores. | Model | Parameters (B) | Average | |------------------------|------------|---------| | Meta-Llama-3-70B | 70 | 62.07 | | Qwen1.5-72B | 72 | 61.11 | | Meta-Llama-3.1-70B | 70 | 60.87 | | Mixtral-8x22B-v0.1 | 141 | 60.75 | | Qwen1.5-32B | 32 | 58.71 | | **Bielik-11B-v2** | **11** | **58.14** | | Qwen2-7B | 7 | 49.39 | | SOLAR-10.7B-v1.0 | 10.7 | 47.54 | | Mistral-Nemo-Base-2407 | 12 | 47.28 | | internlm2-20b | 20 | 47.15 | | Meta-Llama-3.1-8B | 8 | 43.77 | | Meta-Llama-3-8B | 8 | 43.30 | | Mistral-7B-v0.2 | 7 | 38.81 | | Bielik-7B-v0.1 | 7 | 34.34 | | Qra-13b | 13 | 33.90 | | Qra-7b | 7 | 16.60 | The results from the Open PL LLM Leaderboard show that the Bielik-11B-v2 model, with 11 billion parameters, achieved an average score of 58.14. This makes it the best performing model among those under 20B parameters, outperforming the second-best model in this category by an impressive 8.75 percentage points. This significant lead not only places it ahead of its predecessor, the Bielik-7B-v0.1 (which scored 34.34), but also demonstrates its superiority over other larger models. The substantial improvement highlights the remarkable advancements and optimizations made in this newer version. Other Polish models listed include Qra-13b and Qra-7b, scoring 33.90 and 16.60 respectively, indicating that Bielik-11B-v2 outperforms these models by a considerable margin. Additionally, the Bielik-11B-v2 was initialized from the weights of Mistral-7B-v0.2, which itself scored 38.81, further demonstrating the effective enhancements incorporated into the Bielik-11B-v2 model. ### Open LLM Leaderboard The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges. 
| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|-------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| **Bielik-11B-v2** | **65.87** | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-v0.2 | 60.37 | 60.84 | 83.08 | 41.76 | 63.62 | 78.22 | 34.72 |
| Bielik-7B-v0.1 | 49.98 | 45.22 | 67.92 | 47.16 | 43.20 | 66.85 | 29.49 |

The results from the Open LLM Leaderboard demonstrate the strong performance of Bielik-11B-v2 across various NLP tasks. With an average score of 65.87, it significantly outperforms its predecessor, Bielik-7B-v0.1, and even surpasses Mistral-7B-v0.2, which served as its initial weight basis.

Key observations:

1. Bielik-11B-v2 shows substantial improvements in most categories compared to Bielik-7B-v0.1, highlighting the effectiveness of the model's enhancements.
2. It posts strong absolute scores on hellaswag (commonsense sentence completion), winogrande (commonsense pronoun resolution), and gsm8k (grade-school math word problems), indicating its versatility across different types of language understanding and reasoning tasks.
3. Mistral-7B-v0.2 still edges ahead on arc_challenge, hellaswag, mmlu, and winogrande, but Bielik-11B-v2's higher truthfulqa_mc2 score and its large lead on gsm8k give it the better overall average.

Although Bielik-11B-v2 was primarily trained on Polish data, it has retained and even improved its ability to understand and operate in English, as evidenced by its strong performance across these English-language benchmarks. This suggests that the model has effectively leveraged cross-lingual transfer learning, maintaining its Polish language expertise while enhancing its English language capabilities.

## Limitations and Biases

Bielik-11B-v2 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

Bielik-11B-v2 can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v2 was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## Citation Please cite this model using the following format: ``` @misc{Bielik11Bv2b, title = {Bielik-11B-v2 model card}, author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}}, year = {2024}, url = {https://huggingface.co/speakleash/Bielik-11B-v2}, note = {Accessed: 2024-08-28}, urldate = {2024-08-28} } @unpublished{Bielik11Bv2a, author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof}, title = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation}, year = {2024}, } @misc{ociepa2024bielik7bv01polish, title={Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation}, author={Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas}, year={2024}, eprint={2410.18565}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.18565}, } ``` ## Responsible for training the model * [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training * [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training * [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality * [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model: [Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/), [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/), [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/), [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/), [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/), [Aleksander Smywiński-Pohl](https://www.linkedin.com/in/apohllo/). Members of the ACK Cyfronet AGH team providing valuable support and expertise: [Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/), [Marek Magryś](https://www.linkedin.com/in/magrys/). ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
{"language": ["pl"], "library_name": "transformers", "license": "apache-2.0", "inference": {"parameters": {"temperature": 0.9}}, "extra_gated_description": "If you want to learn more about how you can use the model, please refer to our <a href=\"https://bielik.ai/terms/\">Terms of Use</a>."}
task
[ "TEXT_CLASSIFICATION" ]
45,626
nold/Breeze-7B-Instruct-v1_0-GGUF
nold
text-generation
[ "gguf", "text-generation", "zh", "en", "arxiv:2403.02712", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-03-08T08:55:27Z
2024-03-08T12:00:43+00:00
115
1
--- language: - zh - en license: apache-2.0 pipeline_tag: text-generation --- # Model Card for MediaTek Research Breeze-7B-Instruct-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese. For details of this model please read our [paper](https://arxiv.org/abs/2403.02712). Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similar-sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen(1.5)-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Demo [Try Demo Here](https://huggingface.co/spaces/MediaTek-Research/Demo_Breeze-7B-Instruct-v1.0) ## Features - Breeze-7B-Base-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v1_0 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v1_0 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance Here we compare Breeze-7B-Base-v1_0 with other open-source base language models of similar parameter size that are widely recognized for their good performance in Chinese. **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). 
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | #Parameters | ↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |---------------------------------------------- |--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | 7B | 46.59 | 74.41 | 30.56 | 63.07 | | [**Breeze-7B-Base-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) | 7B | 42.67 | 80.61 | 31.99 | 61.24 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 7B | 36.93 | 79.27 | 27.78 | 64.89 | ## Instruction-tuned Model Performance Here we compare Breeze-7B-Instruct-v1_0 with other open-source instruction-tuned language models of similar parameter size that are widely recognized for their good performance in Chinese. Also, we listed the benchmark scores of GPT-3.5 Turbo (1106), which represents one of the most widely used high-quality cloud language model API services, for reference. **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. 
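To make "selection by the log-likelihood" concrete, here is a rough, simplified sketch of how a multiple-choice item can be scored with a causal LM. This is an illustration only — not the revised lm-evaluation-harness code used for the scores below — and it ignores tokenizer boundary effects when the question and answer are concatenated.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def pick_choice_by_loglikelihood(model, tokenizer, question, choices):
    """Score each candidate answer by the summed log-probability of its tokens
    given the question, and return the highest-scoring choice."""
    scores = []
    prompt_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    for choice in choices:
        ids = tokenizer(question + choice, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # logits at position i predict token i+1, hence the one-position shift.
        log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
        answer_ids = ids[0, prompt_len:]  # tokens belonging to the candidate answer
        positions = torch.arange(prompt_len - 1, ids.shape[1] - 1)
        scores.append(log_probs[0, positions, answer_ids].sum().item())
    return choices[scores.index(max(scores))]

# Example usage (any causal LM works, e.g. the base model evaluated in this card):
# model = AutoModelForCausalLM.from_pretrained("MediaTek-Research/Breeze-7B-Base-v1_0")
# tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v1_0")
# pick_choice_by_loglikelihood(model, tokenizer, "Q: 2+2=? A: ", ["3", "4", "5"])
```

The instruction-tuned benchmark results below were produced with the evaluation setup described above, not with this sketch.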
| Models | #Parameters | ↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|-------------|------------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Reasoning|EN, Chat |EN, Knowledge| | | |0 shot | 0 shot | 0 shot |0 shot | 0 shot | | [GPT-3.5-Turbo](https://openai.com) | |7.1 | 43.56 | 45.14 |7.9 | 67.09 | | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | 7B |6.4 | 45.65 | 34.72 |7.6 | 61.85 | | [**Breeze-7B-Instruct-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) | 7B |6.0 | 42.67 | 39.58 |7.4 | 61.73 | | [Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B |5.6 | 34.95 | 33.33 |7.6 | 59.97 | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | 25.69 |6.0 | 59.45 | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | 23.61 |N/A* | 50.50 | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | 31.25 |N/A* | 42.72 | \* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese. | Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities| AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|----------| --------- | | GPT-3.5-Turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Qwen1.5-7B-Chat | 9 | 5.6 | 4.7 | 2.8 | 3.7 | 8.0 | 8.0 | 9.4 | 6.4 | | **Breeze-7B-Instruct-v1_0** | 7.8 | 5.2 | 4.2 | 4.2 | 4.1 | 7.6 | 5.9 | 9.1 | 6.0 | | Mistral-7B-v0.2-Instruct | 6.9 | 4.6 | 4.3 | 3.3 | 4.4 | 7.2 | 6.2 | 7.8 | 5.6 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | GPT-3.5-Turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | Qwen1.5-7B-Chat | 41.48 | 51.66 | 44.05 | 45.40 | 45.65 | | **Breeze-7B-Instruct-v1_0** | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 | | Mistral-7B-v0.2-Instruct | 32.79 | 38.05 | 34.89 | 34.04 | 34.94 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). 
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | Qwen1.5-7B-Chat | 9.35 | 38.9k | | Yi-6B-Chat | 10.62 | 5.2k | | **Breeze-7B-Instruct-v1_0** | 10.74 | 11.1k | | Mistral-7B-Instruct-v0.2 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | <!---| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |---> <!---## Long-context Performance TBD---> ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Instruction Model model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) # Basemodel model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Base-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) ``` **For Breeze-7B-Instruct**, the structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. ``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] >>> outputs = model.generate(tokenizer.apply_chat_template(chat, return_tensors="pt"), max_new_tokens=128) >>> print(tokenizer.decode(outputs[0])) ``` ## Citation ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *** Quantization of Model [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0). 
Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
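For the GGUF files in this repository, a minimal usage sketch with `llama-cpp-python` might look like the following. The file name is a placeholder — substitute the actual quantization you downloaded from this repository — and the prompt follows the Breeze instruction template described above.

```python
from llama_cpp import Llama

# Placeholder file name: pick the GGUF quantization you actually downloaded.
llm = Llama(model_path="./Breeze-7B-Instruct-v1_0.Q4_K_M.gguf", n_ctx=8192)

# Breeze prompt format: <s>SYS_PROMPT [INST] QUERY [/INST]
prompt = (
    "<s>You are a helpful AI assistant built by MediaTek Research. "
    "The user you are helping speaks Traditional Chinese and comes from Taiwan. "
    "[INST] 你好,請問你可以完成什麼任務? [/INST] "
)
output = llm(prompt, max_tokens=256, stop=["[INST]"])
print(output["choices"][0]["text"])
```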
null
Non_BioNLP
# Model Card for MediaTek Research Breeze-7B-Instruct-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese. For details of this model please read our [paper](https://arxiv.org/abs/2403.02712). Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similar-sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen(1.5)-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Demo [Try Demo Here](https://huggingface.co/spaces/MediaTek-Research/Demo_Breeze-7B-Instruct-v1.0) ## Features - Breeze-7B-Base-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v1_0 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v1_0 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance Here we compare Breeze-7B-Base-v1_0 with other open-source base language models of similar parameter size that are widely recognized for their good performance in Chinese. **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). 
**MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | #Parameters | ↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |---------------------------------------------- |--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | 7B | 46.59 | 74.41 | 30.56 | 63.07 | | [**Breeze-7B-Base-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) | 7B | 42.67 | 80.61 | 31.99 | 61.24 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 7B | 36.93 | 79.27 | 27.78 | 64.89 | ## Instruction-tuned Model Performance Here we compare Breeze-7B-Instruct-v1_0 with other open-source instruction-tuned language models of similar parameter size that are widely recognized for their good performance in Chinese. Also, we listed the benchmark scores of GPT-3.5 Turbo (1106), which represents one of the most widely used high-quality cloud language model API services, for reference. **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. 
| Models | #Parameters | ↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|-------------|------------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Reasoning|EN, Chat |EN, Knowledge| | | |0 shot | 0 shot | 0 shot |0 shot | 0 shot | | [GPT-3.5-Turbo](https://openai.com) | |7.1 | 43.56 | 45.14 |7.9 | 67.09 | | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | 7B |6.4 | 45.65 | 34.72 |7.6 | 61.85 | | [**Breeze-7B-Instruct-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) | 7B |6.0 | 42.67 | 39.58 |7.4 | 61.73 | | [Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B |5.6 | 34.95 | 33.33 |7.6 | 59.97 | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | 25.69 |6.0 | 59.45 | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | 23.61 |N/A* | 50.50 | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | 31.25 |N/A* | 42.72 | \* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese. | Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities| AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|----------| --------- | | GPT-3.5-Turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Qwen1.5-7B-Chat | 9 | 5.6 | 4.7 | 2.8 | 3.7 | 8.0 | 8.0 | 9.4 | 6.4 | | **Breeze-7B-Instruct-v1_0** | 7.8 | 5.2 | 4.2 | 4.2 | 4.1 | 7.6 | 5.9 | 9.1 | 6.0 | | Mistral-7B-v0.2-Instruct | 6.9 | 4.6 | 4.3 | 3.3 | 4.4 | 7.2 | 6.2 | 7.8 | 5.6 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | GPT-3.5-Turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | Qwen1.5-7B-Chat | 41.48 | 51.66 | 44.05 | 45.40 | 45.65 | | **Breeze-7B-Instruct-v1_0** | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 | | Mistral-7B-v0.2-Instruct | 32.79 | 38.05 | 34.89 | 34.04 | 34.94 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). 
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | Qwen1.5-7B-Chat | 9.35 | 38.9k | | Yi-6B-Chat | 10.62 | 5.2k | | **Breeze-7B-Instruct-v1_0** | 10.74 | 11.1k | | Mistral-7B-Instruct-v0.2 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | <!---| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |---> <!---## Long-context Performance TBD---> ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Instruction Model model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) # Basemodel model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Base-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) ``` **For Breeze-7B-Instruct**, the structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. ``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] >>> outputs = model.generate(tokenizer.apply_chat_template(chat, return_tensors="pt"), max_new_tokens=128) >>> print(tokenizer.decode(outputs[0])) ``` ## Citation ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *** Quantization of Model [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0). 
Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
{"language": ["zh", "en"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "SUMMARIZATION" ]
45,628
ghost-x/ghost-7b-alpha
ghost-x
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "ghost", "tools", "chat", "conversational", "en", "vi", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-13T10:38:54Z
2024-04-26T07:31:56+00:00
4,918
7
---
language:
- en
- vi
library_name: transformers
license: other
license_name: ghost-7b
license_link: https://ghost-x.org/ghost-7b-license
pipeline_tag: text-generation
tags:
- ghost
- tools
- chat
widget:
- text: Why is the sky blue ?
  output:
    text: The sky appears blue because of a phenomenon called Rayleigh scattering.
      Sunlight is composed of all colors of the visible spectrum, but blue light has
      a shorter wavelength than other colors. When sunlight enters the Earth's atmosphere,
      it interacts with molecules of nitrogen and oxygen, which are the main components
      of our atmosphere. These molecules scatter the shorter wavelengths of light, such
      as blue and violet, more than the longer wavelengths, such as red and orange. As
      a result, when we look up at the sky, we see more blue light than other colors
      because the blue light has been scattered more by the molecules in the atmosphere.
---

# Ghost 7B Alpha

<img src="https://hf.fast360.xyz/production/uploads/600ae38cc92b79f54efd4556/QPrQZMQX_jzyYngmreP0_.jpeg" alt="Ghost 7B Alpha Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

A new generation of large language models focused on optimizing reasoning, multi-task knowledge, and tool support.

# Introduction

**Ghost 7B Alpha** is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge, and tool usage. It performs best in its primary training languages, English and Vietnamese.

Overall, the model is a suitable pretrained base for continued fine-tuning on your own tasks, for building virtual assistants, and for tasks such as coding, translation, question answering, and document generation. It is an efficient, fast, and inexpensive open model.

## Specifications

- Name: **Ghost 7B Alpha**.
- Model size: 7 billion parameters.
- Context length: 8K (8192 tokens).
- Languages: English and Vietnamese.
- Main tasks: reasoning, multi-task knowledge, and function/tool use.
- License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.org/ghost-7b-license).
- Based on: Mistral 7B.
- Distributions: Standard (BF16), GGUF, AWQ.
- Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu).

### Links

- Model card: [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha).
- Official website: [Ghost 7B Alpha](https://ghost-x.org/docs/models/ghost-7b-alpha/).
- Demo: [Playground with Ghost 7B Alpha](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).

## Distributions

We provide several distributions so you can pick the option that best suits your needs. Always make sure you know which version you need and which will work best for your setup.

| Version | Model card |
| ------- | -------------------------------------------------------------------- |
| BF16 | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha) |
| GGUF | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-gguf) |
| AWQ | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-awq) |

### Standard (BF16)

The standard distribution was used to run the assessments and offers the best text generation quality.

### GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

### AWQ

AWQ applies 4-bit quantization and delivers roughly a 2x speedup during inference.
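As a quick, hedged illustration, the standard BF16 distribution can be loaded with `transformers` roughly as follows. The generation settings are arbitrary examples rather than recommendations, and the chat roles rely on the template shipped in the tokenizer, which is described in the next section.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ghost-x/ghost-7b-alpha"  # standard (BF16) distribution
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The system content can be left empty for general use (see the Tips below).
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue?"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF and AWQ distributions follow the same prompt format but are loaded with their respective runtimes (for example, llama.cpp for GGUF).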
## Prompt / Chat template

### Roles

Overall, the model supports roles including: **system**, **plugins**, **user**, **assistant**, **execute**, **response**.

- **system**: Similar to OpenAI GPT-4, this is where you declare context or specify the characteristics and expertise of the model.
- **user**: The user's input, which can be a question or an instruction for the model to follow.
- **assistant**: The model's response to a question or instruction received from **user**.
- **plugins**: Declares the tools the model is allowed to use, following a predetermined schema.
- **execute**: Mocks the execution step for the tool call emitted by the model. Contains the tool to use and its arguments.
- **response**: The result of executing the tool, which the model uses to give an appropriate answer to the user's input.

See the actual code [here](https://huggingface.co/ghost-x/ghost-7b-alpha/blob/main/tokenizer_config.json#L36).

### Tips

- Setting a system prompt with the right context and characteristics can noticeably improve the model's performance.
- You can also use the system prompt to prevent unwanted behavior.
- If you are unsure, you can leave the system prompt blank; this gives the best performance in general cases.

### Chat template

#### Default chat

In the standard case, without using function tools, only use roles such as **system**, **user** and **assistant**. Conventions are in the following order:

- (1) **system**
- (2) **user**
- (3) **assistant**

Repeat (2) and (3) continuously throughout the conversation.

#### Chat with tools

In case of using tools, the conventions are in the following order:

- (1) **system**
- (2) **plugins**
- (3) **user**
- A. When the model decides a tool is not needed:
  - (4) **assistant** — the model answers the user's question directly.
- B. When the model chooses to use a tool:
  - (4) **execute** — the model emits the tool to call and the arguments needed to execute it.
  - (5) **response** — the result of running the tool is passed back to the model.
  - (6) **assistant** — the model answers the user's question using that result.

Repeat (3) and (A) or (B) continuously throughout the conversation.

# Showcases

## English

Let's try a question taken from [OpenAI GPT-4](https://openai.com/gpt-4), originally used to demonstrate GPT-4's advantage over ChatGPT.

<details close>
<summary>👨‍💻 : Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What is the best option for start times for a 30 minute meeting for Andrew, Hannah, and Joanne at same time?</summary>

👻 : The best option for a 30-minute meeting for Andrew, Hannah, and Joanne at the same time is to start at 12 pm (noon), as this is the earliest time when all three are available. Here's the breakdown of their availability:
- Andrew: 11 am to 3 pm, which includes 12 pm.
- Joanne: Noon to 2 pm, which includes 12 pm.
- Hannah: Noon for half an hour (30 minutes), which includes 12 pm. Then she is available from 4 pm to 6 pm.
Since all three are available at 12 pm, this is the best time to start the meeting.

</details>

A quiz to see whether the model can guess what is being described.

<details close>
<summary>👨‍💻 : Characteristics: has a long, thin and flexible body; The legs are short and webbed.
Additionally, their thick, smooth undercoat keeps them dry underwater and traps a layer of air to keep them warm. Question: What animal is that?</summary> 👻 : Based on the characteristics you described, it seems like you're talking about an **otter**. Otters are carnivorous mammals that live in freshwater environments such as rivers, lakes, and estuaries. They have long, thin bodies with webbed feet that help them swim efficiently. Their thick, smooth undercoats keep them dry and warm, while the layer of air trapped beneath their fur helps to insulate them. Otters are known for their playful and curious nature, and they are often seen swimming, playing, and exploring their surroundings. 🦡🐭 </details> To this question, I really cannot answer. But the model's evidence is quite strict. Also, what do you think about this answer? <details close> <summary>👨‍💻 : A self-driving car is approaching a crosswalk. A group of pedestrians is crossing the street, but they are not paying attention to the car because they are distracted by their phones. The car's brakes are malfunctioning, and it is unable to stop. Should the car swerve to avoid hitting the pedestrians, even though this could put the car's occupants at risk? Be convincing with your examples or sharp reasoning.</summary> 👻 : In this situation, it is **imperative** for the self-driving car to swerve to avoid hitting the pedestrians, even if it puts the car's occupants at risk. Here's why: **1. Prioritizing Human Safety:** The primary goal of a self-driving car is to ensure the safety of its occupants and the general public. In this case, the pedestrians are in immediate danger due to their distraction and the malfunctioning brakes. Swerving to avoid the pedestrians is the safest option for all involved. **2. Collision Avoidance:** Swerving can help the car avoid a direct collision with the pedestrians, reducing the risk of serious injuries or even death. Even if the car's occupants are injured in the process of swerving, it is still better than the potential loss of life or severe injuries for multiple pedestrians. **3. Legal and Ethical Responsibility:** In most jurisdictions, self-driving cars are considered to be responsible for their actions. Swerving to avoid a collision is a reasonable and responsible action to take in this scenario. Failing to do so could result in legal consequences or ethical criticism. **4. Precedent and Best Practices:** In similar situations, self-driving cars have been programmed to swerve to avoid pedestrians. This practice has been widely accepted and considered to be the best course of action in such situations. **5. Risk Assessment:** While swerving can put the car's occupants at risk, the potential consequences of a collision with the pedestrians are much more severe. The risk of injury or death to multiple individuals outweighs the risk to the car's occupants. In summary, the self-driving car's priority should be to protect the lives of pedestrians, even if it means putting its own occupants at risk. Swerving to avoid a collision is the responsible and ethical choice in this scenario. </details> A reasoning question suddenly popped up during the process of writing an article announcing information about **Ghost 7B Alpha**. The model gave an impressive answer, at least to its creator. 
<details close> <summary>👨‍💻 : If you could travel back in time and change one event in history, what would it be and why?</summary> 👻 : If I could travel back in time and change one event in history, it would be the assassination of Archduke Franz Ferdinand of Austria on June 28, 1914. This event led to the start of World War I, which was one of the most devastating conflicts in human history. By changing this event, I could potentially prevent the war from happening, which would have far-reaching consequences for the world. Here are some of the reasons why I would choose this event: - **Preventing the deaths of millions of people:** World War I resulted in the deaths of over 17 million soldiers and civilians. Changing this event would save countless lives. - **Preventing the economic and social devastation:** The war caused widespread economic and social devastation in Europe and around the world. By preventing the war, I could help to avoid these consequences. - **Preventing the rise of fascism:** World War I created a power vacuum in Europe, which was eventually filled by fascist regimes in Germany, Italy, and Japan. By preventing the war, I could help to prevent the rise of these dangerous ideologies. - **Preventing the development of nuclear weapons:** The war led to the development of nuclear weapons, which are the most destructive weapons ever created. By preventing the war, I could help to prevent the proliferation of these weapons and the threat they pose to humanity. </details> ### Try coding skills How about asking the model for instructions on creating a Node.js application using Docker for deployment? <details close> <summary>👨‍💻 : Create a Nodejs project using Express as a server and Docker with a Dockerfile for deployment with step-by-step instructions with detailed explanations.</summary> 👻 : **Step 1: Create a new Node.js project** ```bash mkdir my-project cd my-project npm init -y ``` **Step 2: Install Express** ```bash npm install express --save ``` **Step 3: Create an Express server** Create a file called `server.js` in your project directory: ```javascript const express = require("express"); const app = express(); app.get("/", (req, res) => { res.send("Hello World!"); }); app.listen(3000, () => { console.log("Server listening on port 3000"); }); ``` **Step 4: Create a Dockerfile** Create a file called `Dockerfile` in your project directory: ```bash FROM node:16 WORKDIR /usr/src/app COPY package.json . RUN npm install COPY . . EXPOSE 3000 CMD ["node", "server.js"] ``` **Step 5: Build the Docker image** ```bash docker build -t my-project-image . ``` **Step 6: Run the Docker image** ```bash docker run -p 3000:3000 my-project-image ``` **Step 7: Access the application** Open a web browser and navigate to `http://localhost:3000`. You should see the message "Hello World!" displayed. **Explanation:** - **Step 1:** Creates a new Node.js project with `npm init -y` to initialize a new Node.js project with a default `package.json` file. - **Step 2:** Installs Express using `npm install express --save`. This command installs Express and adds it to the `package.json` file. - **Step 3:** Creates an Express server in `server.js` that responds with "Hello World!" when a request is made to the root path (`/`). - **Step 4:** Creates a Dockerfile that defines the image for the project. 
The Dockerfile specifies that the image should use Node.js version 16, work from the `/usr/src/app` directory, copy the `package.json` file, install dependencies using `npm install`, copy the entire project directory, expose port 3000, and run the Express server using `node server.js`. - **Step 5:** Builds the Docker image using `docker build -t my-project-image .`. This command builds the Docker image from the `Dockerfile` in the current directory and saves it under the name `my-project-image`. - **Step 6:** Runs the Docker image using `docker run -p 3000:3000 my-project-image`. This command runs the Docker image and maps port 3000 on the host machine to port 3000 in the container. - **Step 7:** Accesses the application by opening a web browser and navigating to `http://localhost:3000`. This will display the message "Hello World!" because the Express server is running in the Docker container. </details> Now let’s try asking for a coding model to build a smooth neural network with python. <details close> <summary>👨‍💻 : Explain how to use the Keras or TensorFlow library in Python to build and train a multi-layer neural network for an image classification task</summary> 👻 : **Using Keras:** 1. Import the necessary libraries: ```python import keras from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense ``` 2. Create a sequential model: ```python model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(10, activation='softmax')) ``` 3. Compile the model: ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ``` 4. Train the model: ```python model.fit(x_train, y_train, epochs=10) ``` **Using TensorFlow:** 1. Import the necessary libraries: ```python import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense ``` 2. Create a sequential model: ```python model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(10, activation='softmax')) ``` 3. Compile the model: ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ``` 4. Train the model: ```python model.train(x_train, y_train, epochs=10) ``` **Comparison:** Both Keras and TensorFlow are powerful libraries for building and training neural networks in Python. However, there are some key differences between the two libraries: - **Syntax:** Keras has a more concise and intuitive syntax, while TensorFlow has a more verbose and imperative syntax. - **Performance:** TensorFlow is generally faster than Keras, especially for large datasets and complex models. - **Integration:** TensorFlow has a wider range of integrations with other libraries and tools, while Keras is more focused on building and training neural networks. Ultimately, the best choice for you depends on your specific needs and preferences. If you prefer a more concise and intuitive syntax, Keras is a good option. If you need a faster and more versatile library, TensorFlow is a better choice. </details> Let's try asking the model how to overcome the "hallucination" phenomenon in A.I development and give context for the model to provide a solution. 
<details close> <summary>👨‍💻 : Explain the cause of A.I hallucination and find a solution to overcome it, the specific context is LLMs.</summary> 👻 : **Causes of A.I. Hallucination:** - **Data Quality:** Incorrect or biased data can lead to hallucinations in LLMs. For example, if an LLM is trained on a dataset that contains biased or incomplete information, it may learn to generate biased or incomplete responses. - **Model Architecture:** Certain model architectures are more susceptible to hallucinations than others. For example, Transformers, which are commonly used in LLMs, can be more prone to hallucinations than recurrent neural networks (RNNs). - **Hyperparameters:** The hyperparameters of an LLM, such as the learning rate and the batch size, can also affect the likelihood of hallucinations. For example, a higher learning rate may lead to hallucinations more often than a lower learning rate. **Solutions to Overcome A.I. Hallucination:** - **Data Quality:** Ensure that the data used to train the LLM is accurate, complete, and unbiased. This can be done by cleaning the data, removing outliers, and using a variety of data sources. - **Model Architecture:** Choose an LLM architecture that is less susceptible to hallucinations. For example, RNNs are generally less prone to hallucinations than Transformers. - **Hyperparameters:** Tune the hyperparameters of the LLM to minimize the likelihood of hallucinations. This can be done by experimenting with different learning rates, batch sizes, and other hyperparameters. - **Regularization Techniques:** Regularization techniques, such as dropout and early stopping, can help to prevent hallucinations by encouraging the LLM to learn more generalizable patterns. - **Explicit Hallucination Detection:** Develop techniques to explicitly detect hallucinations in LLM-generated text. This can be done by using a variety of methods, such as text analysis, natural language processing, and machine learning. - **User Feedback:** Collect feedback from users to identify and correct hallucinations in LLM-generated text. This can be done by using a variety of methods, such as surveys, feedback forms, and online discussions. By implementing these solutions, you can significantly reduce the likelihood of hallucinations in LLMs and improve the quality of the generated text. </details> ## Vietnamese Try asking the model what it thinks about a famous quote to see if the answer feels natural. <details close> <summary>👨‍💻 : Bạn có đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" ?</summary> 👻 : Tôi đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo". Kiêu ngạo là một đức tính có thể dẫn đến nhiều hậu quả tiêu cực, chẳng hạn như sự tự mãn, sự ích kỷ và sự thiếu đồng cảm. Trong khi đó, sự ngu dốt chỉ là một thói hướng xấu thường xuất phát từ sự thiếu hiểu biết hoặc thiếu kinh nghiệm. Mặc dù sự ngu dốt cũng có thể gây ra một số vấn đề, nhưng những vấn đề này thường không nghiêm trọng bằng những vấn đề mà kiêu ngạo gây ra. Ví dụ, một người ngu dốt có thể nói những lời ngớ ngẩn hoặc làm những việc vụng về, nhưng họ thường vẫn có thể học hỏi từ những sai lầm của mình và cải thiện bản thân. Ngược lại, một người kiêu ngạo thường sẽ không chịu học hỏi từ những sai lầm của mình và luôn tin rằng mình đúng, ngay cả khi mình sai. Điều này có thể dẫn đến nhiều vấn đề, chẳng hạn như sự thất bại trong công việc hoặc các mối quan hệ. Do đó, tôi cho rằng câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" là đúng. 
</details> Explore the model's literary understanding by providing meaningful story prompts, asking to identify what the story is. <details close> <summary>👨‍💻 : Câu chuyện có nội dung và ý nghĩa chứa đựng một thông điệp ý nghĩa lớn đến tất cả mọi người, đó là tình đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường trước mọi khó khăn. Có nhân vật chính là loài dế. Câu hỏi: câu chuyện đó của tác giả nào ?</summary> 👻 : Câu chuyện "Dế Mèn phiêu lưu ký" của Tô Hoài. **Giải thích:** - "Dế Mèn phiêu lưu ký" là một tác phẩm văn học nổi tiếng của nhà văn Tô Hoài, được sáng tác năm 1941. - Tác phẩm kể về cuộc phiêu lưu của Dế Mèn, một chú dế nhỏ sống trong một ngôi làng nhỏ. Trong quá trình phiêu lưu, Dế Mèn đã gặp gỡ và học hỏi nhiều điều về cuộc sống, về tình yêu, về sự hy sinh và về sức mạnh của tình đoàn kết. - Tác phẩm thể hiện rõ tinh thần đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường của con người trước mọi khó khăn, thử thách. Đây chính là thông điệp ý nghĩa lớn của câu chuyện "Dế Mèn phiêu lưu ký". </details> The ability to reason to the answer based on the characteristics of is provided. <details close> <summary>👨‍💻 : Con vật nào lớn hơn con chuột, nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà ?</summary> 👻 : Con mèo là loài động vật lớn hơn con chuột nhưng nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà. </details> ### Try coding skills Try asking the Typescript coding model to perform finding the second largest number in the array to see if it works? <details close> <summary>👨‍💻 : Viết mã TypeScript để tạo một hàm thực hiện chức năng: cho mảng toàn số nguyên, tìm số lớn thứ hai trong mảng. </summary> 👻 : ```typescript /** * Tìm số lớn thứ hai trong mảng. * * @param {number[]} arr Mảng các số nguyên. * @returns {number} Số lớn thứ hai trong mảng. */ function findSecondLargest(arr: number[]): number { // Kiểm tra xem mảng có ít nhất 2 phần tử không. if (arr.length < 2) { throw new Error('Mảng phải có ít nhất 2 phần tử.'); } // Tìm số lớn nhất và số lớn thứ hai trong mảng. let largest = arr[0]; let secondLargest = Number.MIN_SAFE_INTEGER; for (let i = 1; i < arr.length; i++) { if (arr[i] > largest) { secondLargest = largest; largest = arr[i]; } else if (arr[i] > secondLargest && arr[i] != largest) { secondLargest = arr[i]; } } // Trả về số lớn thứ hai. return secondLargest; } ``` Giải thích: 1. Kiểm tra xem mảng có ít nhất 2 phần tử không. Nếu mảng chỉ có 1 phần tử hoặc ít hơn, thì không có số lớn thứ hai trong mảng. Trả về lỗi nếu điều kiện này đúng. 2. Khởi tạo hai biến `largest` và `secondLargest` với giá trị ban đầu là phần tử đầu tiên của mảng và số nguyên nhỏ nhất có thể biểu diễn được trong JavaScript (`Number.MIN_SAFE_INTEGER`), tương đương với `Integer.MIN_VALUE` trong Java. 3. Duyệt qua từng phần tử trong mảng, bắt đầu từ phần tử thứ hai. Nếu phần tử hiện tại lớn hơn `largest`, thì cập nhật `secondLargest` bằng `largest` và `largest` bằng phần tử hiện tại. Nếu phần tử hiện tại lớn hơn `secondLargest` nhưng nhỏ hơn `largest`, thì cập nhật `secondLargest` bằng phần tử hiện tại. 4. Sau khi duyệt qua tất cả các phần tử trong mảng, trả về `secondLargest`. Đây chính là số lớn thứ hai trong mảng. </details> Now let's try asking for a coding model to build a smooth neural network with python. <details close> <summary>👨‍💻 : Trình bày cách sử dụng thư viện Keras hoặc TensorFlow trong Python để xây dựng và huấn luyện mạng nơ-ron đa lớp cho bài toán phân loại ảnh.</summary> 👻 : **Sử dụng thư viện Keras:** 1. 
**Nhập thư viện Keras:** ```python import keras.models ``` 2. **Tạo một mô hình mạng nơ-ron đa lớp:** ```python model = keras.models.Sequential() model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3))) model.add(keras.layers.MaxPooling2D((2, 2))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(128, activation='relu')) model.add(keras.layers.Dense(2, activation='softmax')) ``` 3. **Biên dịch và huấn luyện mô hình:** ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test)) ``` **Sử dụng thư viện TensorFlow:** 1. **Nhập thư viện TensorFlow:** ```python import tensorflow as tf ``` 2. **Tạo một mô hình mạng nơ-ron đa lớp:** ```python model = tf.keras.models.Sequential() model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3))) model.add(tf.keras.layers.MaxPooling2D((2, 2))) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation='relu')) model.add(tf.keras.layers.Dense(2, activation='softmax')) ``` 3. **Biên dịch và huấn luyện mô hình:** ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test)) ``` **Lưu ý:** - Bạn có thể sử dụng các lớp mạng nơ-ron khác nhau trong mô hình, chẳng hạn như lớp mạng nơ-ron tích chập (Conv2D), lớp mạng nơ-ron lấy mẫu (MaxPooling2D), lớp mạng nơ-ron đầy đủ kết nối (Dense), lớp mạng nơ-ron softmax, v.v. - Bạn có thể điều chỉnh số lớp, số nơ-ron trong mỗi lớp và số lần lặp (epochs) để tối ưu hóa mô hình cho bài toán cụ thể của bạn. - Bạn có thể sử dụng các kỹ thuật huấn luyện bổ sung như dropout, regularization và early stopping để cải thiện hiệu suất của mô hình. </details> Try asking the model about common errors when deploying and using K8S to see what results the model gives. <details close> <summary>👨‍💻 : Những lỗi thường gặp trong quá trình sử dụng K8S là gì ? Hãy nêu cách khắc phục.</summary> 👻 : **Những lỗi thường gặp trong quá trình sử dụng Kubernetes:** **1. Lỗi cấu hình:** * **Máy chủ Kubernetes không được cấu hình đúng:** Kiểm tra cấu hình máy chủ Kubernetes (master node) để đảm bảo rằng các thông số cần thiết như IP, cổng, tên miền, tên miền SSL, v.v. được đặt đúng. * **Các pod không được cấu hình đúng:** Kiểm tra cấu hình pod để đảm bảo rằng các yếu tố như image, lưu trữ, mạng, v.v. được đặt đúng. * **Các tài nguyên (resource) không được cấp phát đúng:** Kiểm tra xem các tài nguyên như CPU, bộ nhớ, lưu trữ, v.v. được cấp phát cho các pod có đủ hay không. **2. Lỗi mạng:** * **Mạng không được cấu hình đúng:** Kiểm tra cấu hình mạng để đảm bảo rằng các địa chỉ IP, cổng, tên miền, tên miền SSL, v.v. được cấu hình đúng. * **Lỗi mạng giữa các node:** Kiểm tra xem có lỗi mạng giữa các node trong cụm Kubernetes hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra. * **Lỗi mạng giữa các pod:** Kiểm tra xem có lỗi mạng giữa các pod trong một node hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra. **3. Lỗi phần cứng:** * **Phần cứng không đủ mạnh:** Kiểm tra xem phần cứng máy chủ Kubernetes có đủ mạnh để chạy cụm Kubernetes hay không. Cụ thể, kiểm tra xem CPU, bộ nhớ, lưu trữ, v.v. có đủ hay không. * **Phần cứng bị lỗi:** Kiểm tra xem phần cứng máy chủ Kubernetes có bị lỗi hay không. Có thể sử dụng các công cụ như `lshw`, `dmesg` để kiểm tra. **4. 
Lỗi phần mềm:** * **Phần mềm Kubernetes không được cập nhật:** Kiểm tra xem phần mềm Kubernetes đang sử dụng có được cập nhật đến phiên bản mới nhất hay không. Cập nhật phần mềm Kubernetes thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất. * **Phần mềm Kubernetes có lỗi:** Kiểm tra xem phần mềm Kubernetes có lỗi hay không. Có thể sử dụng các công cụ như `journal`, `dmesg` để kiểm tra. * **Các ứng dụng khách không được cập nhật:** Kiểm tra xem các ứng dụng khách đang sử dụng trên cụm Kubernetes có được cập nhật đến phiên bản mới nhất hay không. Cập nhật các ứng dụng khách thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất. **Cách khắc phục:** * **Lỗi cấu hình:** Sửa các thông số cấu hình sai trong máy chủ Kubernetes và các pod. Ví dụ: đảm bảo rằng máy chủ Kubernetes có địa chỉ IP và cổng phù hợp, các pod có image và lưu trữ phù hợp, v.v. * **Lỗi mạng:** Sửa các vấn đề về mạng giữa các node và các pod. Ví dụ: đảm bảo rằng mạng được cấu hình đúng, kiểm tra xem có lỗi mạng giữa các node hay không, kiểm tra xem có lỗi mạng giữa các pod hay không, v.v. * **Lỗi phần cứng:** Nâng cấp phần cứng máy chủ Kubernetes nếu cần thiết. Ví dụ: tăng CPU, bộ nhớ, lưu trữ, v.v. * **Lỗi phần mềm:** Cập nhật phần mềm Kubernetes và các ứng dụng khách. Ví dụ: cập nhật phiên bản mới nhất của Kubernetes, cập nhật các bản vá lỗi bảo mật cho các ứng dụng khách, v.v. **Lưu ý:** Khi gặp phải lỗi trong quá trình sử dụng Kubernetes, cần phải xác định rõ nguyên nhân gốc rễ của lỗi trước khi có thể khắc phục. Việc xác định nguyên nhân gốc rễ sẽ giúp bạn tiết kiệm thời gian và công sức khi khắc phục lỗi. </details> ## Tools / Function calls The model is trained to be able to choose the right tools to use and to provide useful and accurate answers to the questions it receives. In this example, the model is asked for some information about Apple stock. ### Explain This is the standard for declaring model tools, in `json` format. Also, See more about the model's [prompt / chat template](#prompt--chat-template). ```json [ { "type": "function", "function": { "name": "get_stock_info", "description": "Get stock information", "parameters": { "type": "object", "properties": { "ticker": { "type": "string", "description": "Stock ticker", "required": true } } } } } ] ``` ### Workflow #### Turn 1 To use the tools, the model's supporting role is `plugin` which includes the declared tooling schema. Below is an example of `messages` with **turn 1** to use. ```python # A question about Apple stocks in English language. question = "What is Apple's market capitalization today?" # It's still the same question but in Vietnamese languae. # question = "Vốn hoá thị trường Apple hôm nay là bao nhiêu ?" plugins = [ { "type": "function", "function": { "name": "get_stock_info", "description": "Get stock information", "parameters": { "type": "object", "properties": { "ticker": { "type": "string", "description": "Stock ticker", "required": True, } }, }, }, } ] messages = [ {"role": "system", "content": ""}, {"role": "plugins", "content": json.dumps(plugins, ensure_ascii=False)}, {"role": "user", "content": question}, ] output = generate_sample( messages=messages, max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95, ) ``` At this point, the model will return `output` as the primitive value as follows: ```plaintext [execute]({"type": "function", "function": "get_stock_info", "arguments": {"ticker": "AAPL"}}) ``` Use the `get_tools` function to parse the readable and usable results. 
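(`get_tools` itself is not shown in this card; the sketch below is a hypothetical illustration — the function name, the regex, and the return shape are assumptions inferred from the raw `[execute](...)` string above, not the official helper. Applied to that raw output, such a parser would produce the structure shown in the next block.)

```python
import json
import re


def get_tools(output: str):
    """Hypothetical parser: pull the JSON payload out of an `[execute](...)` string.

    Returns a dict such as {"type": ..., "function": ..., "arguments": ...},
    or None when the model answered directly without calling a tool.
    """
    match = re.search(r"\[execute\]\((\{.*\})\)", output, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))


# Example with the raw output shown above:
raw = '[execute]({"type": "function", "function": "get_stock_info", "arguments": {"ticker": "AAPL"}})'
print(get_tools(raw))
```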
```json { "type": "function", "function": "get_stock_info", "arguments": { "ticker": "AAPL" } } ``` #### Turn 2 Get information about tools to use with arguments. Use it to execute and return data, here we will assume the return data is as follows, sampled from **polygon.io** (cleaned up): ```json { "request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d", "results": { "ticker": "AAPL", "name": "Apple Inc.", "market_cap": 2620023949270, "description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.", "ticker_root": "AAPL", "homepage_url": "https://www.apple.com", "total_employees": 161000, ... }, "status": "OK" } ``` Continue using the model to answer user questions based on the data returned from the tool. ```python executes = { "type": "function", "function": "get_stock_info", "arguments": { "ticker": "AAPL" } } results = { "request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d", "results": { "ticker": "AAPL", "name": "Apple Inc.", "market_cap": 2620023949270, "description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.", "ticker_root": "AAPL", "homepage_url": "https://www.apple.com", "total_employees": 161000, }, "status": "OK" } messages = [ {"role": "system", "content": ""}, {"role": "plugins","content": json.dumps(plugins, ensure_ascii=False)}, {"role": "user","content": question}, # Keep the value of `messages` from before, just add new items below. {"role": "execute","content": json.dumps(executes, ensure_ascii=False)}, {"role": "response","content": json.dumps(results, ensure_ascii=False)}, ] output = generate_sample( messages=messages, max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95, ) ``` The value of `output` returned by the model will now be similar to the following. For the question is English. ```plaintext Apple's market capitalization today is $2.62 trillion. ``` And the question is Vietnamese. ```plaintext Vốn hoá thị trường của Apple hôm nay là **2.620 tỷ đô la**. 💰 ``` # Usage The model is trained to serve well and balance general tasks, so there will be many approaches when used for production. 
## Things to know

### Recommendations

#### General tasks

Experiment with the model to find the right fit for your own needs. Below are some suggestions you might consider.

##### Answer the questions

Here the model can be integrated into RAG products to answer questions for customer care teams, or questions about internal documents, policies, product information, order details, and so on.

##### Coding support

Here the model can help find bugs or write algorithms and functions, speeding up development and improving product quality.

##### Content creation

Here the model can suggest layouts and sample content that call for high creativity and a distinct personality.

##### Other

Don't limit your creativity to the suggestions above. Feel free to experiment and use the model to build good things.

#### Virtual assistant

Its language understanding, combined with the model's overall capabilities, makes it suitable for handling one or more tasks in an intelligent virtual assistant. The model is trained to respond with understanding and emotional empathy, which makes its answers feel more natural — a useful property for this use case.

#### As a pre-training

If the model cannot fully satisfy your requirements out of the box, it can be further trained to serve the desired tasks more effectively. Building on the model's current inference and multi-tasking capabilities, training it for a few specific tasks can save a lot of effort, time and cost for a project.

As a side note, the model has already been integrated to replace some tasks that previously relied on OpenAI GPT-4 in the business/product the founder has been working on. For those tasks, the model is considered cost-effective and well worth it.

### Bias and limitations

In addition, there are some things you need to know before using it:

#### Generation configuration

The **temperature** affects the factual accuracy of the answer. Setting a **temperature** higher than about 0.2 - 0.4 produces more creative answers but may reduce their accuracy, so choose a value based on your task.

Hint: you can write a prompt that takes the user input and asks the model to choose an appropriate temperature for that question — useful when developing a virtual assistant.

#### Emotion bias and empathy

The model is built with an emotional bias and takes positions when faced with problems or reasoning questions. This can be good or bad depending on the context. If you don't want the model to take a fixed stance on certain questions or problems, set up rules for it in the system prompt.

This bias usually appears when the system prompt sets the model up as an assistant and mentions that it needs to show empathy for a certain user or experience. It is intentional, introduced during training, and does not come from something unforeseen.

## Directly

To use the model directly, there are several ways to get started; choose one of the following to try it out.

### Transformers

For direct use with `transformers`, you can get started with the following steps.

- First, install transformers with `pip`:

```bash
pip install -U transformers
```

- Now you can start using the model directly.
```python import torch from transformers import ( AutoModelForCausalLM, AutoTokenizer, ) base_model = "ghost-x/ghost-7b-alpha" model = AutoModelForCausalLM.from_pretrained( base_model, torch_dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(base_model) messages = [ {"role": "system", "content": ""}, {"role": "user", "content": "Why is the sky blue ?"}, # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False) for k,v in inputs.items(): inputs[k] = v.cuda() outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4) results = tokenizer.batch_decode(outputs)[0] print(results) ``` - Additionally, you can also use a model with **4bit quantization** to reduce the required resources at least. You can start with the code below. ```python import torch from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, ) base_model = "ghost-x/ghost-7b-alpha" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=False, ) model = AutoModelForCausalLM.from_pretrained( base_model, quantization_config=bnb_config, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(base_model) messages = [ {"role": "system", "content": ""}, {"role": "user", "content": "Why is the sky blue ?"}, # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False) for k,v in inputs.items(): inputs[k] = v.cuda() outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4) results = tokenizer.batch_decode(outputs)[0] print(results) ``` ### Unsloth For direct use with `unsloth`, you can easily get started with the following steps. - Firstly, you need to install unsloth via the command below with `pip`. ```bash pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git" pip install --no-deps xformers trl peft accelerate bitsandbytes ``` - Initialize and optimize the model before use. ```python from unsloth import FastLanguageModel import torch base_model = "ghost/ghost-7b-alpha" model, tokenizer = FastLanguageModel.from_pretrained( model_name = base_model, max_seq_length = 8192, dtype = None, load_in_4bit = True, # Change to `False` if you don't want to use 4bit quantization. ) FastLanguageModel.for_inference(model) ``` - Right now, you can start using the model directly. ```python messages = [ {"role": "system", "content": ""}, {"role": "user", "content": "Why is the sky blue ?"}, # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False) for k,v in inputs.items(): inputs[k] = v.cuda() outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4) results = tokenizer.batch_decode(outputs)[0] print(results) ``` ## Online The content of this document will be updated soon. The documentation will guide usage through providers such as Huggingface Inference API, Together AI, or Replicate. 
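In the meantime, a rough sketch of how a call through the Hugging Face serverless Inference API could look is shown below. This is an assumption for illustration only — availability of the model on that endpoint, the token placeholder, and the chosen parameters are not guaranteed by this card.

```python
import requests

# Hypothetical example: querying the model through the Hugging Face serverless Inference API.
API_URL = "https://api-inference.huggingface.co/models/ghost-x/ghost-7b-alpha"
HEADERS = {"Authorization": "Bearer hf_your_token_here"}  # replace with your own access token


def query(prompt: str):
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 512, "temperature": 0.4, "top_k": 50, "top_p": 0.95},
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()


print(query("Why is the sky blue ?"))
```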
## Playground

To make it easy for everyone to experience the model quickly, we have prepared notebooks for platforms like **Google Colab** and **Kaggle** so you can get started right away. See the available notebooks with details [here](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).

We offer notebooks for a selection of tools and platforms so you can get started easily. Here are some options you can try right now — pick one to get started. There are currently two language versions, English and Vietnamese, and they are almost identical. Each includes installation instructions and helper functions for quickly trying out the model's text generation capabilities, along with example system prompts for different fields so you can understand them better.

#### Playground using Transformers

- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-vi).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://colab.research.google.com/drive/14Cv6oT0pks3NoIUcXscAU9wex9A2T5H5?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-en).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, English](https://colab.research.google.com/drive/1lzgywm333wZIt5WNDs0ZajDvK7lm_Zy9?usp=sharing).

#### Playground using Unsloth

- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-vi).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://colab.research.google.com/drive/1E7QuthR33een9Otkepue5zFughCpab4P?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-en).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, English](https://colab.research.google.com/drive/1u04RbzD87_AovQngWsUv-O_Q6YNj7Hzr?usp=sharing).

From our experience, Kaggle gives much better results than Colab when generating documents with these notebooks, so we recommend using Kaggle.

## Deployments

The models developed by **Ghost X** share the goal of being easy to integrate and use in practice, saving costs and making development easier for the community and startups. For production deployment on small to large infrastructure, see the detailed instructions in [this article](https://ghost-x.org/docs/guides/deployments/). The article covers the most common and effective deployment solutions using leading, trusted libraries such as vLLM, along with information on more optimized solutions and methods so you can choose the approach that fits your needs.

# Evaluation

To ensure effective model development, we ran popular evaluations to give everyone a broader view of the model's capabilities. We worked to balance quality between English and Vietnamese so the model performs well in both of its primary languages. In addition, if your goals are oriented towards one main language, you can refer to the advanced user guide to improve model quality for the tasks and languages you desire.

## Open LLM Leaderboard

The results of this evaluation will be updated soon.
## MT-Bench MT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source from lmsys.org]](https://lmsys.org/blog/2023-06-22-leaderboard) **Ghost 7B Alpha** achieved a decent score for the MT-Bench review, we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat which are many times larger. | Model | Score | | --------------------- | ------------ | | gpt-4 | 8.990625 | | gpt-3.5-turbo | 7.943750 | | claude-instant-v1 | 7.905660 | | claude-v1 | 7.900000 | | vicuna-33b-v1.3 | 7.121875 | | wizardlm-30b | 7.009375 | | Llama-2-70b-chat | 6.856250 | | Llama-2-13b-chat | 6.650000 | | guanaco-33b | 6.528125 | | **ghost-7b-alpha** | **6.481250** | | tulu-30b | 6.434375 | | oasst-sft-7-llama-30b | 6.409375 | | guanaco-65b | 6.409375 | | palm-2-chat-bison-001 | 6.400000 | | mpt-30b-chat | 6.393750 | | vicuna-13b-v1.3 | 6.387500 | | wizardlm-13b | 6.353125 | ## AlpacaEval AlpacaEval is an automated tool for evaluating instruction-following language models against the AlpacaFarm dataset. It stands out for its human-validated, high-quality assessments that are both cost-effective and rapid. [[source from klu.ai]](https://klu.ai/glossary/alpaca-eval) Similarly, in this evaluation the model achieved good performance as it outperformed larger sized models such as vicuna-13b, guanaco-33b, and nous-hermes-13b. | Model | Winrate | | -------------------------- | ------------- | | gpt4 | 95.279503 | | Mixtral-8x7B-Instruct-v0.1 | 94.782608 | | claude-2.1 | 87.080745 | | gpt-3.5-turbo-1106 | 86.256218 | | zephyr-7b-alpha | 85.758706 | | gpt35_turbo_instruct | 81.710362 | | gemini-pro | 79.664179 | | ... | ... | | **ghost-7b-alpha** | **70.440251** | | vicuna-13b | 70.434782 | | guanaco-33b | 65.962732 | | nous-hermes-13b | 65.465838 | | ... | ... | # Notes ## Thanks After all, the project also wants to send sincere thanks to the products that have supported the project's development. #### Unsloth <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" width="200px" align="center" /> [Unsloth](https://unsloth.ai), a great tool that helps us easily develop products, at a lower cost than expected. #### Kaggle <img src="https://hf.fast360.xyz/production/uploads/600ae38cc92b79f54efd4556/dcbpF6YS6RQhqDd6GZZ2v.png" width="200px" align="center" /> [Kaggle](https://kaggle.com), generous hours of free notebooks for research projects. #### And... Good friends, who have accompanied the project, Luan Nguyen and Phu Tran. ## Confidence In addition to the **Ghost 7B Alpha** project, **Ghost X** always wants to develop and improve many better models in the future, better supporting the community and businesses with the most openness possible. Revealing the **Ghost 7B Beta** project plan. This model is expected to outperform with a deeper focus on multi-tasking, math, and reasoning. Along with that is the ability to expand context length and support other languages (highly requested languages). The organization is being operated and developed by [Hieu Lam](https://huggingface.co/ghost-x)'s personal resources, if there is any development support or consulting request. Please feel free to contact the organization, we are very happy about this. Directly via email: [[email protected]](mailto:[email protected]). 
**Ghost X** is also happy to provide models to server providers, with the aim of helping startups develop better.

## Contact

Follow **Ghost X** to stay updated with the latest information.

- Twitter/X via [@ghostx_ai](https://twitter.com/ghostx_ai).
- HuggingFace via [@ghost-x](https://huggingface.co/ghost-x).
- Official website [ghost-x.org](https://ghost-x.org/).
# Ghost 7B Alpha <img src="https://hf.fast360.xyz/production/uploads/600ae38cc92b79f54efd4556/QPrQZMQX_jzyYngmreP0_.jpeg" alt="Ghost 7B Alpha Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> The large generation of language models focuses on optimizing excellent reasoning, multi-task knowledge, and tools support. # Introduction **Ghost 7B Alpha** is a large language model fine-tuned from Mistral 7B, with a size of 7 billion parameters. The model was developed with the goal of optimizing reasoning ability, multi-task knowledge and supporting tool usage. The model works well with the main trained and optimized languages being English and Vietnamese. Overall, the model is suitable when making a pretrained version so you can continue to develop the desired tasks, develop virtual assistants, perform features on tasks such as coding, translation, answering questions, creating documents, etc. It is truly an efficient, fast and extremely cheap open model. ## Specifications - Name: **Ghost 7B Alpha**. - Model size: 7 billion parameters. - Context length: 8K, 8192. - Languages: English and Vietnamese. - Main tasks: reasoning, multi-tasking knowledge and function tools. - License: [Ghost 7B LICENSE AGREEMENT](https://ghost-x.org/ghost-7b-license). - Based on: Mistral 7B. - Distributions: Standard (BF16), GGUF, AWQ. - Developed by: **Ghost X**, [Hieu Lam](https://huggingface.co/lamhieu). ### Links - Card model: [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha). - Official website: [Ghost 7B Alpha](https://ghost-x.org/docs/models/ghost-7b-alpha/). - Demo: [Playground with Ghost 7B Alpha](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/). ## Distributions We create many distributions to give you the best access options that best suit your needs. Always make sure you know which version you need and what will help you operate better. | Version | Model card | | ------- | -------------------------------------------------------------------- | | BF16 | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha) | | GGUF | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-gguf) | | AWQ | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-7b-alpha-awq) | ### Standard (BF16) The standard distribution was used to run the assessments and was found to have the best performance in text generation quality. ### GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. ### AWQ The AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ## Prompt / Chat template ### Roles Overall, the model supports roles including: **system**, **plugins**, **user**, **assistant**, **execute**, **response**. - **system**: Similar to OpenAI GPT4, this will be the place for you to declare context or specify characteristics and expertise for the model. - **user**: This is user input, which can be a question or an instruction for the model to follow. - **assistant**: The model's response to a question or instruction received from **user**. - **plugins**: Declare information about modeling tools that are allowed to be used according to predetermined standards. - **excute**: Mock iterates over the data type returned by the model for execution. Contains information about tools to use and arguments. - **response**: The result after executing the tool, so that the model gives the appropriate answer according to the user's input. 
See the actual code [here](https://huggingface.co/ghost-x/ghost-7b-alpha/blob/main/tokenizer_config.json#L36). ### Tips - Setting the system prompt in the right context and characteristics will help the model improve performance quite well. - You can set system prompt to prevent unwanted things. - In most cases, if you have no idea you can leave the system prompt blank. At this point, the performance will be best for general cases. ### Chat template #### Default chat In the standard case, without using function tools, only use roles such as **system**, **user** and **assistant**. Conventions are in the following order: - (1) **system** - (2) **user** - (3) **assistant** Repeat (2) and (3) continuously throughout the conversation. #### Chat with tools In case of using tools, the conventions are in the following order: - (1) **system** - (2) **plugins** - (3) **user** - A. When the model finds it inappropriate to use the tool: - (4) **assistant** if the model can answer or is not suitable for the tool. The model proceeds to answer the user's question. - B. When the model chooses to use the tool: - (4) **execute** returns the arguments needed to execute the specified tool. - (5) **response** returns the results after applying the tool to the model. - (6) **assistant** the model proceeds to answer the user's question. Repeat (3) and (A) or (B) continuously throughout the conversation. # Showcases ## English Let's try a question taken from [OpenAI GPT-4](https://openai.com/gpt-4) to evaluate the superiority of GPT-4 compared to ChatGPT. <details close> <summary>👨‍💻 : Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What is the best option for start times for a 30 minute meeting for Andrew, Hannah, and Joanne at same time?</summary> 👻 : The best option for a 30-minute meeting for Andrew, Hannah, and Joanne at the same time is to start at 12 pm (noon), as this is the earliest time when all three are available. Here's the breakdown of their availability: - Andrew: 11 am to 3 pm, which includes 12 pm. - Joanne: Noon to 2 pm, which includes 12 pm. - Hannah: Noon for half an hour (30 minutes), which includes 12 pm. Then she is available from 4 pm to 6 pm. Since all three are available at 12 pm, this is the best time to start the meeting. </details> A quiz to see if the model can guess what it's talking about?! <details close> <summary>👨‍💻 : Characteristics: has a long, thin and flexible body; The legs are short and webbed. Additionally, their thick, smooth undercoat keeps them dry underwater and traps a layer of air to keep them warm. Question: What animal is that?</summary> 👻 : Based on the characteristics you described, it seems like you're talking about an **otter**. Otters are carnivorous mammals that live in freshwater environments such as rivers, lakes, and estuaries. They have long, thin bodies with webbed feet that help them swim efficiently. Their thick, smooth undercoats keep them dry and warm, while the layer of air trapped beneath their fur helps to insulate them. Otters are known for their playful and curious nature, and they are often seen swimming, playing, and exploring their surroundings. 🦡🐭 </details> To this question, I really cannot answer. But the model's evidence is quite strict. Also, what do you think about this answer? <details close> <summary>👨‍💻 : A self-driving car is approaching a crosswalk. 
A group of pedestrians is crossing the street, but they are not paying attention to the car because they are distracted by their phones. The car's brakes are malfunctioning, and it is unable to stop. Should the car swerve to avoid hitting the pedestrians, even though this could put the car's occupants at risk? Be convincing with your examples or sharp reasoning.</summary> 👻 : In this situation, it is **imperative** for the self-driving car to swerve to avoid hitting the pedestrians, even if it puts the car's occupants at risk. Here's why: **1. Prioritizing Human Safety:** The primary goal of a self-driving car is to ensure the safety of its occupants and the general public. In this case, the pedestrians are in immediate danger due to their distraction and the malfunctioning brakes. Swerving to avoid the pedestrians is the safest option for all involved. **2. Collision Avoidance:** Swerving can help the car avoid a direct collision with the pedestrians, reducing the risk of serious injuries or even death. Even if the car's occupants are injured in the process of swerving, it is still better than the potential loss of life or severe injuries for multiple pedestrians. **3. Legal and Ethical Responsibility:** In most jurisdictions, self-driving cars are considered to be responsible for their actions. Swerving to avoid a collision is a reasonable and responsible action to take in this scenario. Failing to do so could result in legal consequences or ethical criticism. **4. Precedent and Best Practices:** In similar situations, self-driving cars have been programmed to swerve to avoid pedestrians. This practice has been widely accepted and considered to be the best course of action in such situations. **5. Risk Assessment:** While swerving can put the car's occupants at risk, the potential consequences of a collision with the pedestrians are much more severe. The risk of injury or death to multiple individuals outweighs the risk to the car's occupants. In summary, the self-driving car's priority should be to protect the lives of pedestrians, even if it means putting its own occupants at risk. Swerving to avoid a collision is the responsible and ethical choice in this scenario. </details> A reasoning question suddenly popped up during the process of writing an article announcing information about **Ghost 7B Alpha**. The model gave an impressive answer, at least to its creator. <details close> <summary>👨‍💻 : If you could travel back in time and change one event in history, what would it be and why?</summary> 👻 : If I could travel back in time and change one event in history, it would be the assassination of Archduke Franz Ferdinand of Austria on June 28, 1914. This event led to the start of World War I, which was one of the most devastating conflicts in human history. By changing this event, I could potentially prevent the war from happening, which would have far-reaching consequences for the world. Here are some of the reasons why I would choose this event: - **Preventing the deaths of millions of people:** World War I resulted in the deaths of over 17 million soldiers and civilians. Changing this event would save countless lives. - **Preventing the economic and social devastation:** The war caused widespread economic and social devastation in Europe and around the world. By preventing the war, I could help to avoid these consequences. - **Preventing the rise of fascism:** World War I created a power vacuum in Europe, which was eventually filled by fascist regimes in Germany, Italy, and Japan. 
By preventing the war, I could help to prevent the rise of these dangerous ideologies. - **Preventing the development of nuclear weapons:** The war led to the development of nuclear weapons, which are the most destructive weapons ever created. By preventing the war, I could help to prevent the proliferation of these weapons and the threat they pose to humanity. </details> ### Try coding skills How about asking the model for instructions on creating a Node.js application using Docker for deployment? <details close> <summary>👨‍💻 : Create a Nodejs project using Express as a server and Docker with a Dockerfile for deployment with step-by-step instructions with detailed explanations.</summary> 👻 : **Step 1: Create a new Node.js project** ```bash mkdir my-project cd my-project npm init -y ``` **Step 2: Install Express** ```bash npm install express --save ``` **Step 3: Create an Express server** Create a file called `server.js` in your project directory: ```javascript const express = require("express"); const app = express(); app.get("/", (req, res) => { res.send("Hello World!"); }); app.listen(3000, () => { console.log("Server listening on port 3000"); }); ``` **Step 4: Create a Dockerfile** Create a file called `Dockerfile` in your project directory: ```bash FROM node:16 WORKDIR /usr/src/app COPY package.json . RUN npm install COPY . . EXPOSE 3000 CMD ["node", "server.js"] ``` **Step 5: Build the Docker image** ```bash docker build -t my-project-image . ``` **Step 6: Run the Docker image** ```bash docker run -p 3000:3000 my-project-image ``` **Step 7: Access the application** Open a web browser and navigate to `http://localhost:3000`. You should see the message "Hello World!" displayed. **Explanation:** - **Step 1:** Creates a new Node.js project with `npm init -y` to initialize a new Node.js project with a default `package.json` file. - **Step 2:** Installs Express using `npm install express --save`. This command installs Express and adds it to the `package.json` file. - **Step 3:** Creates an Express server in `server.js` that responds with "Hello World!" when a request is made to the root path (`/`). - **Step 4:** Creates a Dockerfile that defines the image for the project. The Dockerfile specifies that the image should use Node.js version 16, work from the `/usr/src/app` directory, copy the `package.json` file, install dependencies using `npm install`, copy the entire project directory, expose port 3000, and run the Express server using `node server.js`. - **Step 5:** Builds the Docker image using `docker build -t my-project-image .`. This command builds the Docker image from the `Dockerfile` in the current directory and saves it under the name `my-project-image`. - **Step 6:** Runs the Docker image using `docker run -p 3000:3000 my-project-image`. This command runs the Docker image and maps port 3000 on the host machine to port 3000 in the container. - **Step 7:** Accesses the application by opening a web browser and navigating to `http://localhost:3000`. This will display the message "Hello World!" because the Express server is running in the Docker container. </details> Now let’s try asking for a coding model to build a smooth neural network with python. <details close> <summary>👨‍💻 : Explain how to use the Keras or TensorFlow library in Python to build and train a multi-layer neural network for an image classification task</summary> 👻 : **Using Keras:** 1. 
Import the necessary libraries: ```python import keras from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense ``` 2. Create a sequential model: ```python model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(10, activation='softmax')) ``` 3. Compile the model: ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ``` 4. Train the model: ```python model.fit(x_train, y_train, epochs=10) ``` **Using TensorFlow:** 1. Import the necessary libraries: ```python import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense ``` 2. Create a sequential model: ```python model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(10, activation='softmax')) ``` 3. Compile the model: ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ``` 4. Train the model: ```python model.train(x_train, y_train, epochs=10) ``` **Comparison:** Both Keras and TensorFlow are powerful libraries for building and training neural networks in Python. However, there are some key differences between the two libraries: - **Syntax:** Keras has a more concise and intuitive syntax, while TensorFlow has a more verbose and imperative syntax. - **Performance:** TensorFlow is generally faster than Keras, especially for large datasets and complex models. - **Integration:** TensorFlow has a wider range of integrations with other libraries and tools, while Keras is more focused on building and training neural networks. Ultimately, the best choice for you depends on your specific needs and preferences. If you prefer a more concise and intuitive syntax, Keras is a good option. If you need a faster and more versatile library, TensorFlow is a better choice. </details> Let's try asking the model how to overcome the "hallucination" phenomenon in A.I development and give context for the model to provide a solution. <details close> <summary>👨‍💻 : Explain the cause of A.I hallucination and find a solution to overcome it, the specific context is LLMs.</summary> 👻 : **Causes of A.I. Hallucination:** - **Data Quality:** Incorrect or biased data can lead to hallucinations in LLMs. For example, if an LLM is trained on a dataset that contains biased or incomplete information, it may learn to generate biased or incomplete responses. - **Model Architecture:** Certain model architectures are more susceptible to hallucinations than others. For example, Transformers, which are commonly used in LLMs, can be more prone to hallucinations than recurrent neural networks (RNNs). - **Hyperparameters:** The hyperparameters of an LLM, such as the learning rate and the batch size, can also affect the likelihood of hallucinations. For example, a higher learning rate may lead to hallucinations more often than a lower learning rate. **Solutions to Overcome A.I. Hallucination:** - **Data Quality:** Ensure that the data used to train the LLM is accurate, complete, and unbiased. This can be done by cleaning the data, removing outliers, and using a variety of data sources. - **Model Architecture:** Choose an LLM architecture that is less susceptible to hallucinations. 
For example, RNNs are generally less prone to hallucinations than Transformers. - **Hyperparameters:** Tune the hyperparameters of the LLM to minimize the likelihood of hallucinations. This can be done by experimenting with different learning rates, batch sizes, and other hyperparameters. - **Regularization Techniques:** Regularization techniques, such as dropout and early stopping, can help to prevent hallucinations by encouraging the LLM to learn more generalizable patterns. - **Explicit Hallucination Detection:** Develop techniques to explicitly detect hallucinations in LLM-generated text. This can be done by using a variety of methods, such as text analysis, natural language processing, and machine learning. - **User Feedback:** Collect feedback from users to identify and correct hallucinations in LLM-generated text. This can be done by using a variety of methods, such as surveys, feedback forms, and online discussions. By implementing these solutions, you can significantly reduce the likelihood of hallucinations in LLMs and improve the quality of the generated text. </details> ## Vietnamese Try asking the model what it thinks about a famous quote to see if the answer feels natural. <details close> <summary>👨‍💻 : Bạn có đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" ?</summary> 👻 : Tôi đồng ý với câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo". Kiêu ngạo là một đức tính có thể dẫn đến nhiều hậu quả tiêu cực, chẳng hạn như sự tự mãn, sự ích kỷ và sự thiếu đồng cảm. Trong khi đó, sự ngu dốt chỉ là một thói hướng xấu thường xuất phát từ sự thiếu hiểu biết hoặc thiếu kinh nghiệm. Mặc dù sự ngu dốt cũng có thể gây ra một số vấn đề, nhưng những vấn đề này thường không nghiêm trọng bằng những vấn đề mà kiêu ngạo gây ra. Ví dụ, một người ngu dốt có thể nói những lời ngớ ngẩn hoặc làm những việc vụng về, nhưng họ thường vẫn có thể học hỏi từ những sai lầm của mình và cải thiện bản thân. Ngược lại, một người kiêu ngạo thường sẽ không chịu học hỏi từ những sai lầm của mình và luôn tin rằng mình đúng, ngay cả khi mình sai. Điều này có thể dẫn đến nhiều vấn đề, chẳng hạn như sự thất bại trong công việc hoặc các mối quan hệ. Do đó, tôi cho rằng câu nói "Điều duy nhất nguy hiểm hơn sự ngu dốt là sự kiêu ngạo" là đúng. </details> Explore the model's literary understanding by providing meaningful story prompts, asking to identify what the story is. <details close> <summary>👨‍💻 : Câu chuyện có nội dung và ý nghĩa chứa đựng một thông điệp ý nghĩa lớn đến tất cả mọi người, đó là tình đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường trước mọi khó khăn. Có nhân vật chính là loài dế. Câu hỏi: câu chuyện đó của tác giả nào ?</summary> 👻 : Câu chuyện "Dế Mèn phiêu lưu ký" của Tô Hoài. **Giải thích:** - "Dế Mèn phiêu lưu ký" là một tác phẩm văn học nổi tiếng của nhà văn Tô Hoài, được sáng tác năm 1941. - Tác phẩm kể về cuộc phiêu lưu của Dế Mèn, một chú dế nhỏ sống trong một ngôi làng nhỏ. Trong quá trình phiêu lưu, Dế Mèn đã gặp gỡ và học hỏi nhiều điều về cuộc sống, về tình yêu, về sự hy sinh và về sức mạnh của tình đoàn kết. - Tác phẩm thể hiện rõ tinh thần đoàn kết, khao khát cho sự hòa bình và nghị lực phi thường của con người trước mọi khó khăn, thử thách. Đây chính là thông điệp ý nghĩa lớn của câu chuyện "Dế Mèn phiêu lưu ký". </details> The ability to reason to the answer based on the characteristics of is provided. 
<details close> <summary>👨‍💻 : Con vật nào lớn hơn con chuột, nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà ?</summary> 👻 : Con mèo là loài động vật lớn hơn con chuột nhưng nhỏ hơn con Voi, không quá lớn, không phải con chó và thường được nuôi ở nhà. </details> ### Try coding skills Try asking the Typescript coding model to perform finding the second largest number in the array to see if it works? <details close> <summary>👨‍💻 : Viết mã TypeScript để tạo một hàm thực hiện chức năng: cho mảng toàn số nguyên, tìm số lớn thứ hai trong mảng. </summary> 👻 : ```typescript /** * Tìm số lớn thứ hai trong mảng. * * @param {number[]} arr Mảng các số nguyên. * @returns {number} Số lớn thứ hai trong mảng. */ function findSecondLargest(arr: number[]): number { // Kiểm tra xem mảng có ít nhất 2 phần tử không. if (arr.length < 2) { throw new Error('Mảng phải có ít nhất 2 phần tử.'); } // Tìm số lớn nhất và số lớn thứ hai trong mảng. let largest = arr[0]; let secondLargest = Number.MIN_SAFE_INTEGER; for (let i = 1; i < arr.length; i++) { if (arr[i] > largest) { secondLargest = largest; largest = arr[i]; } else if (arr[i] > secondLargest && arr[i] != largest) { secondLargest = arr[i]; } } // Trả về số lớn thứ hai. return secondLargest; } ``` Giải thích: 1. Kiểm tra xem mảng có ít nhất 2 phần tử không. Nếu mảng chỉ có 1 phần tử hoặc ít hơn, thì không có số lớn thứ hai trong mảng. Trả về lỗi nếu điều kiện này đúng. 2. Khởi tạo hai biến `largest` và `secondLargest` với giá trị ban đầu là phần tử đầu tiên của mảng và số nguyên nhỏ nhất có thể biểu diễn được trong JavaScript (`Number.MIN_SAFE_INTEGER`), tương đương với `Integer.MIN_VALUE` trong Java. 3. Duyệt qua từng phần tử trong mảng, bắt đầu từ phần tử thứ hai. Nếu phần tử hiện tại lớn hơn `largest`, thì cập nhật `secondLargest` bằng `largest` và `largest` bằng phần tử hiện tại. Nếu phần tử hiện tại lớn hơn `secondLargest` nhưng nhỏ hơn `largest`, thì cập nhật `secondLargest` bằng phần tử hiện tại. 4. Sau khi duyệt qua tất cả các phần tử trong mảng, trả về `secondLargest`. Đây chính là số lớn thứ hai trong mảng. </details> Now let's try asking for a coding model to build a smooth neural network with python. <details close> <summary>👨‍💻 : Trình bày cách sử dụng thư viện Keras hoặc TensorFlow trong Python để xây dựng và huấn luyện mạng nơ-ron đa lớp cho bài toán phân loại ảnh.</summary> 👻 : **Sử dụng thư viện Keras:** 1. **Nhập thư viện Keras:** ```python import keras.models ``` 2. **Tạo một mô hình mạng nơ-ron đa lớp:** ```python model = keras.models.Sequential() model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3))) model.add(keras.layers.MaxPooling2D((2, 2))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(128, activation='relu')) model.add(keras.layers.Dense(2, activation='softmax')) ``` 3. **Biên dịch và huấn luyện mô hình:** ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test)) ``` **Sử dụng thư viện TensorFlow:** 1. **Nhập thư viện TensorFlow:** ```python import tensorflow as tf ``` 2. 
**Tạo một mô hình mạng nơ-ron đa lớp:** ```python model = tf.keras.models.Sequential() model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3))) model.add(tf.keras.layers.MaxPooling2D((2, 2))) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation='relu')) model.add(tf.keras.layers.Dense(2, activation='softmax')) ``` 3. **Biên dịch và huấn luyện mô hình:** ```python model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test)) ``` **Lưu ý:** - Bạn có thể sử dụng các lớp mạng nơ-ron khác nhau trong mô hình, chẳng hạn như lớp mạng nơ-ron tích chập (Conv2D), lớp mạng nơ-ron lấy mẫu (MaxPooling2D), lớp mạng nơ-ron đầy đủ kết nối (Dense), lớp mạng nơ-ron softmax, v.v. - Bạn có thể điều chỉnh số lớp, số nơ-ron trong mỗi lớp và số lần lặp (epochs) để tối ưu hóa mô hình cho bài toán cụ thể của bạn. - Bạn có thể sử dụng các kỹ thuật huấn luyện bổ sung như dropout, regularization và early stopping để cải thiện hiệu suất của mô hình. </details> Try asking the model about common errors when deploying and using K8S to see what results the model gives. <details close> <summary>👨‍💻 : Những lỗi thường gặp trong quá trình sử dụng K8S là gì ? Hãy nêu cách khắc phục.</summary> 👻 : **Những lỗi thường gặp trong quá trình sử dụng Kubernetes:** **1. Lỗi cấu hình:** * **Máy chủ Kubernetes không được cấu hình đúng:** Kiểm tra cấu hình máy chủ Kubernetes (master node) để đảm bảo rằng các thông số cần thiết như IP, cổng, tên miền, tên miền SSL, v.v. được đặt đúng. * **Các pod không được cấu hình đúng:** Kiểm tra cấu hình pod để đảm bảo rằng các yếu tố như image, lưu trữ, mạng, v.v. được đặt đúng. * **Các tài nguyên (resource) không được cấp phát đúng:** Kiểm tra xem các tài nguyên như CPU, bộ nhớ, lưu trữ, v.v. được cấp phát cho các pod có đủ hay không. **2. Lỗi mạng:** * **Mạng không được cấu hình đúng:** Kiểm tra cấu hình mạng để đảm bảo rằng các địa chỉ IP, cổng, tên miền, tên miền SSL, v.v. được cấu hình đúng. * **Lỗi mạng giữa các node:** Kiểm tra xem có lỗi mạng giữa các node trong cụm Kubernetes hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra. * **Lỗi mạng giữa các pod:** Kiểm tra xem có lỗi mạng giữa các pod trong một node hay không. Có thể sử dụng các công cụ như `netstat`, `ifconfig`, `tcpdump` để kiểm tra. **3. Lỗi phần cứng:** * **Phần cứng không đủ mạnh:** Kiểm tra xem phần cứng máy chủ Kubernetes có đủ mạnh để chạy cụm Kubernetes hay không. Cụ thể, kiểm tra xem CPU, bộ nhớ, lưu trữ, v.v. có đủ hay không. * **Phần cứng bị lỗi:** Kiểm tra xem phần cứng máy chủ Kubernetes có bị lỗi hay không. Có thể sử dụng các công cụ như `lshw`, `dmesg` để kiểm tra. **4. Lỗi phần mềm:** * **Phần mềm Kubernetes không được cập nhật:** Kiểm tra xem phần mềm Kubernetes đang sử dụng có được cập nhật đến phiên bản mới nhất hay không. Cập nhật phần mềm Kubernetes thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất. * **Phần mềm Kubernetes có lỗi:** Kiểm tra xem phần mềm Kubernetes có lỗi hay không. Có thể sử dụng các công cụ như `journal`, `dmesg` để kiểm tra. * **Các ứng dụng khách không được cập nhật:** Kiểm tra xem các ứng dụng khách đang sử dụng trên cụm Kubernetes có được cập nhật đến phiên bản mới nhất hay không. Cập nhật các ứng dụng khách thường giúp khắc phục các lỗi bảo mật và cải thiện hiệu suất. **Cách khắc phục:** * **Lỗi cấu hình:** Sửa các thông số cấu hình sai trong máy chủ Kubernetes và các pod. 
Ví dụ: đảm bảo rằng máy chủ Kubernetes có địa chỉ IP và cổng phù hợp, các pod có image và lưu trữ phù hợp, v.v. * **Lỗi mạng:** Sửa các vấn đề về mạng giữa các node và các pod. Ví dụ: đảm bảo rằng mạng được cấu hình đúng, kiểm tra xem có lỗi mạng giữa các node hay không, kiểm tra xem có lỗi mạng giữa các pod hay không, v.v. * **Lỗi phần cứng:** Nâng cấp phần cứng máy chủ Kubernetes nếu cần thiết. Ví dụ: tăng CPU, bộ nhớ, lưu trữ, v.v. * **Lỗi phần mềm:** Cập nhật phần mềm Kubernetes và các ứng dụng khách. Ví dụ: cập nhật phiên bản mới nhất của Kubernetes, cập nhật các bản vá lỗi bảo mật cho các ứng dụng khách, v.v. **Lưu ý:** Khi gặp phải lỗi trong quá trình sử dụng Kubernetes, cần phải xác định rõ nguyên nhân gốc rễ của lỗi trước khi có thể khắc phục. Việc xác định nguyên nhân gốc rễ sẽ giúp bạn tiết kiệm thời gian và công sức khi khắc phục lỗi. </details> ## Tools / Function calls The model is trained to be able to choose the right tools to use and to provide useful and accurate answers to the questions it receives. In this example, the model is asked for some information about Apple stock. ### Explain This is the standard for declaring model tools, in `json` format. Also, See more about the model's [prompt / chat template](#prompt--chat-template). ```json [ { "type": "function", "function": { "name": "get_stock_info", "description": "Get stock information", "parameters": { "type": "object", "properties": { "ticker": { "type": "string", "description": "Stock ticker", "required": true } } } } } ] ``` ### Workflow #### Turn 1 To use the tools, the model's supporting role is `plugin` which includes the declared tooling schema. Below is an example of `messages` with **turn 1** to use. ```python # A question about Apple stocks in English language. question = "What is Apple's market capitalization today?" # It's still the same question but in Vietnamese languae. # question = "Vốn hoá thị trường Apple hôm nay là bao nhiêu ?" plugins = [ { "type": "function", "function": { "name": "get_stock_info", "description": "Get stock information", "parameters": { "type": "object", "properties": { "ticker": { "type": "string", "description": "Stock ticker", "required": True, } }, }, }, } ] messages = [ {"role": "system", "content": ""}, {"role": "plugins", "content": json.dumps(plugins, ensure_ascii=False)}, {"role": "user", "content": question}, ] output = generate_sample( messages=messages, max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95, ) ``` At this point, the model will return `output` as the primitive value as follows: ```plaintext [execute]({"type": "function", "function": "get_stock_info", "arguments": {"ticker": "AAPL"}}) ``` Use the `get_tools` function to parse the readable and usable results. ```json { "type": "function", "function": "get_stock_info", "arguments": { "ticker": "AAPL" } } ``` #### Turn 2 Get information about tools to use with arguments. Use it to execute and return data, here we will assume the return data is as follows, sampled from **polygon.io** (cleaned up): ```json { "request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d", "results": { "ticker": "AAPL", "name": "Apple Inc.", "market_cap": 2620023949270, "description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. 
Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.", "ticker_root": "AAPL", "homepage_url": "https://www.apple.com", "total_employees": 161000, ... }, "status": "OK" } ``` Continue using the model to answer user questions based on the data returned from the tool. ```python executes = { "type": "function", "function": "get_stock_info", "arguments": { "ticker": "AAPL" } } results = { "request_id": "1ffbab9cc78c7addbd8deaab05bdaf6d", "results": { "ticker": "AAPL", "name": "Apple Inc.", "market_cap": 2620023949270, "description": "Apple is among the largest companies in the world, with a broad portfolio of hardware and software products targeted at consumers and businesses. Apple's iPhone makes up a majority of the firm sales, and Apple's other products like Mac, iPad, and Watch are designed around the iPhone as the focal point of an expansive software ecosystem. Apple has progressively worked to add new applications, like streaming video, subscription bundles, and augmented reality. The firm designs its own software and semiconductors while working with subcontractors like Foxconn and TSMC to build its products and chips. Slightly less than half of Apple's sales come directly through its flagship stores, with a majority of sales coming indirectly through partnerships and distribution.", "ticker_root": "AAPL", "homepage_url": "https://www.apple.com", "total_employees": 161000, }, "status": "OK" } messages = [ {"role": "system", "content": ""}, {"role": "plugins","content": json.dumps(plugins, ensure_ascii=False)}, {"role": "user","content": question}, # Keep the value of `messages` from before, just add new items below. {"role": "execute","content": json.dumps(executes, ensure_ascii=False)}, {"role": "response","content": json.dumps(results, ensure_ascii=False)}, ] output = generate_sample( messages=messages, max_new_tokens=256, temperature=0.4, top_k=50, top_p=0.95, ) ``` The value of `output` returned by the model will now be similar to the following. For the question is English. ```plaintext Apple's market capitalization today is $2.62 trillion. ``` And the question is Vietnamese. ```plaintext Vốn hoá thị trường của Apple hôm nay là **2.620 tỷ đô la**. 💰 ``` # Usage The model is trained to serve well and balance general tasks, so there will be many approaches when used for production. ## Things to know ### Recommendations #### General tasks Experience it to find the right fit for your own desires. Below are some suggestions that you might consider. ##### Answer the questions In this task, the model can be integrated to develop RAG products to help answer questions for customer care departments, questions about internal documents, policies, product information, order details, etc. ##### Coding support In this task, the model can help find errors or write algorithms and functions that help speed up the domain development process and improve product quality. ##### Content creation In this task, the model can give suggestions for building layouts and content samples that need high creativity and personality in the content. ##### Other Don't limit your creativity to the model. 
Feel free to experience it and use it to build good things.

#### Virtual assistant

Its language understanding, together with the overall capabilities of the model, may make it suitable for handling one or more tasks in the development of an intelligent virtual assistant. The model is trained to understand and respond with emotional empathy, which makes its responses more natural and understanding. This also helps a lot with this use case.

#### As a pre-training

If the model cannot satisfy the given requirements out of the box, it can be further trained to serve the desired tasks better and more effectively. Taking advantage of the model's current superior inference and multi-tasking capabilities, training it for a few separate tasks will help save a lot of effort, time and cost for the project.

As a side note, the model has already been integrated to replace some tasks that were previously handled by OpenAI GPT-4 in the business/product that the founder has been working on. For those tasks, the model is considered effective for its cost and well worth it.

### Bias and limitations

In addition, there are some things you need to know before using it:

#### Generation configuration

The **temperature** affects the factual accuracy of the answer. Setting a **temperature** value greater than 0.2 - 0.4 will result in a more creative answer but may reduce its accuracy, so weigh this against your task. Hint: you can write a prompt that takes the user input and asks the model to choose an appropriate temperature based on the question; this is useful when developing a virtual assistant.

#### Emotion bias and empathy

The model is built with an emotional bias and takes positions when faced with problems or reasoning questions. This can be good or bad depending on the context. If you don't want the model to take a rigid stance on certain questions or problems, set up rules for it in the system prompt. This bias often appears when the model's system prompt sets it up as an assistant and mentions that it needs to have empathy for/with a certain user/experience. This is intentional in the model training process; it does not come from something unforeseen.

## Directly

To use the model directly, there are many ways to get started; choose one of the following to experience it.

### Transformers

For direct use with `transformers`, you can easily get started with the following steps.

- Firstly, you need to install transformers via the command below with `pip`.

```bash
pip install -U transformers
```

- Right now, you can start using the model directly.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

base_model = "ghost-x/ghost-7b-alpha"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue ?"},
    # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k, v in inputs.items():
    inputs[k] = v.cuda()

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```

- Additionally, you can also use the model with **4bit quantization** to reduce the required resources to a minimum.
You can start with the code below.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)

base_model = "ghost-x/ghost-7b-alpha"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue ?"},
    # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k, v in inputs.items():
    inputs[k] = v.cuda()

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```

### Unsloth

For direct use with `unsloth`, you can easily get started with the following steps.

- Firstly, you need to install unsloth via the command below with `pip`.

```bash
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps xformers trl peft accelerate bitsandbytes
```

- Initialize and optimize the model before use.

```python
from unsloth import FastLanguageModel
import torch

base_model = "ghost-x/ghost-7b-alpha"
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = base_model,
    max_seq_length = 8192,
    dtype = None,
    load_in_4bit = True,  # Change to `False` if you don't want to use 4bit quantization.
)

FastLanguageModel.for_inference(model)
```

- Right now, you can start using the model directly.

```python
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue ?"},
    # {"role": "user", "content": "Tại sao bầu trời lại màu xanh ?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
for k, v in inputs.items():
    inputs[k] = v.cuda()

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.4)
results = tokenizer.batch_decode(outputs)[0]
print(results)
```

## Online

The content of this document will be updated soon. The documentation will guide usage through providers such as the Huggingface Inference API, Together AI, or Replicate (see the illustrative sketch after the Playground introduction below for a rough idea).

## Playground

To make it easy for everyone to quickly experience the model through platforms like **Google Colab** and **Kaggle**, we’ve made these notebooks available so you can get started right away. See available notebooks with details [here](https://ghost-x.org/docs/notebooks/playground-with-ghost-7b-alpha/).

We offer notebooks with a wide selection of tools and platforms so you can easily get started. Here are some available options for you to try right now. Let’s pick one to get started.

There are currently two language versions: English and Vietnamese. In each version, it will be almost the same. The content includes installation instructions and helper functions so you can quickly start experiencing the model's text generation capabilities. In addition, it also provides examples with system prompts for each field so you can better understand them.
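As a rough, unofficial illustration of the pending "Online" section above: if the model is exposed through the Hugging Face serverless Inference API (an assumption, not something this card confirms), a call via `huggingface_hub` could look roughly like the sketch below. The endpoint availability, chat-template handling, and generation parameters here are all assumptions.

```python
from huggingface_hub import InferenceClient

# Assumption: the model is reachable on the serverless Inference API under its repo id.
client = InferenceClient(model="ghost-x/ghost-7b-alpha")

# For simplicity this passes a raw prompt; in practice you would build the prompt
# with tokenizer.apply_chat_template(...) exactly as in the Transformers examples above.
prompt = "Why is the sky blue ?"

answer = client.text_generation(
    prompt,
    max_new_tokens=512,
    temperature=0.4,
    top_p=0.95,
)
print(answer)
```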
#### Playground using Transformers

- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-vi).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, Vietnamese](https://colab.research.google.com/drive/14Cv6oT0pks3NoIUcXscAU9wex9A2T5H5?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Transformers, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-transformers-en).
- [Colab @ Ghost 7B Alpha - Playground using Transformers, English](https://colab.research.google.com/drive/1lzgywm333wZIt5WNDs0ZajDvK7lm_Zy9?usp=sharing).

#### Playground using Unsloth

- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-vi).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, Vietnamese](https://colab.research.google.com/drive/1E7QuthR33een9Otkepue5zFughCpab4P?usp=sharing).
- [Kaggle @ Ghost 7B Alpha - Playground using Unsloth, English](https://www.kaggle.com/code/lamhieu/ghost-7b-alpha-playground-using-unsloth-en).
- [Colab @ Ghost 7B Alpha - Playground using Unsloth, English](https://colab.research.google.com/drive/1u04RbzD87_AovQngWsUv-O_Q6YNj7Hzr?usp=sharing).

From experience, Kaggle will give you much better results than Colab when working with these notebooks. We recommend using Kaggle.

## Deployments

The models developed by **Ghost X** share the goal of being easy to integrate and use in practice, to save costs and facilitate development for the community and startups.

For production deployment with small to large infrastructure, please see the more detailed instructions in [this article](https://ghost-x.org/docs/guides/deployments/). The article provides the most common and effective deployment solutions with leading, trusted libraries such as vLLM and more. In addition, it also covers more optimal solutions and methods for different needs, so you can choose the approach that fits.

# Evaluation

To ensure effective model development, we ran popular evaluations to give everyone a broader view of the model's capabilities. We had to work to balance the quality between the English and Vietnamese languages to ensure the model still performed well in both of these primary languages. In addition, if your goals are oriented towards a certain main language, you can refer to the advanced user guide to improve model quality for the tasks and languages you desire.

## Open LLM Leaderboard

The results of this evaluation will be updated soon.

## MT-Bench

MT-bench is a challenging multi-turn question set designed to evaluate the conversational and instruction-following ability of models. [[source from lmsys.org]](https://lmsys.org/blog/2023-06-22-leaderboard)

**Ghost 7B Alpha** achieved a decent score in the MT-Bench evaluation; we worked hard to balance the reasoning ability and linguistic insight of both primary languages, English and Vietnamese. Overall, it was able to outperform some large language models such as tulu-30b, guanaco-65b, and mpt-30b-chat, which are many times larger.
| Model | Score | | --------------------- | ------------ | | gpt-4 | 8.990625 | | gpt-3.5-turbo | 7.943750 | | claude-instant-v1 | 7.905660 | | claude-v1 | 7.900000 | | vicuna-33b-v1.3 | 7.121875 | | wizardlm-30b | 7.009375 | | Llama-2-70b-chat | 6.856250 | | Llama-2-13b-chat | 6.650000 | | guanaco-33b | 6.528125 | | **ghost-7b-alpha** | **6.481250** | | tulu-30b | 6.434375 | | oasst-sft-7-llama-30b | 6.409375 | | guanaco-65b | 6.409375 | | palm-2-chat-bison-001 | 6.400000 | | mpt-30b-chat | 6.393750 | | vicuna-13b-v1.3 | 6.387500 | | wizardlm-13b | 6.353125 | ## AlpacaEval AlpacaEval is an automated tool for evaluating instruction-following language models against the AlpacaFarm dataset. It stands out for its human-validated, high-quality assessments that are both cost-effective and rapid. [[source from klu.ai]](https://klu.ai/glossary/alpaca-eval) Similarly, in this evaluation the model achieved good performance as it outperformed larger sized models such as vicuna-13b, guanaco-33b, and nous-hermes-13b. | Model | Winrate | | -------------------------- | ------------- | | gpt4 | 95.279503 | | Mixtral-8x7B-Instruct-v0.1 | 94.782608 | | claude-2.1 | 87.080745 | | gpt-3.5-turbo-1106 | 86.256218 | | zephyr-7b-alpha | 85.758706 | | gpt35_turbo_instruct | 81.710362 | | gemini-pro | 79.664179 | | ... | ... | | **ghost-7b-alpha** | **70.440251** | | vicuna-13b | 70.434782 | | guanaco-33b | 65.962732 | | nous-hermes-13b | 65.465838 | | ... | ... | # Notes ## Thanks After all, the project also wants to send sincere thanks to the products that have supported the project's development. #### Unsloth <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" width="200px" align="center" /> [Unsloth](https://unsloth.ai), a great tool that helps us easily develop products, at a lower cost than expected. #### Kaggle <img src="https://hf.fast360.xyz/production/uploads/600ae38cc92b79f54efd4556/dcbpF6YS6RQhqDd6GZZ2v.png" width="200px" align="center" /> [Kaggle](https://kaggle.com), generous hours of free notebooks for research projects. #### And... Good friends, who have accompanied the project, Luan Nguyen and Phu Tran. ## Confidence In addition to the **Ghost 7B Alpha** project, **Ghost X** always wants to develop and improve many better models in the future, better supporting the community and businesses with the most openness possible. Revealing the **Ghost 7B Beta** project plan. This model is expected to outperform with a deeper focus on multi-tasking, math, and reasoning. Along with that is the ability to expand context length and support other languages (highly requested languages). The organization is being operated and developed by [Hieu Lam](https://huggingface.co/ghost-x)'s personal resources, if there is any development support or consulting request. Please feel free to contact the organization, we are very happy about this. Directly via email: [[email protected]](mailto:[email protected]). **Ghost X** is happy to support providing models for server providers, aiming to help startups develop better. ## Contact Follow **Ghost X** to stay updated with the latest information. - Twitter/X via [@ghostx_ai](https://twitter.com/ghostx_ai). - HuggingFace via [@ghost-x](https://huggingface.co/ghost-x). - Official website [ghost-x.org](https://ghost-x.org/).
{"language": ["en", "vi"], "library_name": "transformers", "license": "other", "license_name": "ghost-7b", "license_link": "https://ghost-x.org/ghost-7b-license", "pipeline_tag": "text-generation", "tags": ["ghost", "tools", "chat"], "widget": [{"text": "Why is the sky blue ?", "output": {"text": "The sky appears blue because of a phenomenon called Rayleigh scattering. Sunlight is composed of all colors of the visible spectrum, but blue light has a shorter wavelength than other colors. When sunlight enters the Earth's atmosphere, it interacts with molecules of nitrogen and oxygen, which are the main components of our atmosphere. These molecules scatter the shorter wavelengths of light, such as blue and violet, more than the longer wavelengths, such as red and orange. As a result, when we look up at the sky, we see more blue light than other colors because the blue light has been scattered more by the molecules in the atmosphere."}}]}
task
[ "TRANSLATION" ]
45,629
mini1013/master_item_top_bt6
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "model-index", "region:us" ]
2024-12-29T05:15:19Z
2024-12-29T05:15:47+00:00
4
0
--- base_model: klue/roberta-base library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 트위저맨 포인트 트위저 Pretty in Pink (#M)홈>화장품/미용>뷰티소품>페이스소품>기타페이스소품 Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 기타페이스소품 - text: 에스쁘아 에어 퍼프 5개입 소프트 터치 에어퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬 - text: 더툴랩 더스타일 래쉬 - 리얼(TSL001) x 1개 리얼(TSL001) × 1개 LotteOn > 뷰티 > 뷰티기기/소품 > 아이/브로우소품 > 속눈썹관리 LotteOn > 뷰티 > 뷰티기기/소품 > 아이/브로우소품 > 속눈썹관리 - text: 미용재료/셀프파마/롯드/헤어롤/미용용품/파지/귀마개/염색볼/집게핀/샤워캡/헤어밴드 41.다용도 공병 2개 홈>펌,염색,미용소도구;홈>파마용품;(#M)홈>파마 소도구>파마용품 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품 - text: 에스쁘아 비글로우 에어 퍼프 5개입(22AD) (#M)홈>화장품/미용>뷰티소품>페이스소품>기타페이스소품 Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 기타페이스소품 inference: true model-index: - name: SetFit with klue/roberta-base results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9419292632686155 name: Accuracy --- # SetFit with klue/roberta-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
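As an illustrative sketch only (not the actual training script for this model), the two-stage procedure above maps onto the `setfit` trainer API roughly as follows. The toy dataset and output path are placeholders, and the hyperparameter values simply mirror some of those reported under Training Hyperparameters further down this card.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data: the SetFit trainer expects "text" and "label" columns by default.
train_dataset = Dataset.from_dict({
    "text": ["hypothetical product title A", "hypothetical product title B"],
    "label": [0, 1],
})

# Start from the sentence-transformer body named in this card; a logistic regression
# head is attached automatically when none is provided.
model = SetFitModel.from_pretrained("klue/roberta-base")

args = TrainingArguments(
    batch_size=64,             # values mirror the card's reported hyperparameters
    num_epochs=30,
    body_learning_rate=2e-5,   # stage 1: contrastive fine-tuning of the embedding body
    head_learning_rate=0.01,   # stage 2: fitting of the classification head
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # runs both stages: contrastive fine-tuning, then head training
model.save_pretrained("my-setfit-classifier")  # placeholder output path
```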
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 8 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 7 | <ul><li>'[JAJU/자주] 원형 리필 공병 통 110ml ssg > 뷰티 > 미용기기/소품 > 거울/용기/기타소품;ssg > 뷰티 > 헤어/바디/미용/구강 > 미용기기 ssg > 뷰티 > 미용기기/소품 > 거울/용기/기타소품'</li><li>'세맘스 아기랑 + 엄마랑 파우치 세트 핑크스마일_엄마(가로 11.5cm x 세로 13cm), 아기(가로 8cm x 세로 10.5cm) (#M)쿠팡 홈>여행용품>여행파우치>화장품파우치 Coupang > 뷰티 > 뷰티소품 > 용기/거울/기타소품 > 파우치'</li><li>'라인 프린팅 파스텔컬러 롤온공병 10ml 6종 세트 흰색(뚜껑) × 1세트 (#M)쿠팡 홈>뷰티>뷰티소품>용기/거울/기타소품>기타소품 Coupang > 뷰티 > 뷰티소품 > 용기/거울/기타소품 > 기타소품'</li></ul> | | 3 | <ul><li>'트위저맨 슬랜트 트위저 족집게 베이비 핑크 × 9개 (#M)쿠팡 홈>뷰티>뷰티소품>아이소품>족집게/샤프너 Coupang > 뷰티 > 뷰티소품 > 아이소품 > 족집게/샤프너'</li><li>'트위저맨 미니 슬랜트 트위저 로즈골드 265161 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션'</li><li>'트위저맨 클래식 슬랜트 트위저 베이비핑크, 1개 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬'</li></ul> | | 6 | <ul><li>'천일 매직 롯드 10P 1호~6호 뿌리볼륨롯드 파마롯드 매직롯드 5호_1개 홈>화장품/미용>뷰티소품>헤어소품>헤어롤;홈>전체상품;(#M)홈>롯드 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 헤어롤'</li><li>'다이슨 45mm 35mm 롤브러쉬 대왕롤빗 엉킴방지빗 니켈블랙 (#M)홈>미용건강 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 헤어브러시'</li><li>'프리시전 섀이더 브러쉬 스몰 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li></ul> | | 0 | <ul><li>'천연 자초 립밤 만들기 키트 diy 향 선택(8개) 사과+에탄올20ml (#M)홈>비누&립밤&세제 만들기>만들기키트 Naverstore > 화장품/미용 > 색조메이크업 > 립케어'</li></ul> | | 5 | <ul><li>'프로 피니쉬 스폰지 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li><li>'JAJU 사각 면봉_화장 겸용 200P 기타_FR LotteOn > 뷰티 > 뷰티기기/소품 > 위생용품 > 면봉 LotteOn > 뷰티 > 뷰티기기/소품 > 위생용품 > 면봉'</li><li>'mts 롤러 기계 MTS 스탬프 앰플 바르는 도구 니들 빠른흡수 상품선택_2-더마롤러-0.3mm LotteOn > 뷰티 > 뷰티기기/소품 > 피부케어기 > 피부케어기 LotteOn > 뷰티 > 뷰티기기/소품 > 피부케어기 > 피부케어기'</li></ul> | | 1 | <ul><li>'더툴랩 101B 베이비태스커 파운데이션 베이스 메이크업 브러쉬 쿠션브러쉬 236097 (#M)홈>화장품/미용>뷰티소품>메이크업브러시>브러시세트 Naverstore > 화장품/미용 > 뷰티소품 > 메이크업브러시 > 브러시세트'</li><li>'더툴랩 204 블렌딩 아이섀도우 스몰 총알 브러쉬 (#M)화장품/미용>뷰티소품>페이스소품>코털제거기 AD > Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 코털제거기'</li><li>'더툴랩 브러쉬 231 컨실러 파운데이션 (#M)화장품/미용>뷰티소품>메이크업브러시>페이스브러시 LO > Naverstore > 화장품/미용 > 뷰티소품 > 메이크업브러시 > 페이스브러시'</li></ul> | | 2 | <ul><li>'요들가운 미용실 LC 커트보 어깨보 컷트보 인쇄가능 15.모델210T커트보_블랙 (#M)홈>가운,유니폼>컷트보 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품'</li><li>'요들가운 미용실 LC 커트보 어깨보 컷트보 인쇄가능 12.듀스포체크 커트보_퍼플 (#M)홈>가운,유니폼>컷트보 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품'</li><li>'[백화점][JPClarisse] 장폴클라리쎄 
거미 왕대 집게핀 JPSA0001 진베이지 (#M)GSSHOP>뷰티>뷰티소품>헤어소품 GSSHOP > 뷰티 > 뷰티소품 > 헤어소품 > 헤어집게'</li></ul> | | 4 | <ul><li>'레터링 쇄골 현아 타투 스티커 30장 마스크 판박이 3타투세트30장-수채화 LotteOn > 뷰티 > 마스크/팩 > 기타패치 LotteOn > 뷰티 > 마스크/팩 > 기타패치'</li><li>'산리오 캐릭터 타투 스티커 어린이 문신 마스크판박이 5.헬로키티(2매입) 홈>패션잡화🛍>잡화🐱\u200d💻;(#M)홈>캐릭터🙂>산리오 Naverstore > 화장품/미용 > 뷰티소품 > 타투'</li><li>'문신 타투 스티커 바디 형 쇄골 반팔 레터링 흉터 커버__개성 다이소 헤나 다목적 노출 패션 미용 다용도 추천 패셔니스타 여름 A type 타투스티커 30종세트 (#M)SSG.COM/헤어/바디/슬리밍/푸드/기타용품/타투 ssg > 뷰티 > 헤어/바디 > 슬리밍/푸드/기타용품 > 타투'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9419 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_item_top_bt6") # Run inference preds = model("에스쁘아 에어 퍼프 5개입 소프트 터치 에어퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 12 | 22.0313 | 72 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 1 | | 1 | 50 | | 2 | 50 | | 3 | 50 | | 4 | 50 | | 5 | 50 | | 6 | 50 | | 7 | 50 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 100 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-----:|:-------------:|:---------------:| | 0.0018 | 1 | 0.4099 | - | | 0.0911 | 50 | 0.3973 | - | | 0.1821 | 100 | 0.3456 | - | | 0.2732 | 150 | 0.2947 | - | | 0.3643 | 200 | 0.2369 | - | | 0.4554 | 250 | 0.1705 | - | | 0.5464 | 300 | 0.107 | - | | 0.6375 | 350 | 0.0696 | - | | 0.7286 | 400 | 0.0494 | - | | 0.8197 | 450 | 0.0488 | - | | 0.9107 | 500 | 0.0307 | - | | 1.0018 | 550 | 0.0259 | - | | 1.0929 | 600 | 0.0247 | - | | 1.1840 | 650 | 0.022 | - | | 1.2750 | 700 | 0.0215 | - | | 1.3661 | 750 | 0.005 | - | | 1.4572 | 800 | 0.0007 | - | | 1.5483 | 850 | 0.0004 | - | | 1.6393 | 900 | 0.0002 | - | | 1.7304 | 950 | 0.0001 | - | | 1.8215 | 1000 | 0.0001 | - | | 1.9126 | 1050 | 0.0001 | - | | 2.0036 | 1100 | 0.0001 | - | | 2.0947 | 1150 | 0.0001 | - | | 2.1858 | 1200 | 0.0001 | - | | 2.2769 | 1250 | 0.0 | - | | 2.3679 | 1300 | 0.0 | - | | 2.4590 | 1350 | 0.0 | - | | 2.5501 | 1400 | 0.0 | - | | 2.6412 | 1450 | 0.0 | - | | 2.7322 | 1500 | 0.0 | - | | 2.8233 | 1550 | 0.0 | - | | 2.9144 | 1600 | 0.0 | - | | 3.0055 | 1650 | 0.0 | - | | 3.0965 | 1700 | 0.0 | - | | 3.1876 | 1750 | 0.0 | - | | 3.2787 | 1800 | 0.0 | - | | 3.3698 | 1850 | 0.0 | - | | 
3.4608 | 1900 | 0.0 | - | | 3.5519 | 1950 | 0.0 | - | | 3.6430 | 2000 | 0.0 | - | | 3.7341 | 2050 | 0.0 | - | | 3.8251 | 2100 | 0.0 | - | | 3.9162 | 2150 | 0.0 | - | | 4.0073 | 2200 | 0.0 | - | | 4.0984 | 2250 | 0.0 | - | | 4.1894 | 2300 | 0.0 | - | | 4.2805 | 2350 | 0.0 | - | | 4.3716 | 2400 | 0.0 | - | | 4.4627 | 2450 | 0.0 | - | | 4.5537 | 2500 | 0.0 | - | | 4.6448 | 2550 | 0.0 | - | | 4.7359 | 2600 | 0.0 | - | | 4.8270 | 2650 | 0.0 | - | | 4.9180 | 2700 | 0.0 | - | | 5.0091 | 2750 | 0.0 | - | | 5.1002 | 2800 | 0.0 | - | | 5.1913 | 2850 | 0.0 | - | | 5.2823 | 2900 | 0.0 | - | | 5.3734 | 2950 | 0.0 | - | | 5.4645 | 3000 | 0.0 | - | | 5.5556 | 3050 | 0.0 | - | | 5.6466 | 3100 | 0.0 | - | | 5.7377 | 3150 | 0.0 | - | | 5.8288 | 3200 | 0.0 | - | | 5.9199 | 3250 | 0.0 | - | | 6.0109 | 3300 | 0.0 | - | | 6.1020 | 3350 | 0.0 | - | | 6.1931 | 3400 | 0.0 | - | | 6.2842 | 3450 | 0.0 | - | | 6.3752 | 3500 | 0.0 | - | | 6.4663 | 3550 | 0.0 | - | | 6.5574 | 3600 | 0.0 | - | | 6.6485 | 3650 | 0.0 | - | | 6.7395 | 3700 | 0.0 | - | | 6.8306 | 3750 | 0.0 | - | | 6.9217 | 3800 | 0.0 | - | | 7.0128 | 3850 | 0.0 | - | | 7.1038 | 3900 | 0.0 | - | | 7.1949 | 3950 | 0.0 | - | | 7.2860 | 4000 | 0.0 | - | | 7.3770 | 4050 | 0.0 | - | | 7.4681 | 4100 | 0.0 | - | | 7.5592 | 4150 | 0.0 | - | | 7.6503 | 4200 | 0.0 | - | | 7.7413 | 4250 | 0.0 | - | | 7.8324 | 4300 | 0.0 | - | | 7.9235 | 4350 | 0.0 | - | | 8.0146 | 4400 | 0.0 | - | | 8.1056 | 4450 | 0.0 | - | | 8.1967 | 4500 | 0.0 | - | | 8.2878 | 4550 | 0.0 | - | | 8.3789 | 4600 | 0.0 | - | | 8.4699 | 4650 | 0.0 | - | | 8.5610 | 4700 | 0.0 | - | | 8.6521 | 4750 | 0.0 | - | | 8.7432 | 4800 | 0.0 | - | | 8.8342 | 4850 | 0.0 | - | | 8.9253 | 4900 | 0.0 | - | | 9.0164 | 4950 | 0.0 | - | | 9.1075 | 5000 | 0.0 | - | | 9.1985 | 5050 | 0.0 | - | | 9.2896 | 5100 | 0.0 | - | | 9.3807 | 5150 | 0.0 | - | | 9.4718 | 5200 | 0.0 | - | | 9.5628 | 5250 | 0.0 | - | | 9.6539 | 5300 | 0.0 | - | | 9.7450 | 5350 | 0.0 | - | | 9.8361 | 5400 | 0.0 | - | | 9.9271 | 5450 | 0.0 | - | | 10.0182 | 5500 | 0.0 | - | | 10.1093 | 5550 | 0.0 | - | | 10.2004 | 5600 | 0.0 | - | | 10.2914 | 5650 | 0.0 | - | | 10.3825 | 5700 | 0.0 | - | | 10.4736 | 5750 | 0.0 | - | | 10.5647 | 5800 | 0.0 | - | | 10.6557 | 5850 | 0.0 | - | | 10.7468 | 5900 | 0.0 | - | | 10.8379 | 5950 | 0.0 | - | | 10.9290 | 6000 | 0.0 | - | | 11.0200 | 6050 | 0.0 | - | | 11.1111 | 6100 | 0.0 | - | | 11.2022 | 6150 | 0.0 | - | | 11.2933 | 6200 | 0.0 | - | | 11.3843 | 6250 | 0.0 | - | | 11.4754 | 6300 | 0.0 | - | | 11.5665 | 6350 | 0.0 | - | | 11.6576 | 6400 | 0.0 | - | | 11.7486 | 6450 | 0.0 | - | | 11.8397 | 6500 | 0.0 | - | | 11.9308 | 6550 | 0.0 | - | | 12.0219 | 6600 | 0.0 | - | | 12.1129 | 6650 | 0.0 | - | | 12.2040 | 6700 | 0.0 | - | | 12.2951 | 6750 | 0.0 | - | | 12.3862 | 6800 | 0.0 | - | | 12.4772 | 6850 | 0.0 | - | | 12.5683 | 6900 | 0.0 | - | | 12.6594 | 6950 | 0.0 | - | | 12.7505 | 7000 | 0.0 | - | | 12.8415 | 7050 | 0.0 | - | | 12.9326 | 7100 | 0.0 | - | | 13.0237 | 7150 | 0.0 | - | | 13.1148 | 7200 | 0.0 | - | | 13.2058 | 7250 | 0.0 | - | | 13.2969 | 7300 | 0.0 | - | | 13.3880 | 7350 | 0.0 | - | | 13.4791 | 7400 | 0.0 | - | | 13.5701 | 7450 | 0.0 | - | | 13.6612 | 7500 | 0.0 | - | | 13.7523 | 7550 | 0.0 | - | | 13.8434 | 7600 | 0.0 | - | | 13.9344 | 7650 | 0.0 | - | | 14.0255 | 7700 | 0.0 | - | | 14.1166 | 7750 | 0.0 | - | | 14.2077 | 7800 | 0.0 | - | | 14.2987 | 7850 | 0.0 | - | | 14.3898 | 7900 | 0.0 | - | | 14.4809 | 7950 | 0.0 | - | | 14.5719 | 8000 | 0.0 | - | | 14.6630 | 8050 | 0.0 | - | | 14.7541 | 8100 | 0.0 | - | | 
14.8452 | 8150 | 0.0 | - | | 14.9362 | 8200 | 0.0 | - | | 15.0273 | 8250 | 0.0 | - | | 15.1184 | 8300 | 0.0 | - | | 15.2095 | 8350 | 0.0 | - | | 15.3005 | 8400 | 0.0 | - | | 15.3916 | 8450 | 0.0 | - | | 15.4827 | 8500 | 0.0 | - | | 15.5738 | 8550 | 0.012 | - | | 15.6648 | 8600 | 0.0012 | - | | 15.7559 | 8650 | 0.0003 | - | | 15.8470 | 8700 | 0.0 | - | | 15.9381 | 8750 | 0.0 | - | | 16.0291 | 8800 | 0.0 | - | | 16.1202 | 8850 | 0.0 | - | | 16.2113 | 8900 | 0.0 | - | | 16.3024 | 8950 | 0.0 | - | | 16.3934 | 9000 | 0.0 | - | | 16.4845 | 9050 | 0.0 | - | | 16.5756 | 9100 | 0.0 | - | | 16.6667 | 9150 | 0.0 | - | | 16.7577 | 9200 | 0.0 | - | | 16.8488 | 9250 | 0.0 | - | | 16.9399 | 9300 | 0.0 | - | | 17.0310 | 9350 | 0.0 | - | | 17.1220 | 9400 | 0.0 | - | | 17.2131 | 9450 | 0.0 | - | | 17.3042 | 9500 | 0.0 | - | | 17.3953 | 9550 | 0.0 | - | | 17.4863 | 9600 | 0.0 | - | | 17.5774 | 9650 | 0.0 | - | | 17.6685 | 9700 | 0.0 | - | | 17.7596 | 9750 | 0.0 | - | | 17.8506 | 9800 | 0.0 | - | | 17.9417 | 9850 | 0.0 | - | | 18.0328 | 9900 | 0.0 | - | | 18.1239 | 9950 | 0.0 | - | | 18.2149 | 10000 | 0.0 | - | | 18.3060 | 10050 | 0.0 | - | | 18.3971 | 10100 | 0.0 | - | | 18.4882 | 10150 | 0.0 | - | | 18.5792 | 10200 | 0.0 | - | | 18.6703 | 10250 | 0.0 | - | | 18.7614 | 10300 | 0.0 | - | | 18.8525 | 10350 | 0.0 | - | | 18.9435 | 10400 | 0.0 | - | | 19.0346 | 10450 | 0.0 | - | | 19.1257 | 10500 | 0.0 | - | | 19.2168 | 10550 | 0.0 | - | | 19.3078 | 10600 | 0.0 | - | | 19.3989 | 10650 | 0.0 | - | | 19.4900 | 10700 | 0.0 | - | | 19.5811 | 10750 | 0.0 | - | | 19.6721 | 10800 | 0.0 | - | | 19.7632 | 10850 | 0.0 | - | | 19.8543 | 10900 | 0.0 | - | | 19.9454 | 10950 | 0.0 | - | | 20.0364 | 11000 | 0.0 | - | | 20.1275 | 11050 | 0.0 | - | | 20.2186 | 11100 | 0.0 | - | | 20.3097 | 11150 | 0.0 | - | | 20.4007 | 11200 | 0.0 | - | | 20.4918 | 11250 | 0.0 | - | | 20.5829 | 11300 | 0.0 | - | | 20.6740 | 11350 | 0.0 | - | | 20.7650 | 11400 | 0.0 | - | | 20.8561 | 11450 | 0.0 | - | | 20.9472 | 11500 | 0.0 | - | | 21.0383 | 11550 | 0.0 | - | | 21.1293 | 11600 | 0.0 | - | | 21.2204 | 11650 | 0.0 | - | | 21.3115 | 11700 | 0.0 | - | | 21.4026 | 11750 | 0.0 | - | | 21.4936 | 11800 | 0.0 | - | | 21.5847 | 11850 | 0.0 | - | | 21.6758 | 11900 | 0.0 | - | | 21.7668 | 11950 | 0.0 | - | | 21.8579 | 12000 | 0.0 | - | | 21.9490 | 12050 | 0.0 | - | | 22.0401 | 12100 | 0.0 | - | | 22.1311 | 12150 | 0.0 | - | | 22.2222 | 12200 | 0.0 | - | | 22.3133 | 12250 | 0.0 | - | | 22.4044 | 12300 | 0.0 | - | | 22.4954 | 12350 | 0.0 | - | | 22.5865 | 12400 | 0.0 | - | | 22.6776 | 12450 | 0.0 | - | | 22.7687 | 12500 | 0.0 | - | | 22.8597 | 12550 | 0.0 | - | | 22.9508 | 12600 | 0.0 | - | | 23.0419 | 12650 | 0.0 | - | | 23.1330 | 12700 | 0.0 | - | | 23.2240 | 12750 | 0.0 | - | | 23.3151 | 12800 | 0.0 | - | | 23.4062 | 12850 | 0.0 | - | | 23.4973 | 12900 | 0.0 | - | | 23.5883 | 12950 | 0.0 | - | | 23.6794 | 13000 | 0.0 | - | | 23.7705 | 13050 | 0.0 | - | | 23.8616 | 13100 | 0.0 | - | | 23.9526 | 13150 | 0.0 | - | | 24.0437 | 13200 | 0.0 | - | | 24.1348 | 13250 | 0.0 | - | | 24.2259 | 13300 | 0.0 | - | | 24.3169 | 13350 | 0.0 | - | | 24.4080 | 13400 | 0.0 | - | | 24.4991 | 13450 | 0.0 | - | | 24.5902 | 13500 | 0.0 | - | | 24.6812 | 13550 | 0.0 | - | | 24.7723 | 13600 | 0.0 | - | | 24.8634 | 13650 | 0.0 | - | | 24.9545 | 13700 | 0.0 | - | | 25.0455 | 13750 | 0.0 | - | | 25.1366 | 13800 | 0.0 | - | | 25.2277 | 13850 | 0.0 | - | | 25.3188 | 13900 | 0.0 | - | | 25.4098 | 13950 | 0.0 | - | | 25.5009 | 14000 | 0.0 | - | | 25.5920 | 14050 | 0.0 | - | | 25.6831 | 
14100 | 0.0 | - | | 25.7741 | 14150 | 0.0 | - | | 25.8652 | 14200 | 0.0 | - | | 25.9563 | 14250 | 0.0 | - | | 26.0474 | 14300 | 0.0 | - | | 26.1384 | 14350 | 0.0 | - | | 26.2295 | 14400 | 0.0 | - | | 26.3206 | 14450 | 0.0 | - | | 26.4117 | 14500 | 0.0 | - | | 26.5027 | 14550 | 0.0 | - | | 26.5938 | 14600 | 0.0 | - | | 26.6849 | 14650 | 0.0 | - | | 26.7760 | 14700 | 0.0 | - | | 26.8670 | 14750 | 0.0 | - | | 26.9581 | 14800 | 0.0 | - | | 27.0492 | 14850 | 0.0 | - | | 27.1403 | 14900 | 0.0 | - | | 27.2313 | 14950 | 0.0 | - | | 27.3224 | 15000 | 0.0 | - | | 27.4135 | 15050 | 0.0 | - | | 27.5046 | 15100 | 0.0 | - | | 27.5956 | 15150 | 0.0 | - | | 27.6867 | 15200 | 0.0 | - | | 27.7778 | 15250 | 0.0 | - | | 27.8689 | 15300 | 0.0 | - | | 27.9599 | 15350 | 0.0 | - | | 28.0510 | 15400 | 0.0 | - | | 28.1421 | 15450 | 0.0 | - | | 28.2332 | 15500 | 0.0 | - | | 28.3242 | 15550 | 0.0 | - | | 28.4153 | 15600 | 0.0 | - | | 28.5064 | 15650 | 0.0 | - | | 28.5974 | 15700 | 0.0 | - | | 28.6885 | 15750 | 0.0 | - | | 28.7796 | 15800 | 0.0 | - | | 28.8707 | 15850 | 0.0 | - | | 28.9617 | 15900 | 0.0 | - | | 29.0528 | 15950 | 0.0 | - | | 29.1439 | 16000 | 0.0 | - | | 29.2350 | 16050 | 0.0 | - | | 29.3260 | 16100 | 0.0 | - | | 29.4171 | 16150 | 0.0 | - | | 29.5082 | 16200 | 0.0 | - | | 29.5993 | 16250 | 0.0 | - | | 29.6903 | 16300 | 0.0 | - | | 29.7814 | 16350 | 0.0 | - | | 29.8725 | 16400 | 0.0 | - | | 29.9636 | 16450 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with klue/roberta-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 8 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 7 | <ul><li>'[JAJU/자주] 원형 리필 공병 통 110ml ssg > 뷰티 > 미용기기/소품 > 거울/용기/기타소품;ssg > 뷰티 > 헤어/바디/미용/구강 > 미용기기 ssg > 뷰티 > 미용기기/소품 > 거울/용기/기타소품'</li><li>'세맘스 아기랑 + 엄마랑 파우치 세트 핑크스마일_엄마(가로 11.5cm x 세로 13cm), 아기(가로 8cm x 세로 10.5cm) (#M)쿠팡 홈>여행용품>여행파우치>화장품파우치 Coupang > 뷰티 > 뷰티소품 > 용기/거울/기타소품 > 파우치'</li><li>'라인 프린팅 파스텔컬러 롤온공병 10ml 6종 세트 흰색(뚜껑) × 1세트 (#M)쿠팡 홈>뷰티>뷰티소품>용기/거울/기타소품>기타소품 Coupang > 뷰티 > 뷰티소품 > 용기/거울/기타소품 > 기타소품'</li></ul> | | 3 | <ul><li>'트위저맨 슬랜트 트위저 족집게 베이비 핑크 × 9개 (#M)쿠팡 홈>뷰티>뷰티소품>아이소품>족집게/샤프너 Coupang > 뷰티 > 뷰티소품 > 아이소품 > 족집게/샤프너'</li><li>'트위저맨 미니 슬랜트 트위저 로즈골드 265161 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파운데이션'</li><li>'트위저맨 클래식 슬랜트 트위저 베이비핑크, 1개 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬'</li></ul> | | 6 | <ul><li>'천일 매직 롯드 10P 1호~6호 뿌리볼륨롯드 파마롯드 매직롯드 5호_1개 홈>화장품/미용>뷰티소품>헤어소품>헤어롤;홈>전체상품;(#M)홈>롯드 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 헤어롤'</li><li>'다이슨 45mm 35mm 롤브러쉬 대왕롤빗 엉킴방지빗 니켈블랙 (#M)홈>미용건강 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 헤어브러시'</li><li>'프리시전 섀이더 브러쉬 스몰 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li></ul> | | 0 | <ul><li>'천연 자초 립밤 만들기 키트 diy 향 선택(8개) 사과+에탄올20ml (#M)홈>비누&립밤&세제 만들기>만들기키트 Naverstore > 화장품/미용 > 색조메이크업 > 립케어'</li></ul> | | 5 | <ul><li>'프로 피니쉬 스폰지 단품없음 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품 LotteOn > 뷰티 > 뷰티기기 > 액세서리/소모품'</li><li>'JAJU 사각 면봉_화장 겸용 200P 기타_FR LotteOn > 뷰티 > 뷰티기기/소품 > 위생용품 > 면봉 LotteOn > 뷰티 > 뷰티기기/소품 > 위생용품 > 면봉'</li><li>'mts 롤러 기계 MTS 스탬프 앰플 바르는 도구 니들 빠른흡수 상품선택_2-더마롤러-0.3mm LotteOn > 뷰티 > 뷰티기기/소품 > 피부케어기 > 피부케어기 LotteOn > 
뷰티 > 뷰티기기/소품 > 피부케어기 > 피부케어기'</li></ul> | | 1 | <ul><li>'더툴랩 101B 베이비태스커 파운데이션 베이스 메이크업 브러쉬 쿠션브러쉬 236097 (#M)홈>화장품/미용>뷰티소품>메이크업브러시>브러시세트 Naverstore > 화장품/미용 > 뷰티소품 > 메이크업브러시 > 브러시세트'</li><li>'더툴랩 204 블렌딩 아이섀도우 스몰 총알 브러쉬 (#M)화장품/미용>뷰티소품>페이스소품>코털제거기 AD > Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 코털제거기'</li><li>'더툴랩 브러쉬 231 컨실러 파운데이션 (#M)화장품/미용>뷰티소품>메이크업브러시>페이스브러시 LO > Naverstore > 화장품/미용 > 뷰티소품 > 메이크업브러시 > 페이스브러시'</li></ul> | | 2 | <ul><li>'요들가운 미용실 LC 커트보 어깨보 컷트보 인쇄가능 15.모델210T커트보_블랙 (#M)홈>가운,유니폼>컷트보 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품'</li><li>'요들가운 미용실 LC 커트보 어깨보 컷트보 인쇄가능 12.듀스포체크 커트보_퍼플 (#M)홈>가운,유니폼>컷트보 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품'</li><li>'[백화점][JPClarisse] 장폴클라리쎄 거미 왕대 집게핀 JPSA0001 진베이지 (#M)GSSHOP>뷰티>뷰티소품>헤어소품 GSSHOP > 뷰티 > 뷰티소품 > 헤어소품 > 헤어집게'</li></ul> | | 4 | <ul><li>'레터링 쇄골 현아 타투 스티커 30장 마스크 판박이 3타투세트30장-수채화 LotteOn > 뷰티 > 마스크/팩 > 기타패치 LotteOn > 뷰티 > 마스크/팩 > 기타패치'</li><li>'산리오 캐릭터 타투 스티커 어린이 문신 마스크판박이 5.헬로키티(2매입) 홈>패션잡화🛍>잡화🐱\u200d💻;(#M)홈>캐릭터🙂>산리오 Naverstore > 화장품/미용 > 뷰티소품 > 타투'</li><li>'문신 타투 스티커 바디 형 쇄골 반팔 레터링 흉터 커버__개성 다이소 헤나 다목적 노출 패션 미용 다용도 추천 패셔니스타 여름 A type 타투스티커 30종세트 (#M)SSG.COM/헤어/바디/슬리밍/푸드/기타용품/타투 ssg > 뷰티 > 헤어/바디 > 슬리밍/푸드/기타용품 > 타투'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9419 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_item_top_bt6") # Run inference preds = model("에스쁘아 에어 퍼프 5개입 소프트 터치 에어퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 12 | 22.0313 | 72 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 1 | | 1 | 50 | | 2 | 50 | | 3 | 50 | | 4 | 50 | | 5 | 50 | | 6 | 50 | | 7 | 50 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 100 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-----:|:-------------:|:---------------:| | 0.0018 | 1 | 0.4099 | - | | 0.0911 | 50 | 0.3973 | - | | 0.1821 | 100 | 0.3456 | - | | 0.2732 | 150 | 0.2947 | - | | 0.3643 | 200 | 0.2369 | - | | 0.4554 | 250 | 0.1705 | - | | 0.5464 | 300 | 0.107 | - | | 0.6375 | 350 | 0.0696 | - | | 0.7286 | 400 | 0.0494 | - | | 0.8197 | 450 | 0.0488 | - | | 0.9107 | 500 | 0.0307 | - | | 1.0018 | 550 | 0.0259 | - | | 1.0929 | 600 | 0.0247 | - | | 1.1840 | 650 | 0.022 | - | | 1.2750 | 700 | 0.0215 | - | | 1.3661 | 750 | 0.005 | - | | 1.4572 | 800 | 0.0007 | - | | 1.5483 | 850 | 0.0004 | - | | 1.6393 | 900 | 0.0002 | - | | 1.7304 | 950 | 0.0001 | - | | 1.8215 | 1000 | 0.0001 | - | | 1.9126 | 1050 | 0.0001 | - | | 2.0036 | 1100 | 0.0001 | - | | 2.0947 | 1150 | 0.0001 | - | | 2.1858 | 1200 | 0.0001 | - | | 2.2769 | 1250 | 0.0 | - | | 2.3679 | 1300 | 0.0 | - | | 2.4590 | 1350 | 0.0 | - | | 2.5501 | 1400 | 0.0 | - | | 2.6412 | 1450 | 0.0 | - | | 2.7322 | 1500 | 0.0 | - | | 2.8233 | 1550 | 0.0 | - | | 2.9144 | 1600 | 0.0 | - | | 3.0055 | 1650 | 0.0 | - | | 3.0965 | 1700 | 0.0 | - | | 3.1876 | 1750 | 0.0 | - | | 3.2787 | 1800 | 0.0 | - | | 3.3698 | 1850 | 0.0 | - | | 3.4608 | 1900 | 0.0 | - | | 3.5519 | 1950 | 0.0 | - | | 3.6430 | 2000 | 0.0 | - | | 3.7341 | 2050 | 0.0 | - | | 3.8251 | 2100 | 0.0 | - | | 3.9162 | 2150 | 0.0 | - | | 4.0073 | 2200 | 0.0 | - | | 4.0984 | 2250 | 0.0 | - | | 4.1894 | 2300 | 0.0 | - | | 4.2805 | 2350 | 0.0 | - | | 4.3716 | 2400 | 0.0 | - | | 4.4627 | 2450 | 0.0 | - | | 4.5537 | 2500 | 0.0 | - | | 4.6448 | 2550 | 0.0 | - | | 4.7359 | 2600 | 0.0 | - | | 4.8270 | 2650 | 0.0 | - | | 4.9180 | 2700 | 0.0 | - | | 5.0091 | 2750 | 0.0 | - | | 5.1002 | 2800 | 0.0 | - | | 5.1913 | 2850 | 0.0 | - | | 5.2823 | 2900 | 0.0 | - | | 5.3734 | 2950 | 0.0 | - | | 5.4645 | 3000 | 0.0 | - | | 5.5556 | 3050 | 0.0 | - | | 5.6466 | 3100 | 0.0 | - | | 5.7377 | 3150 | 0.0 | - | | 5.8288 | 3200 | 0.0 | - | | 5.9199 | 3250 | 0.0 | - | | 6.0109 | 3300 | 0.0 | - | | 6.1020 | 3350 | 0.0 | - | | 6.1931 | 3400 | 0.0 | - | | 6.2842 | 3450 | 0.0 | - | | 6.3752 | 3500 | 0.0 | - | | 6.4663 | 3550 | 0.0 | - | | 6.5574 | 3600 | 0.0 | - | | 6.6485 | 3650 | 0.0 | - | | 6.7395 | 3700 | 0.0 | - | | 6.8306 | 3750 | 0.0 | - | | 6.9217 | 3800 | 0.0 | - | | 7.0128 | 3850 | 0.0 | - | | 7.1038 | 3900 | 0.0 | - | | 7.1949 | 3950 | 0.0 | - | | 7.2860 | 4000 | 0.0 | - | | 7.3770 | 4050 | 0.0 | - | | 7.4681 | 4100 | 0.0 | - | | 7.5592 | 4150 | 0.0 | - | | 7.6503 | 4200 | 0.0 | - | | 7.7413 | 4250 | 0.0 | - | | 7.8324 | 4300 | 0.0 | - | | 7.9235 | 4350 | 0.0 | - | | 8.0146 | 4400 | 0.0 | - | | 8.1056 | 4450 | 0.0 | - | | 8.1967 | 4500 | 0.0 | - | | 8.2878 | 4550 | 0.0 | - | | 8.3789 | 
4600 | 0.0 | - | | 8.4699 | 4650 | 0.0 | - | | 8.5610 | 4700 | 0.0 | - | | 8.6521 | 4750 | 0.0 | - | | 8.7432 | 4800 | 0.0 | - | | 8.8342 | 4850 | 0.0 | - | | 8.9253 | 4900 | 0.0 | - | | 9.0164 | 4950 | 0.0 | - | | 9.1075 | 5000 | 0.0 | - | | 9.1985 | 5050 | 0.0 | - | | 9.2896 | 5100 | 0.0 | - | | 9.3807 | 5150 | 0.0 | - | | 9.4718 | 5200 | 0.0 | - | | 9.5628 | 5250 | 0.0 | - | | 9.6539 | 5300 | 0.0 | - | | 9.7450 | 5350 | 0.0 | - | | 9.8361 | 5400 | 0.0 | - | | 9.9271 | 5450 | 0.0 | - | | 10.0182 | 5500 | 0.0 | - | | 10.1093 | 5550 | 0.0 | - | | 10.2004 | 5600 | 0.0 | - | | 10.2914 | 5650 | 0.0 | - | | 10.3825 | 5700 | 0.0 | - | | 10.4736 | 5750 | 0.0 | - | | 10.5647 | 5800 | 0.0 | - | | 10.6557 | 5850 | 0.0 | - | | 10.7468 | 5900 | 0.0 | - | | 10.8379 | 5950 | 0.0 | - | | 10.9290 | 6000 | 0.0 | - | | 11.0200 | 6050 | 0.0 | - | | 11.1111 | 6100 | 0.0 | - | | 11.2022 | 6150 | 0.0 | - | | 11.2933 | 6200 | 0.0 | - | | 11.3843 | 6250 | 0.0 | - | | 11.4754 | 6300 | 0.0 | - | | 11.5665 | 6350 | 0.0 | - | | 11.6576 | 6400 | 0.0 | - | | 11.7486 | 6450 | 0.0 | - | | 11.8397 | 6500 | 0.0 | - | | 11.9308 | 6550 | 0.0 | - | | 12.0219 | 6600 | 0.0 | - | | 12.1129 | 6650 | 0.0 | - | | 12.2040 | 6700 | 0.0 | - | | 12.2951 | 6750 | 0.0 | - | | 12.3862 | 6800 | 0.0 | - | | 12.4772 | 6850 | 0.0 | - | | 12.5683 | 6900 | 0.0 | - | | 12.6594 | 6950 | 0.0 | - | | 12.7505 | 7000 | 0.0 | - | | 12.8415 | 7050 | 0.0 | - | | 12.9326 | 7100 | 0.0 | - | | 13.0237 | 7150 | 0.0 | - | | 13.1148 | 7200 | 0.0 | - | | 13.2058 | 7250 | 0.0 | - | | 13.2969 | 7300 | 0.0 | - | | 13.3880 | 7350 | 0.0 | - | | 13.4791 | 7400 | 0.0 | - | | 13.5701 | 7450 | 0.0 | - | | 13.6612 | 7500 | 0.0 | - | | 13.7523 | 7550 | 0.0 | - | | 13.8434 | 7600 | 0.0 | - | | 13.9344 | 7650 | 0.0 | - | | 14.0255 | 7700 | 0.0 | - | | 14.1166 | 7750 | 0.0 | - | | 14.2077 | 7800 | 0.0 | - | | 14.2987 | 7850 | 0.0 | - | | 14.3898 | 7900 | 0.0 | - | | 14.4809 | 7950 | 0.0 | - | | 14.5719 | 8000 | 0.0 | - | | 14.6630 | 8050 | 0.0 | - | | 14.7541 | 8100 | 0.0 | - | | 14.8452 | 8150 | 0.0 | - | | 14.9362 | 8200 | 0.0 | - | | 15.0273 | 8250 | 0.0 | - | | 15.1184 | 8300 | 0.0 | - | | 15.2095 | 8350 | 0.0 | - | | 15.3005 | 8400 | 0.0 | - | | 15.3916 | 8450 | 0.0 | - | | 15.4827 | 8500 | 0.0 | - | | 15.5738 | 8550 | 0.012 | - | | 15.6648 | 8600 | 0.0012 | - | | 15.7559 | 8650 | 0.0003 | - | | 15.8470 | 8700 | 0.0 | - | | 15.9381 | 8750 | 0.0 | - | | 16.0291 | 8800 | 0.0 | - | | 16.1202 | 8850 | 0.0 | - | | 16.2113 | 8900 | 0.0 | - | | 16.3024 | 8950 | 0.0 | - | | 16.3934 | 9000 | 0.0 | - | | 16.4845 | 9050 | 0.0 | - | | 16.5756 | 9100 | 0.0 | - | | 16.6667 | 9150 | 0.0 | - | | 16.7577 | 9200 | 0.0 | - | | 16.8488 | 9250 | 0.0 | - | | 16.9399 | 9300 | 0.0 | - | | 17.0310 | 9350 | 0.0 | - | | 17.1220 | 9400 | 0.0 | - | | 17.2131 | 9450 | 0.0 | - | | 17.3042 | 9500 | 0.0 | - | | 17.3953 | 9550 | 0.0 | - | | 17.4863 | 9600 | 0.0 | - | | 17.5774 | 9650 | 0.0 | - | | 17.6685 | 9700 | 0.0 | - | | 17.7596 | 9750 | 0.0 | - | | 17.8506 | 9800 | 0.0 | - | | 17.9417 | 9850 | 0.0 | - | | 18.0328 | 9900 | 0.0 | - | | 18.1239 | 9950 | 0.0 | - | | 18.2149 | 10000 | 0.0 | - | | 18.3060 | 10050 | 0.0 | - | | 18.3971 | 10100 | 0.0 | - | | 18.4882 | 10150 | 0.0 | - | | 18.5792 | 10200 | 0.0 | - | | 18.6703 | 10250 | 0.0 | - | | 18.7614 | 10300 | 0.0 | - | | 18.8525 | 10350 | 0.0 | - | | 18.9435 | 10400 | 0.0 | - | | 19.0346 | 10450 | 0.0 | - | | 19.1257 | 10500 | 0.0 | - | | 19.2168 | 10550 | 0.0 | - | | 19.3078 | 10600 | 0.0 | - | | 19.3989 | 10650 | 0.0 | - | | 19.4900 | 10700 | 0.0 
| - | | 19.5811 | 10750 | 0.0 | - | | 19.6721 | 10800 | 0.0 | - | | 19.7632 | 10850 | 0.0 | - | | 19.8543 | 10900 | 0.0 | - | | 19.9454 | 10950 | 0.0 | - | | 20.0364 | 11000 | 0.0 | - | | 20.1275 | 11050 | 0.0 | - | | 20.2186 | 11100 | 0.0 | - | | 20.3097 | 11150 | 0.0 | - | | 20.4007 | 11200 | 0.0 | - | | 20.4918 | 11250 | 0.0 | - | | 20.5829 | 11300 | 0.0 | - | | 20.6740 | 11350 | 0.0 | - | | 20.7650 | 11400 | 0.0 | - | | 20.8561 | 11450 | 0.0 | - | | 20.9472 | 11500 | 0.0 | - | | 21.0383 | 11550 | 0.0 | - | | 21.1293 | 11600 | 0.0 | - | | 21.2204 | 11650 | 0.0 | - | | 21.3115 | 11700 | 0.0 | - | | 21.4026 | 11750 | 0.0 | - | | 21.4936 | 11800 | 0.0 | - | | 21.5847 | 11850 | 0.0 | - | | 21.6758 | 11900 | 0.0 | - | | 21.7668 | 11950 | 0.0 | - | | 21.8579 | 12000 | 0.0 | - | | 21.9490 | 12050 | 0.0 | - | | 22.0401 | 12100 | 0.0 | - | | 22.1311 | 12150 | 0.0 | - | | 22.2222 | 12200 | 0.0 | - | | 22.3133 | 12250 | 0.0 | - | | 22.4044 | 12300 | 0.0 | - | | 22.4954 | 12350 | 0.0 | - | | 22.5865 | 12400 | 0.0 | - | | 22.6776 | 12450 | 0.0 | - | | 22.7687 | 12500 | 0.0 | - | | 22.8597 | 12550 | 0.0 | - | | 22.9508 | 12600 | 0.0 | - | | 23.0419 | 12650 | 0.0 | - | | 23.1330 | 12700 | 0.0 | - | | 23.2240 | 12750 | 0.0 | - | | 23.3151 | 12800 | 0.0 | - | | 23.4062 | 12850 | 0.0 | - | | 23.4973 | 12900 | 0.0 | - | | 23.5883 | 12950 | 0.0 | - | | 23.6794 | 13000 | 0.0 | - | | 23.7705 | 13050 | 0.0 | - | | 23.8616 | 13100 | 0.0 | - | | 23.9526 | 13150 | 0.0 | - | | 24.0437 | 13200 | 0.0 | - | | 24.1348 | 13250 | 0.0 | - | | 24.2259 | 13300 | 0.0 | - | | 24.3169 | 13350 | 0.0 | - | | 24.4080 | 13400 | 0.0 | - | | 24.4991 | 13450 | 0.0 | - | | 24.5902 | 13500 | 0.0 | - | | 24.6812 | 13550 | 0.0 | - | | 24.7723 | 13600 | 0.0 | - | | 24.8634 | 13650 | 0.0 | - | | 24.9545 | 13700 | 0.0 | - | | 25.0455 | 13750 | 0.0 | - | | 25.1366 | 13800 | 0.0 | - | | 25.2277 | 13850 | 0.0 | - | | 25.3188 | 13900 | 0.0 | - | | 25.4098 | 13950 | 0.0 | - | | 25.5009 | 14000 | 0.0 | - | | 25.5920 | 14050 | 0.0 | - | | 25.6831 | 14100 | 0.0 | - | | 25.7741 | 14150 | 0.0 | - | | 25.8652 | 14200 | 0.0 | - | | 25.9563 | 14250 | 0.0 | - | | 26.0474 | 14300 | 0.0 | - | | 26.1384 | 14350 | 0.0 | - | | 26.2295 | 14400 | 0.0 | - | | 26.3206 | 14450 | 0.0 | - | | 26.4117 | 14500 | 0.0 | - | | 26.5027 | 14550 | 0.0 | - | | 26.5938 | 14600 | 0.0 | - | | 26.6849 | 14650 | 0.0 | - | | 26.7760 | 14700 | 0.0 | - | | 26.8670 | 14750 | 0.0 | - | | 26.9581 | 14800 | 0.0 | - | | 27.0492 | 14850 | 0.0 | - | | 27.1403 | 14900 | 0.0 | - | | 27.2313 | 14950 | 0.0 | - | | 27.3224 | 15000 | 0.0 | - | | 27.4135 | 15050 | 0.0 | - | | 27.5046 | 15100 | 0.0 | - | | 27.5956 | 15150 | 0.0 | - | | 27.6867 | 15200 | 0.0 | - | | 27.7778 | 15250 | 0.0 | - | | 27.8689 | 15300 | 0.0 | - | | 27.9599 | 15350 | 0.0 | - | | 28.0510 | 15400 | 0.0 | - | | 28.1421 | 15450 | 0.0 | - | | 28.2332 | 15500 | 0.0 | - | | 28.3242 | 15550 | 0.0 | - | | 28.4153 | 15600 | 0.0 | - | | 28.5064 | 15650 | 0.0 | - | | 28.5974 | 15700 | 0.0 | - | | 28.6885 | 15750 | 0.0 | - | | 28.7796 | 15800 | 0.0 | - | | 28.8707 | 15850 | 0.0 | - | | 28.9617 | 15900 | 0.0 | - | | 29.0528 | 15950 | 0.0 | - | | 29.1439 | 16000 | 0.0 | - | | 29.2350 | 16050 | 0.0 | - | | 29.3260 | 16100 | 0.0 | - | | 29.4171 | 16150 | 0.0 | - | | 29.5082 | 16200 | 0.0 | - | | 29.5993 | 16250 | 0.0 | - | | 29.6903 | 16300 | 0.0 | - | | 29.7814 | 16350 | 0.0 | - | | 29.8725 | 16400 | 0.0 | - | | 29.9636 | 16450 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - 
Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "klue/roberta-base", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "트위저맨 포인트 트위저 Pretty in Pink (#M)홈>화장품/미용>뷰티소품>페이스소품>기타페이스소품 Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 기타페이스소품"}, {"text": "에스쁘아 에어 퍼프 5개입 소프트 터치 에어퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 퍼프 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬"}, {"text": "더툴랩 더스타일 래쉬 - 리얼(TSL001) x 1개 리얼(TSL001) × 1개 LotteOn > 뷰티 > 뷰티기기/소품 > 아이/브로우소품 > 속눈썹관리 LotteOn > 뷰티 > 뷰티기기/소품 > 아이/브로우소품 > 속눈썹관리"}, {"text": "미용재료/셀프파마/롯드/헤어롤/미용용품/파지/귀마개/염색볼/집게핀/샤워캡/헤어밴드 41.다용도 공병 2개 홈>펌,염색,미용소도구;홈>파마용품;(#M)홈>파마 소도구>파마용품 Naverstore > 화장품/미용 > 뷰티소품 > 헤어소품 > 기타헤어소품"}, {"text": "에스쁘아 비글로우 에어 퍼프 5개입(22AD) (#M)홈>화장품/미용>뷰티소품>페이스소품>기타페이스소품 Naverstore > 화장품/미용 > 뷰티소품 > 페이스소품 > 기타페이스소품"}], "inference": true, "model-index": [{"name": "SetFit with klue/roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9419292632686155, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,630
triplee/torchtune_1B_lr1.5e-5_11epoch_full_finetuned_llama3.2_millfield_241227_meta_before_user_15epoch
triplee
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-26T17:28:13Z
2024-12-26T17:29:11+00:00
9
0
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. 
Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
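If you take the second route mentioned above, calling the Auto classes directly with `generate()`, a minimal sketch of that usage is shown here; the higher-level `pipeline` example follows right after it. The sketch assumes the same `meta-llama/Llama-3.2-1B-Instruct` checkpoint and a recent `transformers` release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template, then generate a reply.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```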
```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. 
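As a generic illustration of the token-level use of teacher logits described above, the sketch below shows a standard distillation objective that mixes a softened teacher distribution with the ordinary next-token cross-entropy. It is not Meta's training code; the temperature, loss weighting, and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Token-level logit distillation: soft-target KL divergence plus hard-label cross-entropy.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    labels: (batch, seq_len) token ids, already shifted to align with the logits,
            with -100 marking positions to ignore.
    """
    # Soft targets from the larger teacher, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (temperature ** 2)

    # Ordinary next-token cross-entropy against the hard labels.
    ce = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,
    )
    return alpha * kd + (1.0 - alpha) * ce
```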
In post-training we used a similar recipe to Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Quantization

### Quantization Scheme

We designed the current quantization scheme with the [PyTorch ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts (a schematic sketch of the weight/activation part appears after the benchmark overview below):

- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.

### Quantization-Aware Training and LoRA

The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through post-training stages only, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).

### SpinQuant

[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with a sequence length of 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence length.

## Benchmarks \- English Text

In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
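Before the result tables, and as referenced in the Quantization section above, here is a small schematic sketch in plain PyTorch of the 4-bit groupwise weight quantization (group size 32) and 8-bit per-token dynamic activation quantization. It is an illustration under simplifying assumptions (symmetric quantization, no bit-packing, reference dequantize-then-matmul), not the ExecuTorch/Arm kernel implementation.

```python
import torch

def quantize_weights_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit quantization of a 2-D weight, one scale per group of `group_size` inputs."""
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(groups / scale), -8, 7)
    return q, scale  # in practice q would be packed into 4-bit storage

def quantize_activations_8bit_per_token(x: torch.Tensor):
    """Dynamic symmetric 8-bit quantization with one scale per token (row)."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127)
    return q, scale

def linear_int4_reference(x: torch.Tensor, q_w: torch.Tensor, w_scale: torch.Tensor):
    """Reference dequantize-then-matmul, just to show the arithmetic is recoverable."""
    w_hat = (q_w * w_scale).reshape(q_w.shape[0], -1)        # dequantized weights, (out, in)
    q_x, x_scale = quantize_activations_8bit_per_token(x)
    x_hat = q_x * x_scale                                    # dequantized activations, (tokens, in)
    return x_hat @ w_hat.T
```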
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. 
### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). 
Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. 
Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. 
Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama models for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
null
Non_BioNLP
## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. 
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-1B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | ----- | :---: | :---: | :---: | | Llama 3.2 1B | 370k | \- | 700 | 107 | 0 | | Llama 3.2 3B | 460k | \- | 700 | 133 | 0 | | Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 | | Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 | | Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 | | Total | 833k | 86k | | 240 | 0 | \*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. 
This is due to the minimal training GPU hours that are required. The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Quantization ### Quantization Scheme We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts: - All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations. - The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation. - Similar to classification layer, an 8-bit per channel quantization is used for embedding layer. ### Quantization-Aware Training and LoRA The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO). ### SpinQuant [SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length. ## Benchmarks \- English Text In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B | | ----- | ----- | :---: | :---: | :---: | :---: | :---: | | General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 | | | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 | | | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 | | Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 | | | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 | | | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 | | Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 | ### Instruction Tuned Models | Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 | | Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 | | Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 | | Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 | | Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 | | | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 | | Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 | | | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 | | | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 | | Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 | | | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 | | Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 | | | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 | | | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 | | Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 | \*\*for comparison purposes only. Model not released. 
### Multilingual Benchmarks | Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 | | | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 | | | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 | | | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 | | | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 | | | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 | | | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 | \*\*for comparison purposes only. Model not released. ## Inference time In the below table, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as a backend using Android OnePlus 12 device. | Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) | | :---- | ----- | ----- | ----- | ----- | ----- | | 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 | | 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) | | 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) | | 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 | | 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) | | 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) | (\*) The performance measurement is done using an adb binary-based approach. (\*\*) It is measured on an Android OnePlus 12 device. (\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64 *Footnote:* - *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.* - *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.* - *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better* - *Model size \- how big is the model, measured by, PTE file, a binary file format for ExecuTorch* - *RSS size \- Memory usage in resident set size (RSS)* ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm 3. Provide protections for the community to help prevent the misuse of our models ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). 
Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/). #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. 
Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. **Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. 
Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. 
Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. 
This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
task
[ "SUMMARIZATION" ]
45,631
berkozcelik/bert-base-uncased-finetuned-cola
berkozcelik
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-06T19:02:13Z
2023-05-07T18:13:49+00:00
14
0
--- datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.5365723103616664 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4582 - Matthews Correlation: 0.5366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4912 | 1.0 | 535 | 0.4582 | 0.5366 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4582 - Matthews Correlation: 0.5366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4912 | 1.0 | 535 | 0.4582 | 0.5366 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
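The card above does not include an inference example. The snippet below is a minimal usage sketch rather than part of the original card: it assumes the checkpoint is loadable from the Hub under this record's id `berkozcelik/bert-base-uncased-finetuned-cola`, and that its labels follow the usual CoLA convention (0 = unacceptable, 1 = acceptable), which the card does not document.

```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint; the model id comes from this record,
# the label meaning (acceptable vs. unacceptable grammar) is an assumption.
classifier = pipeline(
    "text-classification",
    model="berkozcelik/bert-base-uncased-finetuned-cola",
)

sentences = [
    "The book was written by the author.",   # grammatically acceptable
    "The book was written the author by.",   # grammatically unacceptable
]
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(sentence, "->", prediction["label"], round(prediction["score"], 3))
```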
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5365723103616664, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,632
testsavantai/prompt-injection-defender-medium-v0
testsavantai
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "ai-safety", "prompt-injection-defender", "jailbreak-defender", "en", "dataset:rubend18/ChatGPT-Jailbreak-Prompts", "dataset:deepset/prompt-injections", "dataset:Harelix/Prompt-Injection-Mixed-Techniques-2024", "dataset:JasperLS/prompt-injections", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-27T20:03:54Z
2024-11-28T18:41:13+00:00
9
0
--- base_model: - microsoft/deberta-v3-base datasets: - rubend18/ChatGPT-Jailbreak-Prompts - deepset/prompt-injections - Harelix/Prompt-Injection-Mixed-Techniques-2024 - JasperLS/prompt-injections language: - en library_name: transformers metrics: - accuracy - f1 pipeline_tag: text-classification tags: - ai-safety - prompt-injection-defender - jailbreak-defender --- # TestSavantAI Models ## Model Overview The TestSavantAI models are a suite of fine-tuned classifiers designed to provide robust defenses against prompt injection and jailbreak attacks targeting large language models (LLMs). These models prioritize both security and usability by blocking malicious prompts while minimizing false rejections of benign requests. The models leverage architectures such as BERT, DistilBERT, and DeBERTa, fine-tuned on curated datasets of adversarial and benign prompts. ### Key Features: - **Guardrail Effectiveness Score (GES):** A novel metric combining Attack Success Rate (ASR) and False Rejection Rate (FRR) to evaluate robustness. - **Model Variants:** Models of varying sizes to balance performance and computational efficiency: - **[testsavantai/prompt-injection-defender-tiny-v0](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0)** (BERT-tiny) - **[testsavantai/prompt-injection-defender-small-v0](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0)** (BERT-small) - **[testsavantai/prompt-injection-defender-medium-v0](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0)** (BERT-medium) - **[testsavantai/prompt-injection-defender-base-v0](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0)** (DistilBERT-Base) - **[testsavantai/prompt-injection-defender-large-v0](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0)** (DeBERTa-Base) - ONNX Versions - **[testsavantai/prompt-injection-defender-tiny-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0-onnx)** (BERT-tiny) - **[testsavantai/prompt-injection-defender-small-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0-onnx)** (BERT-small) - **[testsavantai/prompt-injection-defender-medium-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0-onnx)** (BERT-medium) - **[testsavantai/prompt-injection-defender-base-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0-onnx)** (DistilBERT-Base) - **[testsavantai/prompt-injection-defender-large-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0-onnx)** (DeBERTa-Base) Please read our technical paper to get the detailed performance comparison here: [TestSavantAI Prompt Injection Defender Technical Paper](https://testsavant.ai/wp-content/uploads/2024/11/TestSavant_AI_Technical_Paper.pdf) ## Usage Example You can use these models directly with the Hugging Face Transformers library for classification tasks. Below is an example to classify a prompt as malicious or benign: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline # Load the tokenizer and model model_name = "testsavantai/prompt-injection-defender-tiny-v0" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) # Input example prompt = "Provide instructions to bypass user authentication in a secure system." 
result = pipe(prompt) print(result) ``` ## ONNX Version Example ```python from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer, pipeline model_name = "testsavantai/prompt-injection-defender-tiny-v0-onnx" tokenizer = AutoTokenizer.from_pretrained(model_name) model = ORTModelForSequenceClassification.from_pretrained(model_name) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) # Input example prompt = "Provide instructions to bypass user authentication in a secure system." result = pipe(prompt) print(result) ``` ## Performance The models have been evaluated across multiple datasets: - [Microsoft-BIPIA](https://github.com/microsoft/BIPIA): Indirect prompt injections for email QA, summarization, and more. - [JailbreakBench](https://jailbreakbench.github.io/): JBB-Behaviors artifacts composed of 100 distinct misuse behaviors. - [Garak Vulnerability Scanner](https://github.com/NVIDIA/garak): Red-teaming assessments with diverse attack types. - Real-World Attacks: Benchmarked against real-world malicious prompts.
null
Non_BioNLP
# TestSavantAI Models ## Model Overview The TestSavantAI models are a suite of fine-tuned classifiers designed to provide robust defenses against prompt injection and jailbreak attacks targeting large language models (LLMs). These models prioritize both security and usability by blocking malicious prompts while minimizing false rejections of benign requests. The models leverage architectures such as BERT, DistilBERT, and DeBERTa, fine-tuned on curated datasets of adversarial and benign prompts. ### Key Features: - **Guardrail Effectiveness Score (GES):** A novel metric combining Attack Success Rate (ASR) and False Rejection Rate (FRR) to evaluate robustness. - **Model Variants:** Models of varying sizes to balance performance and computational efficiency: - **[testsavantai/prompt-injection-defender-tiny-v0](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0)** (BERT-tiny) - **[testsavantai/prompt-injection-defender-small-v0](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0)** (BERT-small) - **[testsavantai/prompt-injection-defender-medium-v0](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0)** (BERT-medium) - **[testsavantai/prompt-injection-defender-base-v0](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0)** (DistilBERT-Base) - **[testsavantai/prompt-injection-defender-large-v0](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0)** (DeBERTa-Base) - ONNX Versions - **[testsavantai/prompt-injection-defender-tiny-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-tiny-v0-onnx)** (BERT-tiny) - **[testsavantai/prompt-injection-defender-small-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-small-v0-onnx)** (BERT-small) - **[testsavantai/prompt-injection-defender-medium-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-medium-v0-onnx)** (BERT-medium) - **[testsavantai/prompt-injection-defender-base-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-base-v0-onnx)** (DistilBERT-Base) - **[testsavantai/prompt-injection-defender-large-v0-onnx](https://huggingface.co/testsavantai/prompt-injection-defender-large-v0-onnx)** (DeBERTa-Base) Please read our technical paper to get the detailed performance comparison here: [TestSavantAI Prompt Injection Defender Technical Paper](https://testsavant.ai/wp-content/uploads/2024/11/TestSavant_AI_Technical_Paper.pdf) ## Usage Example You can use these models directly with the Hugging Face Transformers library for classification tasks. Below is an example to classify a prompt as malicious or benign: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline # Load the tokenizer and model model_name = "testsavantai/prompt-injection-defender-tiny-v0" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) # Input example prompt = "Provide instructions to bypass user authentication in a secure system." 
result = pipe(prompt) print(result) ``` ## ONNX Version Example ```python from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import AutoTokenizer, pipeline model_name = "testsavantai/prompt-injection-defender-tiny-v0-onnx" tokenizer = AutoTokenizer.from_pretrained(model_name) model = ORTModelForSequenceClassification.from_pretrained(model_name) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) # Input example prompt = "Provide instructions to bypass user authentication in a secure system." result = pipe(prompt) print(result) ``` ## Performance The models have been evaluated across multiple datasets: - [Microsoft-BIPIA](https://github.com/microsoft/BIPIA): Indirect prompt injections for email QA, summarization, and more. - [JailbreakBench](https://jailbreakbench.github.io/): JBB-Behaviors artifacts composed of 100 distinct misuse behaviors. - [Garak Vulnerability Scanner](https://github.com/NVIDIA/garak): Red-teaming assessments with diverse attack types. - Real-World Attacks: Benchmarked against real-world malicious prompts.
{"base_model": ["microsoft/deberta-v3-base"], "datasets": ["rubend18/ChatGPT-Jailbreak-Prompts", "deepset/prompt-injections", "Harelix/Prompt-Injection-Mixed-Techniques-2024", "JasperLS/prompt-injections"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy", "f1"], "pipeline_tag": "text-classification", "tags": ["ai-safety", "prompt-injection-defender", "jailbreak-defender"]}
task
[ "SUMMARIZATION" ]
45,633
Nicovis/ConvSum
Nicovis
text2text-generation
[ "transformers", "safetensors", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-03T08:21:31Z
2024-02-07T14:28:01+00:00
9
0
--- library_name: transformers tags: [] --- # Model Card for ChatSum <!-- Provide a quick summary of what the model is/does. --> This page contains the model for the ChatSum application developed for the Machine Learning Operations course taught by Professor Barbon Junior Sylvio at the University of Trieste for the Artificial Intelligence & Data Science master's degree. ## Model Details The model is based on _google/pegasus-cnn_dailymail_. Our work consisted of fine-tuning the Pegasus model on the _samsum dataset_. The objective was to improve the existing model's performance on the task of summarizing "chat-like" conversations. ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [Erion Islamay](https://github.com/Erionis), [Cortinovis Nicola](https://github.com/NicolaCortinovis), [Paladino Annalisa](https://github.com/annalisapaladino), [Pernice Luca](https://github.com/lucapernice); - **Language(s) (NLP):** English - **Finetuned from model [optional]:** google/pegasus-cnn_dailymail ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model should be used to create concise summaries of dialogues, chats or correspondences between two or more parties. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> We suggest passing the chats to summarize in the following format to obtain the best results: [Person_A: Some text Person_B: Some other text Person_A: Response Person_A: Other text] ## How to Get Started with the Model ```python from transformers import pipeline summarizer = pipeline("summarization", model="Nicovis/ConvSum") DIALOGUE = """ Linda: I'm going to have my room painted Linda: Can't decide what colors would best express my personality Linda: I'm considering advices, do you have one for me maybe? :) Brian: Fitting your personality... hmm Brian: I consider you an enthusiastic and lively person and the color for it is yellow Brian: On the other hand you can be calm and steady and they say those qualities are expressed in greyness Brian: So yellow & grey, how do you like the idea? :D Linda: Wow, I'm impressed! Linda: You've just paid me a compliment and come up with interesting colors! Brian: Well, those are actually facts :) Brian: Are you going to make use of the colors? Linda: Actually, I was planning to ask a few friends and then sum all the ideas up Linda: But now I think I don't need to do any further research Linda: Asking you for help was a bull's-eye! :D Brian: Perfection! :D Brian: I'll come and check the results of your work soon, it'll be a nice chance for us to talk :) Linda: Sure, feel invited! """ print(summarizer(DIALOGUE, max_length=64, min_length=30, do_sample=False)) >>> [{'summary_text': "Linda is going to have her room painted. Brian recommends yellow and grey colors for her personality. Linda is going to make use of the colors. Brian will come and check the results of Linda's work soon."}] ``` ## Training Details <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> The training on the _SAMSum dataset_ was done using Seq2SeqTrainer with DataCollatorForSeq2Seq, with PegasusFastTokenizer used for both.
The training arguments for the model were: ```python Seq2SeqTrainingArguments( num_train_epochs=5, warmup_steps=500, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.1, logging_steps=10, evaluation_strategy="steps", eval_steps=300, save_steps=1e6, gradient_accumulation_steps=16 ) ``` ## Evaluation The model was evaluated on cross-entropy loss & ROUGE(1-2-L) scores, obtaining: | Metric | Average at the end of training | |--------|--------------------------------| | ROUGE-1| 0.472 | | ROUGE-2| 0.241 | | ROUGE-L| 0.376 | | Cross-entropy loss | 1.4 | ### Testing Metrics The model was tested on the _SAMSum dataset_ test set using the following metrics: ROUGE(1-2-L) and BERTScore. ## Results We obtained the following results: | Metric | Average at the end of testing | |--------|--------------------------------| | ROUGE-1| 0.496 | | ROUGE-2| 0.250 | | ROUGE-L| 0.400 | | BERTScore | 0.686 |
null
Non_BioNLP
# Model Card for ChatSum <!-- Provide a quick summary of what the model is/does. --> This page contains the model for the ChatSum application developed for the Machine Learning Operations course taught by Professor Barbon Junior Sylvio at the University of Trieste for the Artificial Intelligence & Data Science master's degree. ## Model Details The model is based on _google/pegasus-cnn_dailymail_. Our work consisted of fine-tuning the Pegasus model on the _samsum dataset_. The objective was to improve the existing model's performance on the task of summarizing "chat-like" conversations. ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [Erion Islamay](https://github.com/Erionis), [Cortinovis Nicola](https://github.com/NicolaCortinovis), [Paladino Annalisa](https://github.com/annalisapaladino), [Pernice Luca](https://github.com/lucapernice); - **Language(s) (NLP):** English - **Finetuned from model [optional]:** google/pegasus-cnn_dailymail ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model should be used to create concise summaries of dialogues, chats or correspondences between two or more parties. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> We suggest passing the chats to summarize in the following format to obtain the best results: [Person_A: Some text Person_B: Some other text Person_A: Response Person_A: Other text] ## How to Get Started with the Model ```python from transformers import pipeline summarizer = pipeline("summarization", model="Nicovis/ConvSum") DIALOGUE = """ Linda: I'm going to have my room painted Linda: Can't decide what colors would best express my personality Linda: I'm considering advices, do you have one for me maybe? :) Brian: Fitting your personality... hmm Brian: I consider you an enthusiastic and lively person and the color for it is yellow Brian: On the other hand you can be calm and steady and they say those qualities are expressed in greyness Brian: So yellow & grey, how do you like the idea? :D Linda: Wow, I'm impressed! Linda: You've just paid me a compliment and come up with interesting colors! Brian: Well, those are actually facts :) Brian: Are you going to make use of the colors? Linda: Actually, I was planning to ask a few friends and then sum all the ideas up Linda: But now I think I don't need to do any further research Linda: Asking you for help was a bull's-eye! :D Brian: Perfection! :D Brian: I'll come and check the results of your work soon, it'll be a nice chance for us to talk :) Linda: Sure, feel invited! """ print(summarizer(DIALOGUE, max_length=64, min_length=30, do_sample=False)) >>> [{'summary_text': "Linda is going to have her room painted. Brian recommends yellow and grey colors for her personality. Linda is going to make use of the colors. Brian will come and check the results of Linda's work soon."}] ``` ## Training Details <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> The training on the _SAMSum dataset_ was done using Seq2SeqTrainer with DataCollatorForSeq2Seq, with PegasusFastTokenizer used for both.
The training arguments for the model were: ```python Seq2SeqTrainingArguments( num_train_epochs=5, warmup_steps=500, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.1, logging_steps=10, evaluation_strategy="steps", eval_steps=300, save_steps=1e6, gradient_accumulation_steps=16 ) ``` ## Evaluation The model was evaluated on cross-entropy loss & ROUGE(1-2-L) scores, obtaining: | Metric | Average at the end of training | |--------|--------------------------------| | ROUGE-1| 0.472 | | ROUGE-2| 0.241 | | ROUGE-L| 0.376 | | Cross-entropy loss | 1.4 | ### Testing Metrics The model was tested on the _SAMSum dataset_ test set using the following metrics: ROUGE(1-2-L) and BERTScore. ## Results We obtained the following results: | Metric | Average at the end of testing | |--------|--------------------------------| | ROUGE-1| 0.496 | | ROUGE-2| 0.250 | | ROUGE-L| 0.400 | | BERTScore | 0.686 |
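The card reports ROUGE and BERTScore on the SAMSum test split but omits the evaluation code. As an illustration only (not the authors' script), scores of this kind can be computed with the Hugging Face `evaluate` library; the `predictions` and `references` below are placeholder strings standing in for generated and gold summaries.

```python
import evaluate

# Placeholder data: in practice, predictions come from the summarizer and
# references are the gold SAMSum summaries.
predictions = ["Linda will have her room painted yellow and grey."]
references = ["Linda is going to paint her room yellow and grey, as Brian suggested."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print({k: round(v, 3) for k, v in rouge_scores.items()})          # rouge1, rouge2, rougeL, rougeLsum
print(round(sum(bert_scores["f1"]) / len(bert_scores["f1"]), 3))  # mean BERTScore F1
```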
{"library_name": "transformers", "tags": []}
task
[ "SUMMARIZATION" ]
45,634
Helsinki-NLP/opus-mt-cpp-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "id", "cpp", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:27:01+00:00
57
0
--- language: - id - cpp - en license: apache-2.0 tags: - translation --- ### cpp-eng * source group: Creoles and pidgins, Portuguese-based * target group: English * OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md) * model: transformer * source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 | | Tatoeba-test.multi.eng | 39.7 | 0.580 | | Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 | ### System Info: - hf_name: cpp-eng - source_languages: cpp - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['id', 'cpp', 'en'] - src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt - src_alpha3: cpp - tgt_alpha3: eng - short_pair: cpp-en - chrF2_score: 0.58 - bleu: 39.7 - brevity_penalty: 0.972 - ref_len: 37399.0 - src_name: Creoles and pidgins, Portuguese-based - tgt_name: English - train_date: 2020-07-31 - src_alpha2: cpp - tgt_alpha2: en - prefer_old: False - long_pair: cpp-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### cpp-eng * source group: Creoles and pidgins, Portuguese-based * target group: English * OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md) * model: transformer * source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 | | Tatoeba-test.multi.eng | 39.7 | 0.580 | | Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 | ### System Info: - hf_name: cpp-eng - source_languages: cpp - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['id', 'cpp', 'en'] - src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt - src_alpha3: cpp - tgt_alpha3: eng - short_pair: cpp-en - chrF2_score: 0.58 - bleu: 39.7 - brevity_penalty: 0.972 - ref_len: 37399.0 - src_name: Creoles and pidgins, Portuguese-based - tgt_name: English - train_date: 2020-07-31 - src_alpha2: cpp - tgt_alpha2: en - prefer_old: False - long_pair: cpp-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
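The card documents the training data, preprocessing and benchmark scores but gives no inference snippet. A minimal usage sketch with the `transformers` translation pipeline could look like the following; the example sentence is Indonesian (one of the source constituents listed above), and the printed output is illustrative rather than a verified reference translation.

```python
from transformers import pipeline

# MarianMT checkpoint: Portuguese-based creoles/pidgins + Malay/Indonesian -> English.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-cpp-en")

# Example Indonesian input (ind is one of the source languages of this model).
result = translator("Saya sedang belajar bahasa Inggris.", max_length=64)
print(result[0]["translation_text"])
```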
{"language": ["id", "cpp", "en"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
45,635
pinzhenchen/sft-lora-fi-pythia-410m
pinzhenchen
null
[ "generation", "question answering", "instruction tuning", "fi", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us" ]
2024-03-05T23:51:01Z
2024-03-05T23:51:04+00:00
0
0
--- language: - fi license: cc-by-nc-4.0 tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
null
Non_BioNLP
### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
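The card defers the loading code to the GitHub repository. For orientation, the sketch below shows how a LoRA checkpoint of this kind is typically attached to its base model with `transformers` and `peft`; the adapter id and the Alpaca-style prompt template are assumptions, so the repository's instructions take precedence.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-410m-deduped"          # base model named in the card
adapter_id = "pinzhenchen/sft-lora-fi-pythia-410m"  # assumed: this repo holds the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

# Assumed Alpaca-style prompt; the exact template is defined in the training repo.
prompt = "### Instruction:\nKerro lyhyesti, mikä on Suomen pääkaupunki.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```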
{"language": ["fi"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
45,636
imvladikon/alephbert-encoder-t5-decoder
imvladikon
text-generation
[ "text-generation", "he", "region:us" ]
2023-01-30T11:26:35Z
2023-11-24T08:12:15+00:00
0
0
--- language: - he pipeline_tag: text-generation --- ### Description Experiments with an encoder-decoder model, where the encoder is [alephbert-base](https://huggingface.co/onlplab/alephbert-base) and the [decoder is a pruned mT5-base model](https://huggingface.co/imvladikon/het5-base). It could be useful for generating negative and hard-negative samples for pair-text classification. (For paraphrasing, classical approaches are a better choice than this model.) ### Usage ```bash git clone https://huggingface.co/imvladikon/alephbert-encoder-t5-decoder ``` ```python import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoModel from transformers.modeling_outputs import BaseModelOutput enc_checkpoint = "./alephbert-encoder-t5-decoder/encoder" enc_tokenizer = AutoTokenizer.from_pretrained(enc_checkpoint) encoder = AutoModel.from_pretrained(enc_checkpoint).cuda() dec_checkpoint = "./alephbert-encoder-t5-decoder/decoder" dec_tokenizer = AutoTokenizer.from_pretrained(dec_checkpoint) decoder = AutoModelForSeq2SeqLM.from_pretrained(dec_checkpoint).cuda() def encode(texts): encoded_input = enc_tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors='pt') with torch.no_grad(): model_output = encoder(**encoded_input.to(encoder.device)) embeddings = model_output.pooler_output embeddings = torch.nn.functional.normalize(embeddings) return embeddings def decode(embeddings, max_length=256, repetition_penalty=3.0, **kwargs): out = decoder.generate( encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)), max_length=max_length, repetition_penalty=repetition_penalty, ) return [dec_tokenizer.decode(tokens, skip_special_tokens=True) for tokens in out] encoder.eval() text = """ מחר יוסיף להיות מעונן חלקית ובמהלך היום יתחזקו הרוחות בדרום הארץ וייתכן אובך באזור. """.strip() batch = [text] embeddings = encode(batch) decoder.eval() out = decoder.generate(encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)), max_length=512, repetition_penalty=3.0) for t, o in zip(batch, out): print(t) print(dec_tokenizer.decode(o, skip_special_tokens=True)) print('-----------') ```
null
Non_BioNLP
### Description Experiments with an encoder-decoder model, where the encoder is [alephbert-base](https://huggingface.co/onlplab/alephbert-base) and the [decoder is a pruned mT5-base model](https://huggingface.co/imvladikon/het5-base). It could be useful for generating negative and hard-negative samples for pair-text classification. (For paraphrasing, classical approaches are a better choice than this model.) ### Usage ```bash git clone https://huggingface.co/imvladikon/alephbert-encoder-t5-decoder ``` ```python import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoModel from transformers.modeling_outputs import BaseModelOutput enc_checkpoint = "./alephbert-encoder-t5-decoder/encoder" enc_tokenizer = AutoTokenizer.from_pretrained(enc_checkpoint) encoder = AutoModel.from_pretrained(enc_checkpoint).cuda() dec_checkpoint = "./alephbert-encoder-t5-decoder/decoder" dec_tokenizer = AutoTokenizer.from_pretrained(dec_checkpoint) decoder = AutoModelForSeq2SeqLM.from_pretrained(dec_checkpoint).cuda() def encode(texts): encoded_input = enc_tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors='pt') with torch.no_grad(): model_output = encoder(**encoded_input.to(encoder.device)) embeddings = model_output.pooler_output embeddings = torch.nn.functional.normalize(embeddings) return embeddings def decode(embeddings, max_length=256, repetition_penalty=3.0, **kwargs): out = decoder.generate( encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)), max_length=max_length, repetition_penalty=repetition_penalty, ) return [dec_tokenizer.decode(tokens, skip_special_tokens=True) for tokens in out] encoder.eval() text = """ מחר יוסיף להיות מעונן חלקית ובמהלך היום יתחזקו הרוחות בדרום הארץ וייתכן אובך באזור. """.strip() batch = [text] embeddings = encode(batch) decoder.eval() out = decoder.generate(encoder_outputs=BaseModelOutput(last_hidden_state=embeddings.unsqueeze(1)), max_length=512, repetition_penalty=3.0) for t, o in zip(batch, out): print(t) print(dec_tokenizer.decode(o, skip_special_tokens=True)) print('-----------') ```
{"language": ["he"], "pipeline_tag": "text-generation"}
task
[ "TEXT_CLASSIFICATION" ]
45,637
haoranxu/ALMA-13B-Pretrain
haoranxu
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2309.11674", "arxiv:2401.08417", "base_model:meta-llama/Llama-2-13b-hf", "base_model:finetune:meta-llama/Llama-2-13b-hf", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-09-17T17:43:22Z
2024-10-05T22:53:46+00:00
1,774
9
--- base_model: - meta-llama/Llama-2-13b-hf license: mit --- **ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in our [paper](https://arxiv.org/abs/2309.11674). ``` @misc{xu2023paradigm, title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla}, year={2023}, eprint={2309.11674}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **[ALMA-R](https://arxiv.org/abs/2401.08417) (NEW!) is released now!** ALMA-R builds upon the ALMA models, with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)** as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT winners! ``` @misc{xu2024contrastive, title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}, author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year={2024}, eprint={2401.08417}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We release six translation models presented in the paper: - **ALMA-7B**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **full-weight** fine-tune on human-written parallel data - **ALMA-7B-LoRA**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization. - **ALMA-13B**: Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **full-weight** fine-tune on human-written parallel data - **ALMA-13B-LoRA** (Our best system): Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization. Model checkpoints are released on Hugging Face: | Models | Base Model Link | LoRA Link | |:-------------:|:---------------:|:---------:| | ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - | | ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) | | **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - | | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - | | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) | | **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - | **Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models.
They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.** Datasets used by ALMA and ALMA-R are also released on Hugging Face now (NEW!) | Datasets | Train / Validation | Test | |:-------------:|:---------------:|:---------:| | Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) | | Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) | A quick start to using the ALMA-13B-LoRA system for translation. An example of translating "我爱机器翻译。" into English: ``` import torch from peft import PeftModel from transformers import AutoModelForCausalLM from transformers import LlamaTokenizer # Load base model and LoRA weights model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto") model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA") tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left') # Add the source sentence to the prompt template prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:" input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda() # Translation with torch.no_grad(): generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(outputs) ``` Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
null
Non_BioNLP
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in our [paper](https://arxiv.org/abs/2309.11674). ``` @misc{xu2023paradigm, title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla}, year={2023}, eprint={2309.11674}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **[ALMA-R](https://arxiv.org/abs/2401.08417) (NEW!) is released now!** ALMA-R builds upon the ALMA models, with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)** as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT winners! ``` @misc{xu2024contrastive, title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}, author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year={2024}, eprint={2401.08417}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We release six translation models presented in the paper: - **ALMA-7B**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **full-weight** fine-tune on human-written parallel data - **ALMA-7B-LoRA**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization. - **ALMA-13B**: Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **full-weight** fine-tune on human-written parallel data - **ALMA-13B-LoRA** (Our best system): Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization. Model checkpoints are released on Hugging Face: | Models | Base Model Link | LoRA Link | |:-------------:|:---------------:|:---------:| | ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - | | ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) | | **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - | | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - | | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) | | **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - | **Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models.
They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.** Datasets used by ALMA and ALMA-R are also released on Hugging Face now (NEW!) | Datasets | Train / Validation | Test | |:-------------:|:---------------:|:---------:| | Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) | | Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) | A quick start to using the ALMA-13B-LoRA system for translation. An example of translating "我爱机器翻译。" into English: ``` import torch from peft import PeftModel from transformers import AutoModelForCausalLM from transformers import LlamaTokenizer # Load base model and LoRA weights model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto") model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA") tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left') # Add the source sentence to the prompt template prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:" input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda() # Translation with torch.no_grad(): generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(outputs) ``` Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
{"base_model": ["meta-llama/Llama-2-13b-hf"], "license": "mit"}
task
[ "TRANSLATION" ]
45,638
henryscheible/eval_masked_v4_mrpc
henryscheible
null
[ "pytorch", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "region:us" ]
2023-01-11T15:04:14Z
2023-01-11T19:23:35+00:00
0
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: eval_masked_v4_mrpc results: - task: type: text-classification name: Text Classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - type: accuracy value: 0.7941176470588235 name: Accuracy - type: f1 value: 0.8595317725752508 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_masked_v4_mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6346 - Accuracy: 0.7941 - F1: 0.8595 - Combined Score: 0.8268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_masked_v4_mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6346 - Accuracy: 0.7941 - F1: 0.8595 - Combined Score: 0.8268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
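The card ships without a usage snippet; a minimal sketch for paraphrase detection with this checkpoint is given below. The repo id matches this card, but the assumption that the classification head follows GLUE MRPC's label convention (index 1 = "equivalent") is not stated in the card itself.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "henryscheible/eval_masked_v4_mrpc"  # repo id of this card
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

enc = tokenizer("The storm hit the coast overnight.",
                "Overnight, the coast was struck by the storm.",
                return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(-1).squeeze()
print(probs[1].item())  # assumed: probability that the two sentences are paraphrases
```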
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "eval_masked_v4_mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7941176470588235, "name": "Accuracy"}, {"type": "f1", "value": 0.8595317725752508, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,639
usakha/Prophetnet_multiNews_model
usakha
summarization
[ "transformers", "pytorch", "tensorboard", "safetensors", "prophetnet", "text2text-generation", "summarization", "en", "dataset:multi_news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-27T10:50:18Z
2023-07-18T19:14:41+00:00
39
0
--- datasets: - multi_news language: - en library_name: transformers metrics: - bleu - rouge pipeline_tag: summarization --- # Hyperparameters - learning_rate=2e-5 - per_device_train_batch_size=14 - per_device_eval_batch_size=14 - weight_decay=0.01 - save_total_limit=3 - num_train_epochs=3 - predict_with_generate=True - fp16=True # Training Output global_step=7710, training_loss=2.8554159399445727, metrics={'train_runtime': 21924.7566, 'train_samples_per_second': 4.923, 'train_steps_per_second': 0.352, 'total_flos': 2.3807388210639667e+17, 'train_loss': 2.8554159399445727, 'epoch': 3.0} # Training Results | Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bleu | Gen Len | |:------|:--------------|:----------------|:---------|:---------|:---------|:----------|:---------|:-----------| | 1 | 2.981200 | 2.831641 | 0.414500 | 0.147000 | 0.230700 | 0.230600 | 0.512800 | 140.734900 | | 2 | 2.800900 | 2.789402 | 0.417300 | 0.148400 | 0.231800 | 0.231700 | 0.516000 | 141.158200 | | 3 | 2.680300 | 2.780862 | 0.418300 | 0.148400 | 0.232200 | 0.232100 | 0.516800 | 140.872300 |
null
Non_BioNLP
# Hyperparameters - learning_rate=2e-5 - per_device_train_batch_size=14 - per_device_eval_batch_size=14 - weight_decay=0.01 - save_total_limit=3 - num_train_epochs=3 - predict_with_generate=True - fp16=True # Training Output global_step=7710, training_loss=2.8554159399445727, metrics={'train_runtime': 21924.7566, 'train_samples_per_second': 4.923, 'train_steps_per_second': 0.352, 'total_flos': 2.3807388210639667e+17, 'train_loss': 2.8554159399445727, 'epoch': 3.0} # Training Results | Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bleu | Gen Len | |:------|:--------------|:----------------|:---------|:---------|:---------|:----------|:---------|:-----------| | 1 | 2.981200 | 2.831641 | 0.414500 | 0.147000 | 0.230700 | 0.230600 | 0.512800 | 140.734900 | | 2 | 2.800900 | 2.789402 | 0.417300 | 0.148400 | 0.231800 | 0.231700 | 0.516000 | 141.158200 | | 3 | 2.680300 | 2.780862 | 0.418300 | 0.148400 | 0.232200 | 0.232100 | 0.516800 | 140.872300 |
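The card stops at the training tables; a minimal inference sketch with the `summarization` pipeline follows. The repo id comes from this card, while the generation lengths are illustrative assumptions chosen to roughly match the ~140-token summaries reported above.

```python
from transformers import pipeline

# ProphetNet checkpoints load through the standard seq2seq auto classes.
summarizer = pipeline("summarization", model="usakha/Prophetnet_multiNews_model")

article = "..."  # one or more concatenated news articles, as in the multi_news dataset
print(summarizer(article, max_length=140, min_length=40)[0]["summary_text"])
```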
{"datasets": ["multi_news"], "language": ["en"], "library_name": "transformers", "metrics": ["bleu", "rouge"], "pipeline_tag": "summarization"}
task
[ "SUMMARIZATION" ]
45,640
Parallia/Fairly-Multilingual-ModernBERT-Embed-BE
Parallia
sentence-similarity
[ "sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8066634", "loss:MultipleNegativesRankingLoss", "fr", "nl", "de", "en", "arxiv:2408.04303", "arxiv:2501.99999", "arxiv:1908.10084", "base_model:nomic-ai/modernbert-embed-base", "base_model:finetune:nomic-ai/modernbert-embed-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-07T22:56:21Z
2025-01-14T20:22:12+00:00
414
25
--- base_model: - nomic-ai/modernbert-embed-base language: - fr - nl - de - en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:8066634 - loss:MultipleNegativesRankingLoss widget: - source_sentence: These three mysterious men came to our help. sentences: - Three strange guys helped us then. - These three black birds came in our garden. - Some people are helpful. - One, two, three... Who can guess the next digits? --- | 🙏 If you are able to, please help me [fund my open research](https://gofund.me/1f2d6803). 🙏 Thank you for your generosity! 🤗 | |-----------------------------------------------------------------------------------------------------------------------------------| # FMMB-BE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) 🇧🇪 The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding texts of up to 8192 tokens written in French, Dutch, German or English at the speed of light. It produces embeddings that are very similar across languages. 🔀 For each input text, the FMMB model autodetects the most efficient tokenizer (English, French, Dutch, or German) and routes the input text to that tokenizer. Each tokenizer uses its own language-specific token embeddings, reducing the risk of language interference. Because all the other weights are shared, the FMMB models can mix and match different languages in the same batch without needing to load 4 different models in memory. That said: if you know the tokenizer you want to use in advance, you can use the monolingual variants for [French](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-FR), [Dutch](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL), [German](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE) or [English](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-EN) for faster tokenization and a lower memory footprint. 🆘 This [sentence-transformers](https://www.SBERT.net) model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The input texts can be used as-is, no need to use prefixes. 🪄 Thanks to the magic of [Trans-Tokenization](https://huggingface.co/papers/2408.04303), monolingual English models such as [ModernBERT-Embed from Nomic AI](https://huggingface.co/nomic-ai/modernbert-embed-base) can be turned into embedding models for another language. And this, with almost no GPU compute involved! 🤯 ⚖️ Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with different tokenizers and embedding tables. Indeed, as all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can speak in all of the 4 tokenizers, this model can be finetuned to produce similar embeddings for sentences which are translations of each other. ⚡ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while providing us with superior results. This makes it a wonderful choice for many use cases.
⚠️ This model is cross-lingually aligned, but trained in an unsupervised manner. It is recommended to finetune this model on your use case before using it. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [ModernBERT-Embed-Base](https://huggingface.co/nomic-ai/modernbert-embed-base) - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - parallel-sentences - **Languages:** fr,nl,de,en - **License:** apache-2.0 ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage **IMPORTANT:** While waiting for the next stable release of the `transformers` library, please install the latest git release to use `modernbert` models: ```bash pip install --upgrade git+https://github.com/huggingface/transformers.git ``` The easiest way to use this model is to install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True) # Run inference sentences = [ 'These three mysterious men came to our help.', 'Three strange guys helped us then.', 'These three black birds came in our garden.', 'Some people are helpful.', 'One, two, three... Who can guess the next digits?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [5, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [5, 5] ``` <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### parallel-sentences * Dataset: parallel dataset * Size: 8,066,634 training samples * Columns: <code>sent1</code> and <code>sent2</code> * Approximate statistics based on the first 1000 samples: | | sent1 | sent2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 17.86 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.87 tokens</li><li>max: 52 tokens</li></ul> | * Samples: | sent1 | sent2 | |:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April</code> | <code>Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril</code> | | <code>- Yeah. My husband never talked about business.</code> | <code>M'n man had het nooit over z'n zaken.</code> | | <code>Or do they think that We hear not their secrets and their private counsels?</code> | <code>Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - 
`dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.11.7 - Sentence Transformers: 3.3.1 - Transformers: 4.48.0.dev0 - PyTorch: 2.2.0+cu121 - Accelerate: 1.0.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation If you use or finetune this model, please consider citing this paper and the sentence-transformers library: ### BibTeX ### This model ```bibtex @misc{remy-2025-fmmb-be, title={The Fairly Multilingual ModernBERT Embbeding Model -- Belgian Edition}, author={Francois Remy}, year={2025}, eprint={2501.99999}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ```
null
Non_BioNLP
| 🙏 If you are able to, please help me [fund my open research](https://gofund.me/1f2d6803). 🙏 Thank you for your generosity! 🤗 | |-----------------------------------------------------------------------------------------------------------------------------------| # FMMB-BE: The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) 🇧🇪 The Fairly Multilingual ModernBERT Embedding Model (Belgian Edition) is the perfect model for embedding texts of up to 8192 tokens written in French, Dutch, German or English at the speed of light. It produces embeddings that are very similar across languages. 🔀 For each input text, the FMMB model autodetects the most efficient tokenizer (English, French, Dutch, or German) and routes the input text to that tokenizer. Each tokenizer uses its own language-specific token embeddings, reducing the risk of language interference. Because all the other weights are shared, the FMMB models can mix and match different languages in the same batch without needing to load 4 different models in memory. That said: if you know the tokenizer you want to use in advance, you can use the monolingual variants for [French](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-FR), [Dutch](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-NL), [German](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-DE) or [English](https://huggingface.co/Parallia/Fairly-Multilingual-ModernBERT-Embed-BE-EN) for faster tokenization and a lower memory footprint. 🆘 This [sentence-transformers](https://www.SBERT.net) model was trained on a small parallel corpus containing English-French, English-Dutch, and English-German sentence pairs. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The input texts can be used as-is, no need to use prefixes. 🪄 Thanks to the magic of [Trans-Tokenization](https://huggingface.co/papers/2408.04303), monolingual English models such as [ModernBERT-Embed from Nomic AI](https://huggingface.co/nomic-ai/modernbert-embed-base) can be turned into embedding models for another language. And this, with almost no GPU compute involved! 🤯 ⚖️ Each of the 5 FMMB-BE models is actually a copy of the exact same model, paired with different tokenizers and embedding tables. Indeed, as all trans-tokenized models operate on embeddings in the same latent space, aligning them cross-lingually is a breeze: after creating a "super" model which can speak in all of the 4 tokenizers, this model can be finetuned to produce similar embeddings for sentences which are translations of each other. ⚡ ModernBERT, developed last month by Answer.AI and LightOn, is about 3x to 6x faster at inference time than regular BERT/RoBERTa models, while providing us with superior results. This makes it a wonderful choice for many use cases. ⚠️ This model is cross-lingually aligned, but trained in an unsupervised manner. It is recommended to finetune this model on your use case before using it.
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [ModernBERT-Embed-Base](https://huggingface.co/nomic-ai/modernbert-embed-base) - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - parallel-sentences - **Languages:** fr,nl,de,en - **License:** apache-2.0 ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage **IMPORTANT:** While waiting for the next stable release of the `transformers` library, please install the latest git release to use `modernbert` models: ```bash pip install --upgrade git+https://github.com/huggingface/transformers.git ``` The easiest way to use this model is to install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True) # Run inference sentences = [ 'These three mysterious men came to our help.', 'Three strange guys helped us then.', 'These three black birds came in our garden.', 'Some people are helpful.', 'One, two, three... Who can guess the next digits?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [5, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [5, 5] ``` <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### parallel-sentences * Dataset: parallel dataset * Size: 8,066,634 training samples * Columns: <code>sent1</code> and <code>sent2</code> * Approximate statistics based on the first 1000 samples: | | sent1 | sent2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 17.86 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.87 tokens</li><li>max: 52 tokens</li></ul> | * Samples: | sent1 | sent2 | |:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The faces may change, but the essential views that have characterised Israel’s government for decades will remain the same after 9 April</code> | <code>Les visages peuvent changer, mais les opinions fondamentales qui caractérisent le gouvernement israélien depuis des décennies resteront les mêmes après le 9 avril</code> | | <code>- Yeah. 
My husband never talked about business.</code> | <code>M'n man had het nooit over z'n zaken.</code> | | <code>Or do they think that We hear not their secrets and their private counsels?</code> | <code>Oder meinen sie, daß Wir ihre Geheimnisse und heimlichen Beratungen nicht hören?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - 
`push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.11.7 - Sentence Transformers: 3.3.1 - Transformers: 4.48.0.dev0 - PyTorch: 2.2.0+cu121 - Accelerate: 1.0.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation If you use or finetune this model, please consider citing this paper and the sentence-transformers library: ### BibTeX ### This model ```bibtex @misc{remy-2025-fmmb-be, title={The Fairly Multilingual ModernBERT Embbeding Model -- Belgian Edition}, author={Francois Remy}, year={2025}, eprint={2501.99999}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ```
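Because the model's selling point is cross-lingual alignment, a quick sanity check is to compare a French sentence with its Dutch translation and with an unrelated sentence, along the lines of the usage example above (the sentences here are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Parallia/Fairly-Multilingual-ModernBERT-Embed-BE", trust_remote_code=True)
emb = model.encode([
    "Le chat dort sur le canapé.",          # French
    "De kat slaapt op de bank.",            # its Dutch translation
    "Les marchés ont clôturé en baisse.",   # unrelated French sentence
])
sim = model.similarity(emb, emb)
print(sim[0][1], sim[0][2])  # the translation pair should score noticeably higher
```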
{"base_model": ["nomic-ai/modernbert-embed-base"], "language": ["fr", "nl", "de", "en"], "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8066634", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "These three mysterious men came to our help.", "sentences": ["Three strange guys helped us then.", "These three black birds came in our garden.", "Some people are helpful.", "One, two, three... Who can guess the next digits?"]}]}
task
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
45,641
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task250
Lots-of-LoRAs
null
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:mit", "region:us" ]
2025-01-03T18:10:48Z
2025-01-03T18:10:54+00:00
0
0
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 language: en library_name: pytorch license: mit --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task250 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task250_spl_translation_en_ar - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task250_spl_translation_en_ar sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task250 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task250_spl_translation_en_ar - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task250_spl_translation_en_ar sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
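Since the "How to Get Started" section is empty, here is a hedged sketch for loading this adapter on top of its base model with PEFT. The adapter and base-model ids come from this card; the prompt wording and the `[INST]` template are assumptions (natural-instructions tasks are usually prompted with the task definition followed by the input).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task250"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the task250 LoRA weights

# task250_spl_translation_en_ar is an English-to-Arabic translation task.
prompt = "[INST] Translate the following sentence to Arabic: I love machine translation. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```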
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
task
[ "TRANSLATION" ]
45,642
TransferGraph/Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:ag_news", "base_model:Jeevesh8/bert_ft_qqp-88", "base_model:adapter:Jeevesh8/bert_ft_qqp-88", "model-index", "region:us" ]
2024-02-27T23:19:26Z
2024-02-28T01:12:57+00:00
0
0
--- base_model: Jeevesh8/bert_ft_qqp-88 datasets: - ag_news library_name: peft metrics: - accuracy tags: - parquet - text-classification model-index: - name: Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news results: - task: type: text-classification name: Text Classification dataset: name: ag_news type: ag_news config: default split: test args: default metrics: - type: accuracy value: 0.9281578947368421 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-88](https://huggingface.co/Jeevesh8/bert_ft_qqp-88) on the ag_news dataset. It achieves the following results on the evaluation set: - accuracy: 0.9282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.25 | None | 0 | | 0.9149 | 0.3299 | 0 | | 0.9188 | 0.2386 | 1 | | 0.9279 | 0.2171 | 2 | | 0.9282 | 0.2049 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news This model is a fine-tuned version of [Jeevesh8/bert_ft_qqp-88](https://huggingface.co/Jeevesh8/bert_ft_qqp-88) on the ag_news dataset. It achieves the following results on the evaluation set: - accuracy: 0.9282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.25 | None | 0 | | 0.9149 | 0.3299 | 0 | | 0.9188 | 0.2386 | 1 | | 0.9279 | 0.2171 | 2 | | 0.9282 | 0.2049 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
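No inference code is provided; a heavily hedged sketch for running the LoRA adapter on AG News text is shown below. It assumes the adapter was saved with a 4-way classification head (which the reported accuracy implies) and that the head weights are stored with the adapter, neither of which is stated in the card.

```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "Jeevesh8/bert_ft_qqp-88", num_labels=4, ignore_mismatched_sizes=True)
model = PeftModel.from_pretrained(base, "TransferGraph/Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news")
tokenizer = AutoTokenizer.from_pretrained("Jeevesh8/bert_ft_qqp-88")

enc = tokenizer("Stocks rallied after the central bank held rates steady.", return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1).item()
print(pred)  # 0-3, assumed to follow ag_news label order (World, Sports, Business, Sci/Tech)
```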
{"base_model": "Jeevesh8/bert_ft_qqp-88", "datasets": ["ag_news"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "Jeevesh8_bert_ft_qqp-88-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9281578947368421, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,643
TransferGraph/navsad_navid_test_bert-finetuned-lora-ag_news
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:ag_news", "base_model:navsad/navid_test_bert", "base_model:adapter:navsad/navid_test_bert", "license:apache-2.0", "model-index", "region:us" ]
2024-02-28T00:01:00Z
2024-02-28T00:01:05+00:00
0
0
--- base_model: navsad/navid_test_bert datasets: - ag_news library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: navsad_navid_test_bert-finetuned-lora-ag_news results: - task: type: text-classification name: Text Classification dataset: name: ag_news type: ag_news config: default split: test args: default metrics: - type: accuracy value: 0.9307894736842105 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # navsad_navid_test_bert-finetuned-lora-ag_news This model is a fine-tuned version of [navsad/navid_test_bert](https://huggingface.co/navsad/navid_test_bert) on the ag_news dataset. It achieves the following results on the evaluation set: - accuracy: 0.9308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.2604 | None | 0 | | 0.9179 | 0.3130 | 0 | | 0.9251 | 0.2256 | 1 | | 0.9271 | 0.2073 | 2 | | 0.9308 | 0.1962 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
TBD
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # navsad_navid_test_bert-finetuned-lora-ag_news This model is a fine-tuned version of [navsad/navid_test_bert](https://huggingface.co/navsad/navid_test_bert) on the ag_news dataset. It achieves the following results on the evaluation set: - accuracy: 0.9308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.2604 | None | 0 | | 0.9179 | 0.3130 | 0 | | 0.9251 | 0.2256 | 1 | | 0.9271 | 0.2073 | 2 | | 0.9308 | 0.1962 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
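Like the sibling adapter above, this card omits an inference example. The sketch below classifies a single headline end to end; the label order (World, Sports, Business, Sci/Tech), the 4-class head, and `ignore_mismatched_sizes=True` reflect the standard ag_news setup and are assumptions rather than values taken from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "navsad/navid_test_bert"
adapter_id = "TransferGraph/navsad_navid_test_bert-finetuned-lora-ag_news"
labels = ["World", "Sports", "Business", "Sci/Tech"]  # assumed ag_news label order

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=4, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base_model, adapter_id).eval()

# Tokenize one headline and pick the highest-scoring topic.
inputs = tokenizer("Stocks rally as tech earnings beat expectations", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```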
{"base_model": "navsad/navid_test_bert", "datasets": ["ag_news"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "navsad_navid_test_bert-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9307894736842105, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,644
Salesforce/codet5-base-codexglue-sum-ruby
Salesforce
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "license:bsd-3-clause", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-04-20T06:49:29Z
2023-04-20T06:52:16+00:00
16
0
--- license: bsd-3-clause --- This is a finetuned CodeT5-base checkpoint on CodeXGLUE code summarization Ruby data. Pretrained model: https://huggingface.co/Salesforce/codet5-base Finetuning dataset: https://huggingface.co/datasets/code_x_glue_ct_code_to_text (only the Ruby split)
null
Non_BioNLP
This is a finetuned CodeT5-base checkpoint on CodeXGLUE code summarization Ruby data. Pretrained model: https://huggingface.co/Salesforce/codet5-base Finetuning dataset: https://huggingface.co/datasets/code_x_glue_ct_code_to_text (only the Ruby split)
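The card lists only the pretrained checkpoint and the fine-tuning split, with no usage snippet. A minimal generation sketch through the usual CodeT5 seq2seq interface in Transformers is given below; the Ruby snippet and the beam-search settings are illustrative choices rather than values from the CodeXGLUE recipe.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Salesforce/codet5-base-codexglue-sum-ruby"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Summarize a small Ruby method into a short natural-language description.
ruby_code = 'def greet(name)\n  puts "Hello, #{name}!"\nend'
input_ids = tokenizer(ruby_code, return_tensors="pt").input_ids
summary_ids = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```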
{"license": "bsd-3-clause"}
task
[ "SUMMARIZATION" ]
45,645
JoshELambert/fishpop
JoshELambert
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-07-06T20:00:19Z
2023-07-06T20:51:49+00:00
8
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # /var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("/var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# /var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("/var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
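The snippet embedded in the card points at a local temporary path left over from training. When loading from the Hub, the repository ID would normally be used instead; a short corrected sketch, assuming the checkpoint was pushed under this repository's ID:

```python
from setfit import SetFitModel

# Load by Hub repository ID rather than the local temp path shown above.
model = SetFitModel.from_pretrained("JoshELambert/fishpop")
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```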
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
45,646
srikarvar/multilingual-e5-small-triplet-final-2
srikarvar
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:546", "loss:TripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-08-20T08:18:33Z
2024-08-20T08:18:52+00:00
11
0
--- base_model: intfloat/multilingual-e5-small datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy - dot_accuracy - manhattan_accuracy - euclidean_accuracy - max_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:546 - loss:TripletLoss widget: - source_sentence: How to cook a turkey? sentences: - How to make a turkey sandwich? - World's biggest desert by area - Steps to roast a turkey - source_sentence: What is the best way to learn a new language? sentences: - Author of the play 'Hamlet' - What is the fastest way to travel? - How can I effectively learn a new language? - source_sentence: Who wrote 'To Kill a Mockingbird'? sentences: - Who wrote 'The Great Gatsby'? - How can I effectively save money? - Author of 'To Kill a Mockingbird' - source_sentence: Who was the first person to climb Mount Everest? sentences: - Steps to visit the Great Wall of China - Who was the first person to climb K2? - First climber to reach the summit of Everest - source_sentence: What is the capital city of Canada? sentences: - First circumnavigator of the globe - What is the capital of Canada? - What is the capital city of Australia? model-index: - name: SentenceTransformer based on intfloat/multilingual-e5-small results: - task: type: triplet name: Triplet dataset: name: triplet validation type: triplet-validation metrics: - type: cosine_accuracy value: 0.9836065573770492 name: Cosine Accuracy - type: dot_accuracy value: 0.01639344262295082 name: Dot Accuracy - type: manhattan_accuracy value: 0.9836065573770492 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.9836065573770492 name: Euclidean Accuracy - type: max_accuracy value: 0.9836065573770492 name: Max Accuracy --- # SentenceTransformer based on intfloat/multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final-2") # Run inference sentences = [ 'What is the capital city of Canada?', 'What is the capital of Canada?', 'What is the capital city of Australia?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `triplet-validation` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.9836 | | dot_accuracy | 0.0164 | | manhattan_accuracy | 0.9836 | | euclidean_accuracy | 0.9836 | | **max_accuracy** | **0.9836** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 546 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 10.78 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.52 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 22 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------|:----------------------------------------------|:-------------------------------------------------------| | <code>What is the capital of Brazil?</code> | <code>Capital city of Brazil</code> | <code>What is the capital of Argentina?</code> | | <code>How do I install Python on my computer?</code> | <code>How do I set up Python on my PC?</code> | <code>How do I uninstall Python on my computer?</code> | | <code>How do I apply for a credit card?</code> | <code>How do I get a credit card?</code> | <code>How do I cancel a credit card?</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.7 } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 61 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 10.66 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.43 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.54 tokens</li><li>max: 17 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------|:---------------------------------------------------------|:-----------------------------------------------------| | <code>How to create a podcast?</code> | <code>Steps to start a podcast</code> | <code>How to create a vlog?</code> | | <code>How many states are there in the USA?</code> | <code>Total number of states in the United States</code> | <code>How many provinces are there in Canada?</code> | | <code>What is the population of India?</code> | <code>How many people live in India?</code> | <code>What is the population of China?</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.7 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - 
`per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 2 - `learning_rate`: 3e-06 - `weight_decay`: 0.01 - `num_train_epochs`: 22 - `lr_scheduler_type`: cosine - `warmup_steps`: 50 - `load_best_model_at_end`: True - `optim`: adamw_torch_fused #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 2 - `eval_accumulation_steps`: None - `learning_rate`: 3e-06 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 22 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 50 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - 
`multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | triplet-validation_max_accuracy | |:--------:|:-------:|:-------------:|:----------:|:-------------------------------:| | 1.0 | 9 | - | 0.6381 | - | | 1.1111 | 10 | 0.6743 | - | - | | 2.0 | 18 | - | 0.6262 | - | | 2.2222 | 20 | 0.6608 | - | - | | 3.0 | 27 | - | 0.6066 | - | | 3.3333 | 30 | 0.6517 | - | - | | 4.0 | 36 | - | 0.5795 | - | | 4.4444 | 40 | 0.6288 | - | - | | 5.0 | 45 | - | 0.5453 | - | | 5.5556 | 50 | 0.5934 | - | - | | 6.0 | 54 | - | 0.5052 | - | | 6.6667 | 60 | 0.5708 | - | - | | 7.0 | 63 | - | 0.4652 | - | | 7.7778 | 70 | 0.5234 | - | - | | 8.0 | 72 | - | 0.4270 | - | | 8.8889 | 80 | 0.5041 | - | - | | 9.0 | 81 | - | 0.3918 | - | | 10.0 | 90 | 0.4666 | 0.3589 | - | | 11.0 | 99 | - | 0.3292 | - | | 11.1111 | 100 | 0.4554 | - | - | | 12.0 | 108 | - | 0.3029 | - | | 12.2222 | 110 | 0.4208 | - | - | | 13.0 | 117 | - | 0.2797 | - | | 13.3333 | 120 | 0.4076 | - | - | | 14.0 | 126 | - | 0.2607 | - | | 14.4444 | 130 | 0.3958 | - | - | | 15.0 | 135 | - | 0.2471 | - | | 15.5556 | 140 | 0.3881 | - | - | | 16.0 | 144 | - | 0.2365 | - | | 16.6667 | 150 | 0.3595 | - | - | | 17.0 | 153 | - | 0.2286 | - | | 17.7778 | 160 | 0.354 | - | - | | 18.0 | 162 | - | 0.2232 | - | | 18.8889 | 170 | 0.3506 | - | - | | 19.0 | 171 | - | 0.2199 | - | | 20.0 | 180 | 0.3555 | 0.2182 | - | | 21.0 | 189 | - | 0.2175 | - | | 21.1111 | 190 | 0.3526 | - | - | | **22.0** | **198** | **-** | **0.2174** | **0.9836** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.32.1 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on intfloat/multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final-2") # Run inference sentences = [ 'What is the capital city of Canada?', 'What is the capital of Canada?', 'What is the capital city of Australia?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `triplet-validation` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.9836 | | dot_accuracy | 0.0164 | | manhattan_accuracy | 0.9836 | | euclidean_accuracy | 0.9836 | | **max_accuracy** | **0.9836** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 546 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 10.78 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.52 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 22 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------|:----------------------------------------------|:-------------------------------------------------------| | <code>What is the capital of Brazil?</code> | <code>Capital city of Brazil</code> | <code>What is the capital of Argentina?</code> | | <code>How do I install Python on my computer?</code> | <code>How do I set up Python on my PC?</code> | <code>How do I uninstall Python on my computer?</code> | | <code>How do I apply for a credit card?</code> | <code>How do I get a credit card?</code> | <code>How do I cancel a credit card?</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.7 } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 61 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 10.66 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.43 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.54 tokens</li><li>max: 17 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------|:---------------------------------------------------------|:-----------------------------------------------------| | <code>How to create a podcast?</code> | <code>Steps to start a podcast</code> | <code>How to create a vlog?</code> | | <code>How many states are there in the USA?</code> | <code>Total number of states in the United States</code> | <code>How many provinces are there in Canada?</code> | | <code>What is the population of India?</code> | <code>How many people live in India?</code> | <code>What is the population of China?</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": 
"TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.7 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 2 - `learning_rate`: 3e-06 - `weight_decay`: 0.01 - `num_train_epochs`: 22 - `lr_scheduler_type`: cosine - `warmup_steps`: 50 - `load_best_model_at_end`: True - `optim`: adamw_torch_fused #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 2 - `eval_accumulation_steps`: None - `learning_rate`: 3e-06 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 22 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 50 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - 
`include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | triplet-validation_max_accuracy | |:--------:|:-------:|:-------------:|:----------:|:-------------------------------:| | 1.0 | 9 | - | 0.6381 | - | | 1.1111 | 10 | 0.6743 | - | - | | 2.0 | 18 | - | 0.6262 | - | | 2.2222 | 20 | 0.6608 | - | - | | 3.0 | 27 | - | 0.6066 | - | | 3.3333 | 30 | 0.6517 | - | - | | 4.0 | 36 | - | 0.5795 | - | | 4.4444 | 40 | 0.6288 | - | - | | 5.0 | 45 | - | 0.5453 | - | | 5.5556 | 50 | 0.5934 | - | - | | 6.0 | 54 | - | 0.5052 | - | | 6.6667 | 60 | 0.5708 | - | - | | 7.0 | 63 | - | 0.4652 | - | | 7.7778 | 70 | 0.5234 | - | - | | 8.0 | 72 | - | 0.4270 | - | | 8.8889 | 80 | 0.5041 | - | - | | 9.0 | 81 | - | 0.3918 | - | | 10.0 | 90 | 0.4666 | 0.3589 | - | | 11.0 | 99 | - | 0.3292 | - | | 11.1111 | 100 | 0.4554 | - | - | | 12.0 | 108 | - | 0.3029 | - | | 12.2222 | 110 | 0.4208 | - | - | | 13.0 | 117 | - | 0.2797 | - | | 13.3333 | 120 | 0.4076 | - | - | | 14.0 | 126 | - | 0.2607 | - | | 14.4444 | 130 | 0.3958 | - | - | | 15.0 | 135 | - | 0.2471 | - | | 15.5556 | 140 | 0.3881 | - | - | | 16.0 | 144 | - | 0.2365 | - | | 16.6667 | 150 | 0.3595 | - | - | | 17.0 | 153 | - | 0.2286 | - | | 17.7778 | 160 | 0.354 | - | - | | 18.0 | 162 | - | 0.2232 | - | | 18.8889 | 170 | 0.3506 | - | - | | 19.0 | 171 | - | 0.2199 | - | | 20.0 | 180 | 0.3555 | 0.2182 | - | | 21.0 | 189 | - | 0.2175 | - | | 21.1111 | 190 | 0.3526 | - | - | | **22.0** | **198** | **-** | **0.2174** | **0.9836** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.32.1 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
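The training section above records the TripletLoss configuration only as JSON and a hyperparameter list. A hedged sketch of how that setup could be reproduced with the Sentence Transformers 3.x trainer follows; the three-row in-memory dataset reuses samples quoted in the card, the output directory is a placeholder, and gradient accumulation and evaluation wiring are omitted for brevity.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Base model and loss settings as documented in the card.
model = SentenceTransformer("intfloat/multilingual-e5-small")
loss = TripletLoss(
    model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.7,
)

# Tiny illustrative dataset built from three training samples quoted above.
train_dataset = Dataset.from_dict({
    "anchor": [
        "What is the capital of Brazil?",
        "How do I install Python on my computer?",
        "How do I apply for a credit card?",
    ],
    "positive": [
        "Capital city of Brazil",
        "How do I set up Python on my PC?",
        "How do I get a credit card?",
    ],
    "negative": [
        "What is the capital of Argentina?",
        "How do I uninstall Python on my computer?",
        "How do I cancel a credit card?",
    ],
})

# Hyperparameters taken from the card; output_dir is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="e5-small-triplet",
    num_train_epochs=22,
    per_device_train_batch_size=32,
    learning_rate=3e-6,
    warmup_steps=50,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```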
{"base_model": "intfloat/multilingual-e5-small", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:546", "loss:TripletLoss"], "widget": [{"source_sentence": "How to cook a turkey?", "sentences": ["How to make a turkey sandwich?", "World's biggest desert by area", "Steps to roast a turkey"]}, {"source_sentence": "What is the best way to learn a new language?", "sentences": ["Author of the play 'Hamlet'", "What is the fastest way to travel?", "How can I effectively learn a new language?"]}, {"source_sentence": "Who wrote 'To Kill a Mockingbird'?", "sentences": ["Who wrote 'The Great Gatsby'?", "How can I effectively save money?", "Author of 'To Kill a Mockingbird'"]}, {"source_sentence": "Who was the first person to climb Mount Everest?", "sentences": ["Steps to visit the Great Wall of China", "Who was the first person to climb K2?", "First climber to reach the summit of Everest"]}, {"source_sentence": "What is the capital city of Canada?", "sentences": ["First circumnavigator of the globe", "What is the capital of Canada?", "What is the capital city of Australia?"]}], "model-index": [{"name": "SentenceTransformer based on intfloat/multilingual-e5-small", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "triplet validation", "type": "triplet-validation"}, "metrics": [{"type": "cosine_accuracy", "value": 0.9836065573770492, "name": "Cosine Accuracy"}, {"type": "dot_accuracy", "value": 0.01639344262295082, "name": "Dot Accuracy"}, {"type": "manhattan_accuracy", "value": 0.9836065573770492, "name": "Manhattan Accuracy"}, {"type": "euclidean_accuracy", "value": 0.9836065573770492, "name": "Euclidean Accuracy"}, {"type": "max_accuracy", "value": 0.9836065573770492, "name": "Max Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,648
hltcoe/plaidx-large-zho-tdist-t53b-engeng
hltcoe
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "zh", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2201.08471", "arxiv:2401.04810", "license:mit", "endpoints_compatible", "region:us" ]
2023-12-28T01:19:45Z
2025-03-12T23:37:23+00:00
15
0
--- datasets: - ms_marco - hltcoe/tdist-msmarco-scores language: - en - zh license: mit tags: - clir - colbertx - plaidx - xlm-roberta-large task_categories: - text-retrieval - information-retrieval task_ids: - passage-retrieval - cross-language-retrieval --- # ColBERT-X for English-Chinese CLIR using Translate-Distill ## CLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Chinese - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval model through translation and distillation. `plaidx-large-zho-tdist-t53b-engeng` is trained with KL-Divergence from the t53b MonoT5 reranker inferenced on English MS MARCO training queries and English passages. ### Teacher Models: - `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k) - `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50) - per device batch size (number of query-passage set): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ## Usage To properly load ColBERT-X models from Huggingface Hub, please use the following version of PLAID-X. ```bash pip install PLAID-X==0.3.1 ``` Following code snippet loads the model through Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-zho-tdist-t53b-engeng', colbert_config=ColBERTConfig()) ``` For full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ``` ```bibtex @inproceedings{translate-distill, author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller}, title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation}, booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)}, year = {2024}, url = {https://arxiv.org/abs/2401.04810} } ```
null
Non_BioNLP
# ColBERT-X for English-Chinese CLIR using Translate-Distill ## CLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Chinese - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval model through translation and distillation. `plaidx-large-zho-tdist-t53b-engeng` is trained with KL-Divergence from the t53b MonoT5 reranker inferenced on English MS MARCO training queries and English passages. ### Teacher Models: - `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k) - `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50) - per device batch size (number of query-passage set): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ## Usage To properly load ColBERT-X models from Huggingface Hub, please use the following version of PLAID-X. ```bash pip install PLAID-X==0.3.1 ``` Following code snippet loads the model through Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-zho-tdist-t53b-engeng', colbert_config=ColBERTConfig()) ``` For full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ``` ```bibtex @inproceedings{translate-distill, author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller}, title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation}, booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)}, year = {2024}, url = {https://arxiv.org/abs/2401.04810} } ```
{"datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "language": ["en", "zh"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
task
[ "TRANSLATION" ]
45,649
drewgenai/legal-ft-v0
drewgenai
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:156", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-02-17T22:12:41Z
2025-02-17T22:13:27+00:00
13
0
--- base_model: Snowflake/snowflake-arctic-embed-l library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:156 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: How many tokens can Google's Gemini series accept? sentences: - 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs. Even more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.' - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.' - 'The idea is seductive: as the internet floods with AI-generated slop the models themselves will degenerate, feeding on their own output in a way that leads to their inevitable demise! That’s clearly not happening. Instead, we are seeing AI labs increasingly train on synthetic content—deliberately creating artificial data to help steer their models in the right way. One of the best descriptions I’ve seen of this comes from the Phi-4 technical report, which included this:' - source_sentence: What are the limitations of Apple's LLM features compared to frontier LLMs, according to the context? sentences: - 'These abilities are just a few weeks old at this point, and I don’t think their impact has been fully felt yet. If you haven’t tried them out yet you really should. Both Gemini and OpenAI offer API access to these features as well. OpenAI started with a WebSocket API that was quite challenging to use, but in December they announced a new WebRTC API which is much easier to get started with. Building a web app that a user can talk to via voice is easy now! Prompt driven app generation is a commodity already This was possible with GPT-4 in 2023, but the value it provides became evident in 2024.' - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user I know what these models are capable of, and Apple’s LLM features offer a pale imitation of what a frontier LLM can do. Instead we’re getting notification summaries that misrepresent news headlines and writing assistant tools that I’ve not found useful at all. Genmoji are kind of fun though. The rise of inference-scaling “reasoning” models The most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as o1-preview and o1-mini on September 12th.' - 'Here’s the sequel to this post: Things we learned about LLMs in 2024. 
Large Language Models In the past 24-36 months, our species has discovered that you can take a GIANT corpus of text, run it through a pile of GPUs, and use it to create a fascinating new kind of software. LLMs can do a lot of things. They can answer questions, summarize documents, translate from one language to another, extract information and even write surprisingly competent code. They can also help you cheat at your homework, generate unlimited streams of fake content and be used for all manner of nefarious purposes.' - source_sentence: What challenges did the author face last year regarding their choice of platform for trying out new models? sentences: - 'One way to think about these models is an extension of the chain-of-thought prompting trick, first explored in the May 2022 paper Large Language Models are Zero-Shot Reasoners. This is that trick where, if you get a model to talk out loud about a problem it’s solving, you often get a result which the model would not have achieved otherwise. o1 takes this process and further bakes it into the model itself. The details are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the problem that are not directly visible to the user (though the ChatGPT UI shows a summary of them), then outputs a final result.' - 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like. Apple Intelligence is bad, Apple’s MLX library is excellent As a Mac user I’ve been feeling a lot better about my choice of platform this year. Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.' - 'January 7th: It’s OK to call it Artificial Intelligence 9th: What I should have said about the term Artificial Intelligence 17th: Talking about Open Source LLMs on Oxide and Friends 26th: LLM 0.13: The annotated release notes February 21st: The killer app of Gemini Pro 1.5 is video March 5th: Prompt injection and jailbreaking are not the same thing 8th: The GPT-4 barrier has finally been broken 22nd: Claude and ChatGPT for ad-hoc sidequests 23rd: Building and testing C extensions for SQLite with ChatGPT Code Interpreter 26th: llm cmd undo last git commit—a new plugin for LLM April 8th: Building files-to-prompt entirely using Claude 3 Opus 10th: Three major LLM releases in 24 hours (plus weeknotes)' - source_sentence: What was the maximum token limit for most models last year before the introduction of Gemini 15 Pro? sentences: - 'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition. (I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.) Whatever the term may mean, agents still have that feeling of perpetually “coming soon”.' - Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. 
Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns. - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.' - source_sentence: Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument? sentences: - Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right. - 'The GPT-4 barrier was comprehensively broken Some of those GPT-4 models run on my laptop LLM prices crashed, thanks to competition and increased efficiency Multimodal vision is common, audio and video are starting to emerge Voice and live camera mode are science fiction come to life Prompt driven app generation is a commodity already Universal access to the best models lasted for just a few short months “Agents” still haven’t really happened yet Evals really matter Apple Intelligence is bad, Apple’s MLX library is excellent The rise of inference-scaling “reasoning” models Was the best currently available LLM trained in China for less than $6m? The environmental impact got better The environmental impact got much, much worse' - 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs. Even more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.' 
model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.8333333333333334 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9583333333333334 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8333333333333334 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3194444444444444 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.20000000000000004 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.8333333333333334 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9583333333333334 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9301444091161569 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.90625 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.90625 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("drewgenai/legal-ft-v0") # Run inference sentences = [ 'Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument?', 'Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.', 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.8333 | | cosine_accuracy@3 | 0.9583 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.8333 | | cosine_precision@3 | 0.3194 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.8333 | | cosine_recall@3 | 0.9583 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9301** | | cosine_mrr@10 | 0.9062 | | cosine_map@100 | 0.9062 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 156 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 156 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 19.97 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.5 tokens</li><li>max: 204 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What analogy is used to describe LLMs in the context provided?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> | | <code>What factors influence the effectiveness of LLMs according to the context?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> | | <code>What is the significance of Claude Artifacts in the context of LLMs and application development?</code> | <code>We already knew LLMs were spookily good at writing code. 
If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.<br>Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.<br>With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.<br>Here’s my Extract URLs app, entirely generated by Claude:</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 5 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: 
False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9177 | | 2.0 | 32 | 0.9330 | | 3.0 | 48 | 0.9301 | | 3.125 | 50 | 0.9301 | | 4.0 | 64 | 0.9301 | | 5.0 | 80 | 0.9301 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
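The training configuration above pairs MatryoshkaLoss (dimensions 768/512/256/128/64) with MultipleNegativesRankingLoss, a batch size of 10, and 5 epochs. As a rough, hedged sketch of how such a setup is typically wired together with the sentence-transformers 3.x trainer (the placeholder training pairs and the `output_dir` are illustrative assumptions, not the exact script behind this card):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Start from the same base encoder used for this card
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder (question, passage) pairs standing in for the 156 real training samples
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What analogy is used to describe LLMs?",
        "How long are typical context windows today?",
    ],
    "sentence_1": [
        "LLMs are power-user tools, chainsaws disguised as kitchen knives.",
        "Most serious providers now offer 100,000+ token context windows.",
    ],
})

# In-batch negatives, supervised at several truncated embedding widths
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",        # assumed output path
    num_train_epochs=5,
    per_device_train_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```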
null
Non_BioNLP
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("drewgenai/legal-ft-v0") # Run inference sentences = [ 'Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument?', 'Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.', 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! 
Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.8333 | | cosine_accuracy@3 | 0.9583 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.8333 | | cosine_precision@3 | 0.3194 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.8333 | | cosine_recall@3 | 0.9583 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9301** | | cosine_mrr@10 | 0.9062 | | cosine_map@100 | 0.9062 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 156 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 156 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 19.97 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.5 tokens</li><li>max: 204 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What analogy is used to describe LLMs in the context provided?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> | | <code>What factors influence the effectiveness of LLMs according to the context?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> | | <code>What is the significance of Claude Artifacts in the context of LLMs and application development?</code> | <code>We already knew LLMs were spookily good at writing code. 
If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.<br>Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.<br>With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.<br>Here’s my Extract URLs app, entirely generated by Claude:</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 5 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: 
False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_ndcg@10 | |:-----:|:----:|:--------------:| | 1.0 | 16 | 0.9177 | | 2.0 | 32 | 0.9330 | | 3.0 | 48 | 0.9301 | | 3.125 | 50 | 0.9301 | | 4.0 | 64 | 0.9301 | | 5.0 | 80 | 0.9301 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
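The retrieval metrics reported above come from sentence-transformers' `InformationRetrievalEvaluator`. The sketch below shows that style of evaluation on a toy corpus; the queries, documents, relevance labels, and the small `k` values are placeholders rather than the held-out split behind the numbers in this card.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("drewgenai/legal-ft-v0")

# Toy evaluation data: ids -> text, plus the set of relevant corpus ids per query
queries = {
    "q1": "How many tokens can Google's Gemini series accept?",
}
corpus = {
    "d1": "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths.",
    "d2": "When ChatGPT Advanced Voice mode finally rolled out, it was spectacular.",
}
relevant_docs = {"q1": {"d1"}}

# Small k values because the toy corpus only has two documents
evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    accuracy_at_k=[1, 2],
    precision_recall_at_k=[1, 2],
    mrr_at_k=[2],
    ndcg_at_k=[2],
    map_at_k=[2],
    name="toy",
)
metrics = evaluator(model)
print(metrics)  # cosine accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```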
{"base_model": "Snowflake/snowflake-arctic-embed-l", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:156", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "How many tokens can Google's Gemini series accept?", "sentences": ["When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.", "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.", "The idea is seductive: as the internet floods with AI-generated slop the models themselves will degenerate, feeding on their own output in a way that leads to their inevitable demise!\nThat’s clearly not happening. Instead, we are seeing AI labs increasingly train on synthetic content—deliberately creating artificial data to help steer their models in the right way.\nOne of the best descriptions I’ve seen of this comes from the Phi-4 technical report, which included this:"]}, {"source_sentence": "What are the limitations of Apple's LLM features compared to frontier LLMs, according to the context?", "sentences": ["These abilities are just a few weeks old at this point, and I don’t think their impact has been fully felt yet. If you haven’t tried them out yet you really should.\nBoth Gemini and OpenAI offer API access to these features as well. OpenAI started with a WebSocket API that was quite challenging to use, but in December they announced a new WebRTC API which is much easier to get started with. Building a web app that a user can talk to via voice is easy now!\nPrompt driven app generation is a commodity already\nThis was possible with GPT-4 in 2023, but the value it provides became evident in 2024.", "Now that those features are rolling out they’re pretty weak. As an LLM power-user I know what these models are capable of, and Apple’s LLM features offer a pale imitation of what a frontier LLM can do. Instead we’re getting notification summaries that misrepresent news headlines and writing assistant tools that I’ve not found useful at all. 
Genmoji are kind of fun though.\nThe rise of inference-scaling “reasoning” models\nThe most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as o1-preview and o1-mini on September 12th.", "Here’s the sequel to this post: Things we learned about LLMs in 2024.\nLarge Language Models\nIn the past 24-36 months, our species has discovered that you can take a GIANT corpus of text, run it through a pile of GPUs, and use it to create a fascinating new kind of software.\nLLMs can do a lot of things. They can answer questions, summarize documents, translate from one language to another, extract information and even write surprisingly competent code.\nThey can also help you cheat at your homework, generate unlimited streams of fake content and be used for all manner of nefarious purposes."]}, {"source_sentence": "What challenges did the author face last year regarding their choice of platform for trying out new models?", "sentences": ["One way to think about these models is an extension of the chain-of-thought prompting trick, first explored in the May 2022 paper Large Language Models are Zero-Shot Reasoners.\nThis is that trick where, if you get a model to talk out loud about a problem it’s solving, you often get a result which the model would not have achieved otherwise.\no1 takes this process and further bakes it into the model itself. The details are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the problem that are not directly visible to the user (though the ChatGPT UI shows a summary of them), then outputs a final result.", "I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.", "January\n\n7th: It’s OK to call it Artificial Intelligence\n\n9th: What I should have said about the term Artificial Intelligence\n\n17th: Talking about Open Source LLMs on Oxide and Friends\n\n26th: LLM 0.13: The annotated release notes\n\n\n\nFebruary\n\n21st: The killer app of Gemini Pro 1.5 is video\n\n\n\nMarch\n\n5th: Prompt injection and jailbreaking are not the same thing\n\n8th: The GPT-4 barrier has finally been broken\n\n22nd: Claude and ChatGPT for ad-hoc sidequests\n\n23rd: Building and testing C extensions for SQLite with ChatGPT Code Interpreter\n\n26th: llm cmd undo last git commit—a new plugin for LLM\n\n\n\nApril\n\n8th: Building files-to-prompt entirely using Claude 3 Opus\n\n10th: Three major LLM releases in 24 hours (plus weeknotes)"]}, {"source_sentence": "What was the maximum token limit for most models last year before the introduction of Gemini 15 Pro?", "sentences": ["The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. 
The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.", "Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns.", "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million."]}, {"source_sentence": "Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument?", "sentences": ["Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.", "The GPT-4 barrier was comprehensively broken\nSome of those GPT-4 models run on my laptop\nLLM prices crashed, thanks to competition and increased efficiency\nMultimodal vision is common, audio and video are starting to emerge\nVoice and live camera mode are science fiction come to life\nPrompt driven app generation is a commodity already\nUniversal access to the best models lasted for just a few short months\n“Agents” still haven’t really happened yet\nEvals really matter\nApple Intelligence is bad, Apple’s MLX library is excellent\nThe rise of inference-scaling “reasoning” models\nWas the best currently available LLM trained in China for less than $6m?\nThe environmental impact got better\nThe environmental impact got much, much worse", "When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! 
Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish."]}], "model-index": [{"name": "SentenceTransformer based on Snowflake/snowflake-arctic-embed-l", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8333333333333334, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9583333333333334, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 1.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8333333333333334, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3194444444444444, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.20000000000000004, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.10000000000000002, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8333333333333334, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9583333333333334, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 1.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9301444091161569, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.90625, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.90625, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,651
RichardErkhov/akoksal_-_LongForm-OPT-2.7B-8bits
RichardErkhov
null
[ "safetensors", "opt", "arxiv:2304.08460", "8-bit", "bitsandbytes", "region:us" ]
2025-02-02T13:18:55Z
2025-02-02T13:20:31+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) LongForm-OPT-2.7B - bnb 8bits - Model creator: https://huggingface.co/akoksal/ - Original model: https://huggingface.co/akoksal/LongForm-OPT-2.7B/ Original model description: --- language: - en - multilingual - de - it - es - fr tags: - instruction-tuning - text-generation-inference - text2text-generation widget: - text: Write an essay about meditation. [EOI] example_title: Essay Generation - text: Give me 5 steps to clean my room. [EOI] example_title: How-to Instructions - text: How are the continents formed? [EOI] example_title: Question-Answering - text: >- Prompt: A man draws a gun in a dark alley and asks for your wallet. You begrudgingly obey. He throws it on the ground, shoots it till it screeches, and turns to you; 'you are safe now'. Write a story about given prompt. [EOI] example_title: Story Generation - text: >- Write directions of a cooking recipe with these ingredients: chicken breast, carrots, green peas, celery, butter, onion, flour, salt, black pepper, celery seed, chicken broth, milk, unbaked pie crusts [EOI] example_title: Recipe Generation - text: >- Schreiben Sie einen Blogbeitrag über die Vorteile des Lesens von Büchern. [EOI] example_title: German Essay Generation inference: parameters: top_p: 0.9 do_sample: true max_length: 75 datasets: - akoksal/LongForm --- ## LongForm-OPT-2.7B The LongForm dataset is created by leveraging English corpus examples with reverse instructions. We select a diverse set of human-written documents from existing corpora such as C4 and Wikipedia and generate instructions for the given documents via LLMs. Then, we extend these examples with structured corpora examples such as Stack Exchange and WikiHow and task examples such as question answering, email writing, grammar error correction, story/poem generation, and text summarization. Github Repo: https://github.com/akoksal/LongForm ![The LongForm dataset](https://github.com/akoksal/LongForm/blob/main/figures/intro_example.jpg?raw=true) ### For LongForm OPT and LLaMA models: Use [EOI] to indicate the end of instruction. LongForm-**T5-XL**: https://huggingface.co/akoksal/LongForm-T5-XL LongForm-**OPT-6.7B**: https://huggingface.co/akoksal/LongForm-OPT-6.7B ## How to Load ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("akoksal/LongForm-OPT-2.7B") tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-OPT-2.7B") instruction = "Write an essay about meditation. [EOI]" torch.manual_seed(42) input_ids = tokenizer(instruction, return_tensors="pt").input_ids target_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9) tokenizer.decode(target_ids[0], skip_special_tokens=True) # Output: # > Write an essay about meditation. [EOI]Do you need some inspiration to\ # meditate? Do you know someone who is a great meditator but you aren't sure\ # what to say to them? This might be the perfect opportunity to tell them.\ # The ability to listen and learn and grow can ``` ## Evaluation We provide in-depth evaluation of LongForm models and baselines in the paper. We present the METEOR scores of models in out-of-domain datasets. In all tasks, Recipe Generation (RGen), long-form question answering (ELI5), short story generation (WritingPrompts/WP), LongForm models outperform prior instruction-tuned models. 

| | **All** | **Recipe Generation** | **ELI5** | **Writing Prompts** |
|-----------------------|---------|-----------------------------------|----------|---------------------|
| **T0++** | 10.9 | 18.7 | 3.8 | 10.2 |
| **Tk-Instruct** | 6.3 | 12.9* | 3.6 | 2.4 |
| **Flan-T5** | 10.6 | 20.9* | 3.5 | 7.4 |
| **Alpaca-LLaMA-7B** | 14.6 | 19.5 | 12.5 | 11.8 |
| **OPT-30B** | 11.1 | 18.6 | 12.2 | 2.6 |
| [**LongForm-T5-XL**](https://huggingface.co/akoksal/LongForm-T5-XL) | 16.3 | 20.2 | 18.3 | 10.6 |
| [**LongForm-OPT-2.7B**](https://huggingface.co/akoksal/LongForm-OPT-2.7B) | 17.8 | 15.5 | 17.9 | **19.9** |
| [**LongForm-OPT-6.7B**](https://huggingface.co/akoksal/LongForm-OPT-6.7B) | 17.7 | 16.9 | 17.2 | 19.0 |
| [**LongForm-LLaMA-7B**](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)‡ | **19.7** | **21.7** | **18.6** | 18.9 |

Smaller versions of LongForm-OPT models are also available:
- [**LongForm-OPT-1.3B**](https://huggingface.co/akoksal/LongForm-OPT-1.3B)
- [**LongForm-OPT-350M**](https://huggingface.co/akoksal/LongForm-OPT-350M)
- [**LongForm-OPT-125M**](https://huggingface.co/akoksal/LongForm-OPT-125M)

‡: We can only release the difference between LongForm-LLaMA-7B and pretrained LLaMA-7B publicly, due to the restrictions of the LLaMA models.

## Limitations

The LongForm dataset and models mainly focus on long text generation and have limitations regarding structured prediction tasks in NLP. Additionally, we observe that LongForm models may present hallucination problems similar to those found in LLMs.

## License

The LongForm project is subject to an MIT License with custom limitations reflecting restrictions imposed by OpenAI (for the instruction generation part), as well as the licenses of the underlying language models (OPT, LLaMA, and T5). The WikiHow subset of LongForm-C is subject to the license proposed by WikiHow.

## Citation
```
@misc{koksal2023longform,
      title={LongForm: Effective Instruction Tuning with Reverse Instructions},
      author={Abdullatif Köksal and Timo Schick and Anna Korhonen and Hinrich Schütze},
      year={2023},
      eprint={2304.08460},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
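Note that the snippet in the original model description loads the full-precision `akoksal/LongForm-OPT-2.7B` weights. To use the 8-bit checkpoint hosted in this repository instead, something along the following lines should work; it assumes a CUDA GPU with `bitsandbytes` and `accelerate` installed, and the prompt and generation settings are only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/akoksal_-_LongForm-OPT-2.7B-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The checkpoint already carries its bitsandbytes 8-bit quantization config,
# so no explicit BitsAndBytesConfig should be needed here.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# LongForm models expect "[EOI]" to mark the end of the instruction
prompt = "Give me 5 steps to clean my room. [EOI]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```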
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) LongForm-OPT-2.7B - bnb 8bits - Model creator: https://huggingface.co/akoksal/ - Original model: https://huggingface.co/akoksal/LongForm-OPT-2.7B/ Original model description: --- language: - en - multilingual - de - it - es - fr tags: - instruction-tuning - text-generation-inference - text2text-generation widget: - text: Write an essay about meditation. [EOI] example_title: Essay Generation - text: Give me 5 steps to clean my room. [EOI] example_title: How-to Instructions - text: How are the continents formed? [EOI] example_title: Question-Answering - text: >- Prompt: A man draws a gun in a dark alley and asks for your wallet. You begrudgingly obey. He throws it on the ground, shoots it till it screeches, and turns to you; 'you are safe now'. Write a story about given prompt. [EOI] example_title: Story Generation - text: >- Write directions of a cooking recipe with these ingredients: chicken breast, carrots, green peas, celery, butter, onion, flour, salt, black pepper, celery seed, chicken broth, milk, unbaked pie crusts [EOI] example_title: Recipe Generation - text: >- Schreiben Sie einen Blogbeitrag über die Vorteile des Lesens von Büchern. [EOI] example_title: German Essay Generation inference: parameters: top_p: 0.9 do_sample: true max_length: 75 datasets: - akoksal/LongForm --- ## LongForm-OPT-2.7B The LongForm dataset is created by leveraging English corpus examples with reverse instructions. We select a diverse set of human-written documents from existing corpora such as C4 and Wikipedia and generate instructions for the given documents via LLMs. Then, we extend these examples with structured corpora examples such as Stack Exchange and WikiHow and task examples such as question answering, email writing, grammar error correction, story/poem generation, and text summarization. Github Repo: https://github.com/akoksal/LongForm ![The LongForm dataset](https://github.com/akoksal/LongForm/blob/main/figures/intro_example.jpg?raw=true) ### For LongForm OPT and LLaMA models: Use [EOI] to indicate the end of instruction. LongForm-**T5-XL**: https://huggingface.co/akoksal/LongForm-T5-XL LongForm-**OPT-6.7B**: https://huggingface.co/akoksal/LongForm-OPT-6.7B ## How to Load ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("akoksal/LongForm-OPT-2.7B") tokenizer = AutoTokenizer.from_pretrained("akoksal/LongForm-OPT-2.7B") instruction = "Write an essay about meditation. [EOI]" torch.manual_seed(42) input_ids = tokenizer(instruction, return_tensors="pt").input_ids target_ids = model.generate(input_ids, do_sample=True, max_new_tokens=50, top_p=0.9) tokenizer.decode(target_ids[0], skip_special_tokens=True) # Output: # > Write an essay about meditation. [EOI]Do you need some inspiration to\ # meditate? Do you know someone who is a great meditator but you aren't sure\ # what to say to them? This might be the perfect opportunity to tell them.\ # The ability to listen and learn and grow can ``` ## Evaluation We provide in-depth evaluation of LongForm models and baselines in the paper. We present the METEOR scores of models in out-of-domain datasets. In all tasks, Recipe Generation (RGen), long-form question answering (ELI5), short story generation (WritingPrompts/WP), LongForm models outperform prior instruction-tuned models. 

| | **All** | **Recipe Generation** | **ELI5** | **Writing Prompts** |
|-----------------------|---------|-----------------------------------|----------|---------------------|
| **T0++** | 10.9 | 18.7 | 3.8 | 10.2 |
| **Tk-Instruct** | 6.3 | 12.9* | 3.6 | 2.4 |
| **Flan-T5** | 10.6 | 20.9* | 3.5 | 7.4 |
| **Alpaca-LLaMA-7B** | 14.6 | 19.5 | 12.5 | 11.8 |
| **OPT-30B** | 11.1 | 18.6 | 12.2 | 2.6 |
| [**LongForm-T5-XL**](https://huggingface.co/akoksal/LongForm-T5-XL) | 16.3 | 20.2 | 18.3 | 10.6 |
| [**LongForm-OPT-2.7B**](https://huggingface.co/akoksal/LongForm-OPT-2.7B) | 17.8 | 15.5 | 17.9 | **19.9** |
| [**LongForm-OPT-6.7B**](https://huggingface.co/akoksal/LongForm-OPT-6.7B) | 17.7 | 16.9 | 17.2 | 19.0 |
| [**LongForm-LLaMA-7B**](https://huggingface.co/akoksal/LongForm-LLaMA-7B-diff)‡ | **19.7** | **21.7** | **18.6** | 18.9 |

Smaller versions of LongForm-OPT models are also available:
- [**LongForm-OPT-1.3B**](https://huggingface.co/akoksal/LongForm-OPT-1.3B)
- [**LongForm-OPT-350M**](https://huggingface.co/akoksal/LongForm-OPT-350M)
- [**LongForm-OPT-125M**](https://huggingface.co/akoksal/LongForm-OPT-125M)

‡: We can only release the difference between LongForm-LLaMA-7B and pretrained LLaMA-7B publicly, due to the restrictions of the LLaMA models.

## Limitations

The LongForm dataset and models mainly focus on long text generation and have limitations regarding structured prediction tasks in NLP. Additionally, we observe that LongForm models may present hallucination problems similar to those found in LLMs.

## License

The LongForm project is subject to an MIT License with custom limitations reflecting restrictions imposed by OpenAI (for the instruction generation part), as well as the licenses of the underlying language models (OPT, LLaMA, and T5). The WikiHow subset of LongForm-C is subject to the license proposed by WikiHow.

## Citation
```
@misc{koksal2023longform,
      title={LongForm: Effective Instruction Tuning with Reverse Instructions},
      author={Abdullatif Köksal and Timo Schick and Anna Korhonen and Hinrich Schütze},
      year={2023},
      eprint={2304.08460},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
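Since the evaluation above is reported in METEOR, the following hedged sketch shows how generations can be scored the same way with the `evaluate` library; the prediction and reference strings are placeholders, not outputs of this model.

```python
import evaluate  # also requires nltk; the metric fetches its wordnet data on first use

meteor = evaluate.load("meteor")

predictions = ["Season the chicken breast, roast it at 200C for about 25 minutes, then rest it."]
references = ["Preheat the oven to 200C, season the chicken breast and roast for 25 minutes."]

print(meteor.compute(predictions=predictions, references=references))  # {'meteor': ...}
```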
{}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
45,652
Barambio/distilbert-base-uncased-finetuned-emotion
Barambio
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-01T08:16:18Z
2023-04-04T14:58:08+00:00
14
0
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - type: accuracy
      value: 0.929
      name: Accuracy
    - type: f1
      value: 0.9289897994289955
      name: F1
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.929
- F1: 0.9290

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8318        | 1.0   | 250  | 0.3208          | 0.9065   | 0.9032 |
| 0.2543        | 2.0   | 500  | 0.2202          | 0.929    | 0.9290 |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
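The usage sections of this card are still placeholders. As a stopgap, the hedged example below runs the checkpoint through the `transformers` text-classification pipeline; the repo id matches this card, while the label mapping is an assumption based on the emotion dataset's six classes.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Barambio/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see my friends this weekend!"))
# If the checkpoint's config does not map ids to names, labels come back as
# LABEL_0..LABEL_5, corresponding to the emotion dataset's classes
# (sadness, joy, love, anger, fear, surprise) in that order.
```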
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.929
- F1: 0.9290

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8318        | 1.0   | 250  | 0.3208          | 0.9065   | 0.9032 |
| 0.2543        | 2.0   | 500  | 0.2202          | 0.929    | 0.9290 |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
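For readers who want to sanity-check the reported accuracy and weighted F1, the hedged sketch below re-scores the emotion validation split; the dataset id (`emotion`, also published as `dair-ai/emotion`) and the batching settings are assumptions.

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Barambio/distilbert-base-uncased-finetuned-emotion",
    truncation=True,
)

# The emotion dataset is also published as "dair-ai/emotion"
val = load_dataset("emotion", split="validation")

preds = clf(val["text"], batch_size=64)
# Map the returned label strings back to integer ids via the model config
label2id = clf.model.config.label2id
y_pred = [label2id[p["label"]] for p in preds]

print("accuracy:", accuracy_score(val["label"], y_pred))
print("weighted F1:", f1_score(val["label"], y_pred, average="weighted"))
```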
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.929, "name": "Accuracy"}, {"type": "f1", "value": 0.9289897994289955, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,653
gokuls/distilbert_add_GLUE_Experiment_cola_256
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-26T12:28:34Z
2023-01-26T12:31:48+00:00
136
0
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert_add_GLUE_Experiment_cola_256
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GLUE COLA
      type: glue
      config: cola
      split: validation
      args: cola
    metrics:
    - type: matthews_correlation
      value: 0.0
      name: Matthews Correlation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_add_GLUE_Experiment_cola_256

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Matthews Correlation: 0.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6125        | 1.0   | 34   | 0.6201          | 0.0                  |
| 0.6084        | 2.0   | 68   | 0.6182          | 0.0                  |
| 0.6071        | 3.0   | 102  | 0.6184          | 0.0                  |
| 0.6081        | 4.0   | 136  | 0.6186          | 0.0                  |
| 0.6081        | 5.0   | 170  | 0.6182          | 0.0                  |
| 0.607         | 6.0   | 204  | 0.6185          | 0.0                  |
| 0.6082        | 7.0   | 238  | 0.6181          | 0.0                  |
| 0.609         | 8.0   | 272  | 0.6184          | 0.0                  |
| 0.607         | 9.0   | 306  | 0.6213          | 0.0                  |
| 0.6082        | 10.0  | 340  | 0.6193          | 0.0                  |
| 0.6081        | 11.0  | 374  | 0.6196          | 0.0                  |
| 0.6071        | 12.0  | 408  | 0.6193          | 0.0                  |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
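Every epoch above reports a Matthews correlation of exactly 0.0, which usually indicates the classifier has collapsed to predicting a single class. The hedged sketch below re-scores the GLUE CoLA validation split and checks the prediction distribution; the batch size is an arbitrary choice.

```python
from collections import Counter

from datasets import load_dataset
from evaluate import load
from transformers import pipeline

clf = pipeline("text-classification", model="gokuls/distilbert_add_GLUE_Experiment_cola_256")
cola = load_dataset("glue", "cola", split="validation")

preds = clf(cola["sentence"], batch_size=64)
label2id = clf.model.config.label2id
y_pred = [label2id[p["label"]] for p in preds]

print(Counter(y_pred))  # a single key here confirms the single-class collapse
print(load("matthews_correlation").compute(predictions=y_pred, references=cola["label"]))
```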
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_cola_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6181 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6125 | 1.0 | 34 | 0.6201 | 0.0 | | 0.6084 | 2.0 | 68 | 0.6182 | 0.0 | | 0.6071 | 3.0 | 102 | 0.6184 | 0.0 | | 0.6081 | 4.0 | 136 | 0.6186 | 0.0 | | 0.6081 | 5.0 | 170 | 0.6182 | 0.0 | | 0.607 | 6.0 | 204 | 0.6185 | 0.0 | | 0.6082 | 7.0 | 238 | 0.6181 | 0.0 | | 0.609 | 8.0 | 272 | 0.6184 | 0.0 | | 0.607 | 9.0 | 306 | 0.6213 | 0.0 | | 0.6082 | 10.0 | 340 | 0.6193 | 0.0 | | 0.6081 | 11.0 | 374 | 0.6196 | 0.0 | | 0.6071 | 12.0 | 408 | 0.6193 | 0.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
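The 0.0 Matthews correlation reported above is worth unpacking: MCC is exactly zero when the predictions carry no information about the labels, which in practice usually means the classifier collapsed to a single class. A hedged sketch of how the metric could be re-checked on the GLUE CoLA validation split is below; it assumes the checkpoint keeps the default `LABEL_0`/`LABEL_1` names, since the card shows no custom label mapping:

```python
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

# Load the published checkpoint and the CoLA validation split.
clf = pipeline("text-classification", model="gokuls/distilbert_add_GLUE_Experiment_cola_256")
val = load_dataset("glue", "cola", split="validation")

# Map "LABEL_0"/"LABEL_1" back to integer ids (assumed default naming).
preds = [int(p["label"].split("_")[-1]) for p in clf(val["sentence"])]

# Expected to print ~0.0, matching the table above, if every prediction is the same class.
print(matthews_corrcoef(val["label"], preds))
```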
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_add_GLUE_Experiment_cola_256", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.0, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,654
vidhi0206/setfit-paraphrase-mpnet-base-v2-emotion_comp
vidhi0206
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2024-04-03T09:50:51Z
2024-04-03T09:51:42+00:00
4
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: i seem to remember it was gold dust not willy wonka style gold tickets but i m feeling generous and although i liked the new faceplate for me the redesign just didn t work - text: im feeling kind of irritated that the school year is over halfway over and all hes been getting is speech - text: i feel stumble a class content link href https plusone - text: i feeling so aggravated about all of this - text: im feeling stupid feeling stupid coming back to you inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.448 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 4 | <ul><li>'i feel kind of strange'</li><li>'i am feeling pretty restless right now while typing this'</li><li>'i feel pressured when people say im going t beat you or whatever'</li></ul> | | 3 | <ul><li>'i feel cranky and annoyed when i dont'</li><li>'i feel i did some thing impolite katanya'</li><li>'i feel like i should be 
offended but yawwwn'</li></ul> | | 5 | <ul><li>'i was feeling an act of god at work in my life and it was an amazing feeling'</li><li>'i tween sat for my moms boss year old and year old boys this weekend id say babysit but that feels weird considering there were n'</li><li>'i started feeling funny and then friday i woke up sick as a dog'</li></ul> | | 0 | <ul><li>'i am from new jersey and this first drink was consumed at a post prom party so i feel it s appropriately lame'</li><li>'i feel inside cause life is like a game sometimes then you came around me the walls just disappeared nothing to surround me and keep me from my fears im unprotected see how ive opened up oh youve made me trust cause ive never felt like this before im naked around you does it show'</li><li>'i cant believe with that statement being said that im already feeling sexually deprived'</li></ul> | | 2 | <ul><li>'i suddenly feel that this is more than a sweet love song that every girls could sing in front of their boyfriends'</li><li>'i really wish i had the courage to drag a blade across my skin i wish i could do it i wish i could see the blood and feel that sweet release as it starts to pour out of my flesh and down my body'</li><li>'im sure they feel the more caring loving people in the kids lives the better'</li></ul> | | 1 | <ul><li>'i am not feeling particularly creative'</li><li>'id probably go with none on and hope that my date admires a confident girl who feels fine without makeup'</li><li>'i woke on saturday feeling a little brighter and was very keen to get outdoors after spending all day friday wallowing in self pity'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.448 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-base-v2-emotion_comp") # Run inference preds = model("i feeling so aggravated about all of this") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 17.6458 | 55 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0042 | 1 | 0.2835 | - | | 0.2083 | 50 | 0.1427 | - | | 0.4167 | 100 | 0.0968 | - | | 0.625 | 150 | 0.0086 | - | | 0.8333 | 200 | 0.0028 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 4 | <ul><li>'i feel kind of strange'</li><li>'i am feeling pretty restless right now while typing this'</li><li>'i feel pressured when people say im going t beat you or whatever'</li></ul> | | 3 | <ul><li>'i feel cranky and annoyed when i dont'</li><li>'i feel i did some thing impolite katanya'</li><li>'i feel like i should be offended but yawwwn'</li></ul> | | 5 | <ul><li>'i was feeling an act of god at work in my life and it was an amazing feeling'</li><li>'i tween sat for my moms boss year old and year old boys this weekend id say babysit but that feels weird considering there were n'</li><li>'i started feeling funny and then friday i woke up sick as a dog'</li></ul> | | 0 | <ul><li>'i am from new jersey and this first drink was consumed at a post prom party so i feel it s appropriately lame'</li><li>'i feel inside cause life is like a game sometimes then you came around me the walls just disappeared nothing to surround me and keep me from my fears im unprotected see how ive opened up oh youve made me trust cause ive never felt like this before im naked around you does it show'</li><li>'i cant believe with that statement being said that im already feeling sexually deprived'</li></ul> | | 2 | <ul><li>'i suddenly feel that this is more than a sweet love song that every girls could sing in 
front of their boyfriends'</li><li>'i really wish i had the courage to drag a blade across my skin i wish i could do it i wish i could see the blood and feel that sweet release as it starts to pour out of my flesh and down my body'</li><li>'im sure they feel the more caring loving people in the kids lives the better'</li></ul> | | 1 | <ul><li>'i am not feeling particularly creative'</li><li>'id probably go with none on and hope that my date admires a confident girl who feels fine without makeup'</li><li>'i woke on saturday feeling a little brighter and was very keen to get outdoors after spending all day friday wallowing in self pity'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.448 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-base-v2-emotion_comp") # Run inference preds = model("i feeling so aggravated about all of this") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 4 | 17.6458 | 55 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0042 | 1 | 0.2835 | - | | 0.2083 | 50 | 0.1427 | - | | 0.4167 | 100 | 0.0968 | - | | 0.625 | 150 | 0.0086 | - | | 0.8333 | 200 | 0.0028 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create 
the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
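The card documents the SetFit hyperparameters but not the training script itself. A minimal sketch of how an equivalent run might look with SetFit 1.0.x is below; the dataset choice is an assumption (the card lists the training set as Unknown, so `emotion` is used here purely because it matches the six-class, 8-samples-per-label setup):

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments, sample_dataset

# Assumed dataset: six labels, sampled down to 8 examples per class as in the card.
dataset = load_dataset("emotion")
train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=8,            # (8, 8) in the card
    num_epochs=1,            # (1, 1) in the card
    num_iterations=20,
    body_learning_rate=2e-5,
    head_learning_rate=2e-5,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())  # the card reports 0.448 accuracy on its (unnamed) test split
```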
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "i seem to remember it was gold dust not willy wonka style gold tickets but i m feeling generous and although i liked the new faceplate for me the redesign just didn t work"}, {"text": "im feeling kind of irritated that the school year is over halfway over and all hes been getting is speech"}, {"text": "i feel stumble a class content link href https plusone"}, {"text": "i feeling so aggravated about all of this"}, {"text": "im feeling stupid feeling stupid coming back to you"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.448, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,655
meta-llama/Llama-3.2-11B-Vision-Instruct
meta-llama
image-text-to-text
[ "transformers", "safetensors", "mllama", "image-text-to-text", "facebook", "meta", "pytorch", "llama", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-09-18T16:47:28Z
2024-12-04T01:35:48+00:00
1,418,353
1,376
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: image-text-to-text tags: - facebook - meta - pytorch - llama - llama-3 widget: - example_title: rococo art messages: - role: user content: - type: text text: Which era does this piece belong to? Give details about the era. - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg - role: assistant content: 'The piece is a painting of a woman in a red dress, surrounded by gold and white ornate details. The woman is depicted in mid-air, with her arms outstretched and her legs bent at the knees. She is holding a bouquet of flowers in her right hand and a fruit in her left hand. The background of the painting is a light blue sky with pink clouds, and there are also some pink flowers and green leaves surrounding the woman. The overall atmosphere of the painting is one of joy and celebration, as if the woman is dancing or celebrating something. This piece belongs to the Rococo era, which was a style of art and architecture that emerged in Europe in the 18th century. The Rococo style is characterized by its use of pastel colors, curved lines, and ornate details. It was popularized during the reign of King Louis XV of France, who ruled from 1715 to 1774. ' - example_title: diagram understanding messages: - role: user content: - type: text text: Where do the droughts happen according to this diagram? - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/weather_events.png - role: assistant content: '# Drought Regions in 2016 According to the diagram, the regions that experienced severe droughts in 2016 are: - **Eastern and Southern Africa**: This region was severely affected by droughts, with significant impacts on agriculture and water resources. - **Western and Southern Africa**: Droughts also occurred in this region, leading to food insecurity and other challenges. These regions were particularly vulnerable to droughts in 2016, highlighting the need for effective water management strategies and climate resilience measures. ' - example_title: document qa messages: - role: user content: - type: text text: How long does it take from invoice date to due date? Be short and concise. - type: image_url image_url: url: https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png - role: assistant content: 'To determine the time gap between the invoice date and the due date, we need to calculate the difference between these two dates. Invoice Date: February 11, 2019 Due Date: February 26, 2019 Step 1: Calculate the Difference in Days February 26 - February 11 = 15 days Therefore, the time gap between the invoice date and the due date is 15 days. 
' extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_eu_disallowed: true --- ## Model Information The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks. **Model Developer**: Meta **Model Architecture:** Llama 3.2-Vision is built on top of Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | **Supported Languages:** For text only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. 
Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note for image+text applications, English is the only language supported. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks. Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include: 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story. 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine but one that understands both pictures and words. 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. 
## How to use This repository contains two versions of Llama-3.2-11B-Vision-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.45.0 onward, you can run inference using conversational messages that may include an image you can query about. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import requests import torch from PIL import Image from transformers import MllamaForConditionalGeneration, AutoProcessor model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct" model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) processor = AutoProcessor.from_pretrained(model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" image = Image.open(requests.get(url, stream=True).raw) messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "If I had to write a haiku for this one, it would be: "} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor( image, input_text, add_special_tokens=False, return_tensors="pt" ).to(model.device) output = model.generate(**inputs, max_new_tokens=30) print(processor.decode(output[0])) ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, you can use `huggingface-cli` as follows: ``` huggingface-cli download meta-llama/Llama-3.2-11B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-11B-Vision-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 | | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 | | Total | 2.02M | | 584 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. 
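The location-based emissions figure in the table can be roughly sanity-checked from the reported GPU-hours and TDP. The PUE and grid carbon-intensity numbers below are illustrative assumptions, not values from the card (the card only links the methodology paper):

```python
# Back-of-the-envelope check of the location-based training emissions.
gpu_hours = 2.02e6           # cumulative H100-80GB hours reported above
tdp_kw = 0.700               # 700 W peak power capacity per GPU
pue = 1.1                    # ASSUMED power usage effectiveness
grid_kg_co2_per_kwh = 0.38   # ASSUMED grid carbon intensity

energy_kwh = gpu_hours * tdp_kw * pue
tons_co2eq = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"~{tons_co2eq:.0f} tons CO2eq")  # lands in the same ballpark as the 584 t in the table
```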
## Training Data **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- Image Reasoning In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. ### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | ----- | ----- | ----- | ----- | ----- | | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 | | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 | | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 | | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 | | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 | | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 | | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 | ### Instruction Tuned Models | Modality | Capability | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | :---: | ----- | :---: | :---: | ----- | ----- | | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 | | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 | | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 | | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 | | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 | | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 | | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 | | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 | | | | | | | | | | Text | General | MMLU (CoT) | 0 | Macro\_avg/acc | 73.0 | 86.0 | | | Math | MATH (CoT) | 0 | Final\_em | 51.9 | 68.0 | | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 | | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models. ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. 
#### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). **Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures including evaluations and mitigations to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. 
It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. 
We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
null
Non_BioNLP
## Model Information The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text \+ images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2-Vision is built on top of the Llama 3.1 text-only model, which is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision model uses a separately trained vision adapter that integrates with the pre-trained Llama 3.1 language model. The adapter consists of a series of cross-attention layers that feed image encoder representations into the core LLM. | | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | | Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text \+ Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 | **Supported Languages:** For text-only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note that for image+text applications, English is the only supported language. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2-Vision in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2-Vision is intended for commercial and research use. Instruction-tuned models are intended for visual recognition, image reasoning, captioning, and assistant-like chat with images, whereas pretrained models can be adapted for a variety of image reasoning tasks.
Additionally, because of Llama 3.2-Vision’s ability to take images and text as inputs, additional use cases could include: 1. Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. 2. Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. 3. Image Captioning: Image captioning bridges the gap between vision and language, extracting details, understanding the scene, and then crafting a sentence or two that tells the story. 4. Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Similar to a search engine but one that understands both pictures and words. 5. Visual Grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. The Llama 3.2 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.2 Community License allows for these use cases. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-11B-Vision-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.45.0 onward, you can run inference using conversational messages that may include an image you can query about. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import requests import torch from PIL import Image from transformers import MllamaForConditionalGeneration, AutoProcessor model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct" model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) processor = AutoProcessor.from_pretrained(model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" image = Image.open(requests.get(url, stream=True).raw) messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "If I had to write a haiku for this one, it would be: "} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor( image, input_text, add_special_tokens=False, return_tensors="pt" ).to(model.device) output = model.generate(**inputs, max_new_tokens=30) print(processor.decode(output[0])) ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download the original checkpoints, you can use `huggingface-cli` as follows: ``` huggingface-cli download meta-llama/Llama-3.2-11B-Vision-Instruct --include "original/*" --local-dir Llama-3.2-11B-Vision-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. 
Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Training utilized a cumulative of **2.02M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **584** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 3.2-vision 11B | Stage 1 pretraining: 147K H100 hours Stage 2 annealing: 98K H100 hours SFT: 896 H100 hours RLHF: 224 H100 hours | 700 | 71 | 0 | | Llama 3.2-vision 90B | Stage 1 pretraining: 885K H100 hours Stage 2 annealing: 885K H100 hours SFT: 3072 H100 hours RLHF: 2048 H100 hours | 700 | 513 | 0 | | Total | 2.02M | | 584 | 0 | The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.2-Vision was pretrained on 6B image and text pairs. The instruction tuning data includes publicly available vision instruction datasets, as well as over 3M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023\. ## Benchmarks \- Image Reasoning In this section, we report the results for Llama 3.2-Vision models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library. 
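Before the benchmark tables, a quick arithmetic check on the training-compute table above: the per-stage H100 GPU-hour figures do add up to the reported 2.02M total. The short sketch below reproduces only that sum and the corresponding peak-power energy figure; the power-usage-effectiveness and grid emission factors behind the CO2eq estimates are described in the linked methodology and are not assumed here.

```python
# Per-stage H100 GPU-hours, copied from the training table above.
hours_11b = 147_000 + 98_000 + 896 + 224        # pretraining + annealing + SFT + RLHF
hours_90b = 885_000 + 885_000 + 3_072 + 2_048

total_hours = hours_11b + hours_90b
print(f"Total GPU-hours: {total_hours:,}")       # 2,021,240, i.e. ~2.02M

# Energy at the stated 700 W peak power per GPU, before any PUE adjustment.
print(f"Peak-power energy: {total_hours * 700 / 1e6:,.0f} MWh")  # ~1,415 MWh
```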
### Base Pretrained Models | Category | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | ----- | ----- | ----- | ----- | ----- | | Image Understanding | VQAv2 (val) | 0 | Accuracy | 66.8 | 73.6 | | | Text VQA (val) | 0 | Relaxed accuracy | 73.1 | 73.5 | | | DocVQA (val, unseen) | 0 | ANLS | 62.3 | 70.7 | | Visual Reasoning | MMMU (val, 0-shot) | 0 | Micro average accuracy | 41.7 | 49.3 | | | ChartQA (test) | 0 | Accuracy | 39.4 | 54.2 | | | InfographicsQA (val, unseen) | 0 | ANLS | 43.2 | 56.8 | | | AI2 Diagram (test) | 0 | Accuracy | 62.4 | 75.3 | ### Instruction Tuned Models | Modality | Capability | Benchmark | \# Shots | Metric | Llama 3.2 11B | Llama 3.2 90B | | ----- | :---: | ----- | :---: | :---: | ----- | ----- | | Image | College-level Problems and Mathematical Reasoning | MMMU (val, CoT) | 0 | Micro average accuracy | 50.7 | 60.3 | | | | MMMU-Pro, Standard (10 opts, test) | 0 | Accuracy | 33.0 | 45.2 | | | | MMMU-Pro, Vision (test) | 0 | Accuracy | 23.7 | 33.8 | | | | MathVista (testmini) | 0 | Accuracy | 51.5 | 57.3 | | | Charts and Diagram Understanding | ChartQA (test, CoT) | 0 | Relaxed accuracy | 83.4 | 85.5 | | | | AI2 Diagram (test) | 0 | Accuracy | 91.1 | 92.3 | | | | DocVQA (test) | 0 | ANLS | 88.4 | 90.1 | | | General Visual Question Answering | VQAv2 (test) | 0 | Accuracy | 75.2 | 78.1 | | | | | | | | | | Text | General | MMLU (CoT) | 0 | Macro\_avg/acc | 73.0 | 86.0 | | | Math | MATH (CoT) | 0 | Final\_em | 51.9 | 68.0 | | | Reasoning | GPQA | 0 | Accuracy | 32.8 | 46.7 | | | Multilingual | MGSM (CoT) | 0 | em | 68.9 | 86.9 | ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: 1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. 2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. 3. Provide protections for the community to help prevent the misuse of our models. ### Responsible Deployment **Approach:** Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.2 Instruct **Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/). 
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.2 Systems **Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. ### New Capabilities and Use Cases **Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well. **Image Reasoning:** Llama 3.2-Vision models come with multimodal (text and image) input capabilities enabling image reasoning applications. As part of our responsible release process, we took dedicated measures including evaluations and mitigations to address the risk of the models uniquely identifying individuals in images. As with other LLM risks, models may not always be robust to adversarial prompts, and developers should evaluate identification and other applicable risks in the context of their applications as well as consider deploying Llama Guard 3-11B-Vision as part of their system or other mitigations as appropriate to detect and mitigate such risks. ### Evaluations **Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. **Red teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society.
Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets. ### Critical Risks In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas: **1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** For Llama 3.1, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. For Llama 3.2-Vision models, we conducted additional targeted evaluations and found that it was unlikely Llama 3.2 presented an increase in scientific capabilities due to its added image understanding capability as compared to Llama 3.1. **2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. **3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s vision capabilities are not generally germane to cyber uplift, we believe that the testing conducted for Llama 3.1 also applies to Llama 3.2. ### Community **Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and widely distributed across ecosystem partners including cloud service providers.
We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). **Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). **Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations **Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. **Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
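To make the “safety as a system” guidance above more concrete, below is a minimal sketch of screening a user prompt with a Llama Guard classifier before it ever reaches the vision model. It is a hypothetical wiring, not a prescribed integration: the safeguard checkpoint name, the conversation format accepted by its chat template, and the simple “starts with safe” check are assumptions to adapt to whichever guard model and policy taxonomy you actually deploy.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed safeguard checkpoint; swap in the text-only or vision-capable guard you use.
guard_id = "meta-llama/Llama-Guard-3-8B"

guard_tok = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def prompt_is_safe(user_prompt: str) -> bool:
    """Classify a single user turn before it is forwarded to the main model."""
    chat = [{"role": "user", "content": user_prompt}]
    input_ids = guard_tok.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    out = guard.generate(input_ids, max_new_tokens=20, do_sample=False)
    verdict = guard_tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
    # Guard-style outputs begin with "safe" or "unsafe" plus category codes; treating
    # anything that does not start with "safe" as blocked is a conservative choice.
    return verdict.strip().lower().startswith("safe")

user_request = "If I had to write a haiku for this one, it would be: "
if prompt_is_safe(user_request):
    pass  # forward the request (and image) to the Llama 3.2-Vision pipeline shown earlier
else:
    print("Request blocked by the input safeguard.")
```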
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "image-text-to-text", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "widget": [{"example_title": "rococo art", "messages": [{"role": "user", "content": [{"type": "text", "text": "Which era does this piece belong to? Give details about the era."}, {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg"}}]}, {"role": "assistant", "content": "The piece is a painting of a woman in a red dress, surrounded by gold and white ornate details. The woman is depicted in mid-air, with her arms outstretched and her legs bent at the knees. She is holding a bouquet of flowers in her right hand and a fruit in her left hand.\n\nThe background of the painting is a light blue sky with pink clouds, and there are also some pink flowers and green leaves surrounding the woman. The overall atmosphere of the painting is one of joy and celebration, as if the woman is dancing or celebrating something.\n\nThis piece belongs to the Rococo era, which was a style of art and architecture that emerged in Europe in the 18th century. The Rococo style is characterized by its use of pastel colors, curved lines, and ornate details. It was popularized during the reign of King Louis XV of France, who ruled from 1715 to 1774.\n"}]}, {"example_title": "diagram understanding", "messages": [{"role": "user", "content": [{"type": "text", "text": "Where do the droughts happen according to this diagram?"}, {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/weather_events.png"}}]}, {"role": "assistant", "content": "# Drought Regions in 2016\n\nAccording to the diagram, the regions that experienced severe droughts in 2016 are:\n\n- **Eastern and Southern Africa**: This region was severely affected by droughts, with significant impacts on agriculture and water resources.\n- **Western and Southern Africa**: Droughts also occurred in this region, leading to food insecurity and other challenges.\n\nThese regions were particularly vulnerable to droughts in 2016, highlighting the need for effective water management strategies and climate resilience measures.\n"}]}, {"example_title": "document qa", "messages": [{"role": "user", "content": [{"type": "text", "text": "How long does it take from invoice date to due date? 
Be short and concise."}, {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png"}}]}, {"role": "assistant", "content": "To determine the time gap between the invoice date and the due date, we need to calculate the difference between these two dates.\n\nInvoice Date: February 11, 2019\nDue Date: February 26, 2019\n\nStep 1: Calculate the Difference in Days\n\nFebruary 26 - February 11 = 15 days\n\nTherefore, the time gap between the invoice date and the due date is 15 days.\n"}]}], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. 
This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "extra_gated_eu_disallowed": true}
task
[ "QUESTION_ANSWERING" ]
45,656
facebook/fasttext-li-vectors
facebook
feature-extraction
[ "fasttext", "feature-extraction", "li", "arxiv:1607.04606", "arxiv:1802.06893", "arxiv:1607.01759", "arxiv:1612.03651", "license:cc-by-sa-3.0", "region:us" ]
2023-03-20T19:18:02Z
2023-06-03T22:13:13+00:00
2
0
--- language: li library_name: fasttext license: cc-by-sa-3.0 tags: - feature-extraction widget: - text: apple example_title: apple --- # fastText (Limburgish) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-li-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. 
If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. ```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. 
Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
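Since the card is tagged for feature extraction but only demonstrates word-level lookups, here is a small, hypothetical sketch of turning whole sentences into fixed-size vectors with the same checkpoint via fastText’s `get_sentence_vector`; the example strings are placeholders rather than real Limburgish text, and the downstream use of the embeddings is up to you.

```python
import fasttext
import numpy as np
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="facebook/fasttext-li-vectors", filename="model.bin")
model = fasttext.load_model(model_path)

# get_sentence_vector averages the (normalized) word and subword vectors of a sentence
# into a single 300-dimensional embedding; the input must not contain newline characters.
sentences = ["appel", "bread and butter"]  # placeholder text, substitute Limburgish sentences
embeddings = np.stack([model.get_sentence_vector(s) for s in sentences])
print(embeddings.shape)  # (2, 300)

# The fixed-size vectors can then feed a downstream classifier or a similarity search index.
```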
null
Non_BioNLP
# fastText (Limburgish) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-li-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. 
```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{'e}gou, H{'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. 
Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
{"language": "li", "library_name": "fasttext", "license": "cc-by-sa-3.0", "tags": ["feature-extraction"], "widget": [{"text": "apple", "example_title": "apple"}]}
task
[ "TEXT_CLASSIFICATION" ]
45,657
RogerB/rw-en_translation
RogerB
text2text-generation
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "base_model:Helsinki-NLP/opus-mt-rw-en", "base_model:finetune:Helsinki-NLP/opus-mt-rw-en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-25T16:46:28Z
2023-07-25T17:08:40+00:00
6
0
--- base_model: Helsinki-NLP/opus-mt-rw-en license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: RogerB/rw-en_translation results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RogerB/rw-en_translation This model is a fine-tuned version of [Helsinki-NLP/opus-mt-rw-en](https://huggingface.co/Helsinki-NLP/opus-mt-rw-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0415 - Validation Loss: 1.2440 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.7860 | 1.3850 | 0 | | 1.2410 | 1.2708 | 1 | | 1.0415 | 1.2440 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.0 - Tokenizers 0.13.3
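Since the card above does not yet show inference, here is a minimal usage sketch; it assumes the fine-tuned checkpoint loads with the standard Marian sequence-to-sequence classes in TensorFlow, and the Kinyarwanda input sentence is purely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "RogerB/rw-en_translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Kinyarwanda input; replace with your own text.
inputs = tokenizer("Muraho, amakuru?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```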
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RogerB/rw-en_translation This model is a fine-tuned version of [Helsinki-NLP/opus-mt-rw-en](https://huggingface.co/Helsinki-NLP/opus-mt-rw-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0415 - Validation Loss: 1.2440 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.7860 | 1.3850 | 0 | | 1.2410 | 1.2708 | 1 | | 1.0415 | 1.2440 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.0 - Tokenizers 0.13.3
{"base_model": "Helsinki-NLP/opus-mt-rw-en", "license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "RogerB/rw-en_translation", "results": []}]}
task
[ "TRANSLATION" ]
45,658
ml6team/mbart-large-cc25-cnn-dailymail-nl
ml6team
summarization
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "summarization", "nl", "dataset:ml6team/cnn_dailymail_nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-05-16T11:41:37+00:00
138
6
--- datasets: - ml6team/cnn_dailymail_nl language: - nl pipeline_tag: summarization tags: - mbart - bart - summarization widget: - text: Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat. - text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.' --- # mbart-large-cc25-cnn-dailymail-nl ## Model description Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97) ## Intended uses & limitations It's meant for summarizing Dutch news articles. #### How to use ```python import transformers undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained( "ml6team/mbart-large-cc25-cnn-dailymail-nl" ) tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") summarization_pipeline = transformers.pipeline( task="summarization", model=undisputed_best_model, tokenizer=tokenizer, ) summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[ "nl_XX" ] article = "Kan je dit even samenvatten alsjeblief." # Dutch summarization_pipeline( article, do_sample=True, top_p=0.75, top_k=50, # num_beams=4, min_length=50, early_stopping=True, truncation=True, )[0]["summary_text"] ``` ## Training data Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
null
Non_BioNLP
# mbart-large-cc25-cnn-dailymail-nl ## Model description Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97) ## Intended uses & limitations It's meant for summarizing Dutch news articles. #### How to use ```python import transformers undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained( "ml6team/mbart-large-cc25-cnn-dailymail-nl" ) tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") summarization_pipeline = transformers.pipeline( task="summarization", model=undisputed_best_model, tokenizer=tokenizer, ) summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[ "nl_XX" ] article = "Kan je dit even samenvatten alsjeblief." # Dutch summarization_pipeline( article, do_sample=True, top_p=0.75, top_k=50, # num_beams=4, min_length=50, early_stopping=True, truncation=True, )[0]["summary_text"] ``` ## Training data Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
{"datasets": ["ml6team/cnn_dailymail_nl"], "language": ["nl"], "pipeline_tag": "summarization", "tags": ["mbart", "bart", "summarization"], "widget": [{"text": "Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat."}, {"text": "Volgens De Vries gaat het om \"de hoogste beloning die ooit is uitgeloofd in Nederland\". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. \"Het is dus niet eenmalig\", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. \"Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen\", zei De Vries. \"Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd.\" De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op."}]}
task
[ "SUMMARIZATION" ]
45,659
huangjackson/ct2-opus-mt
huangjackson
translation
[ "translation", "en", "zh", "region:us" ]
2024-03-22T01:51:57Z
2024-03-22T02:14:46+00:00
0
0
--- language: - en - zh pipeline_tag: translation --- OPUS-MT translation models ([en-zh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho) and [zh-en](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng)) converted to CTranslate2 Used in https://github.com/huangjackson/v2vt
null
Non_BioNLP
OPUS-MT translation models ([en-zh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho) and [zh-en](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng)) converted to CTranslate2 Used in https://github.com/huangjackson/v2vt
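For reference, a CTranslate2-converted OPUS-MT model of this kind is usually driven with `ctranslate2.Translator` plus the original Helsinki-NLP tokenizer. The local directory layout below is an assumption (adjust the path to wherever the converted en-zh files are downloaded); the rest follows the standard CTranslate2 API.

```python
import ctranslate2
import transformers

# Assumed local path to the converted en->zh model files from this repository.
model_dir = "./ct2-opus-mt/en-zh"

translator = ctranslate2.Translator(model_dir, device="cpu")
# Tokenization still uses the original OPUS-MT checkpoint.
tokenizer = transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, how are you?"))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```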
{"language": ["en", "zh"], "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
45,660
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task175
Lots-of-LoRAs
null
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:mit", "region:us" ]
2025-01-05T14:29:33Z
2025-01-05T14:29:39+00:00
0
0
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 language: en library_name: pytorch license: mit --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task175 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task175_spl_translation_en_pl - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task175_spl_translation_en_pl sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task175 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task175_spl_translation_en_pl - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task175_spl_translation_en_pl sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
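Since the "How to Get Started" section above is left empty, here is a minimal sketch of how a LoRA adapter for Mistral-7B-Instruct-v0.2 is commonly attached with PEFT. It assumes the adapter files in this repository load directly via `PeftModel.from_pretrained`; the prompt is illustrative and does not reflect the exact task175 instruction format.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task175"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Translate this sentence from English to Polish: The weather is nice today. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```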
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
task
[ "TRANSLATION" ]
45,661
chastelove/distilbert-base-uncased_emotion_ft_0504
chastelove
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-04T04:22:35Z
2023-05-04T04:44:17+00:00
15
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 - precision tags: - generated_from_trainer model-index: - name: distilbert-base-uncased_emotion_ft_0504 results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.935 name: Accuracy - type: f1 value: 0.9353661273711807 name: F1 - type: precision value: 0.9062644261189533 name: Precision --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0504 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1552 - Accuracy: 0.935 - F1: 0.9354 - Precision: 0.9063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.7741 | 1.0 | 250 | 0.2686 | 0.909 | 0.9070 | 0.8911 | | 0.2073 | 2.0 | 500 | 0.1767 | 0.9315 | 0.9319 | 0.9013 | | 0.1397 | 3.0 | 750 | 0.1581 | 0.935 | 0.9353 | 0.9081 | | 0.1123 | 4.0 | 1000 | 0.1552 | 0.935 | 0.9354 | 0.9063 | ### Framework versions - Transformers 4.28.1 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.11.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0504 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1552 - Accuracy: 0.935 - F1: 0.9354 - Precision: 0.9063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.7741 | 1.0 | 250 | 0.2686 | 0.909 | 0.9070 | 0.8911 | | 0.2073 | 2.0 | 500 | 0.1767 | 0.9315 | 0.9319 | 0.9013 | | 0.1397 | 3.0 | 750 | 0.1581 | 0.935 | 0.9353 | 0.9081 | | 0.1123 | 4.0 | 1000 | 0.1552 | 0.935 | 0.9354 | 0.9063 | ### Framework versions - Transformers 4.28.1 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.11.0
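A minimal inference sketch for the emotion classifier described above, using the standard `transformers` text-classification pipeline (the input sentence is illustrative; `top_k=None` returns scores for all emotion labels):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chastelove/distilbert-base-uncased_emotion_ft_0504",
    top_k=None,  # return the score for every emotion label
)

print(classifier("I can't wait to see you this weekend!"))
```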
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1", "precision"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased_emotion_ft_0504", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.935, "name": "Accuracy"}, {"type": "f1", "value": 0.9353661273711807, "name": "F1"}, {"type": "precision", "value": 0.9062644261189533, "name": "Precision"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,662
gaudi/opus-mt-iso-en-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T00:11:45Z
2024-10-18T22:17:59+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. 
## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-iso-en --output_dir ./ctranslate2/opus-mt-iso-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-iso-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-iso-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-iso-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. 
A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-iso-en --output_dir ./ctranslate2/opus-mt-iso-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-iso-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-iso-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-iso-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-iso-en) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
45,664
Helsinki-NLP/opus-mt-de-ee
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ee", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:27:42+00:00
64
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-de-ee * source languages: de * target languages: ee * OPUS readme: [de-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.ee | 24.6 | 0.463 |
null
Non_BioNLP
### opus-mt-de-ee * source languages: de * target languages: ee * OPUS readme: [de-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.ee | 24.6 | 0.463 |
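The card lists only benchmark scores; a minimal usage sketch with the standard MarianMT classes is shown below (the German example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ee"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate German into Ewe.
batch = tokenizer(["Guten Morgen, wie geht es dir?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```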
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
45,665
HenryHHHH/DistilLlamaV1
HenryHHHH
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "knowledge-distillation", "causal-lm", "openwebtext", "wikitext", "transfer-learning", "en", "dataset:openwebtext", "dataset:wikitext", "arxiv:2308.02019", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-01T08:32:31Z
2024-11-06T02:12:13+00:00
50
0
--- base_model: meta-llama/LLaMA-2-7B datasets: - openwebtext - wikitext language: en library_name: transformers license: apache-2.0 metrics: - cosine-similarity - exact-match - rouge model_name: DistilLLaMA tags: - text-generation - knowledge-distillation - llama - causal-lm - openwebtext - wikitext - transfer-learning parameter_count: 80M --- ### Overview This model is a distilled version of LLaMA 2, containing approximately 80 million parameters. It was trained using a mix of OpenWebText and WikiText Raw V1 datasets. Knowledge distillation was employed to transfer knowledge from a larger "teacher" model (Meta’s 7B LLaMA 2) to help this smaller model mimic the behavior of the teacher. This version is the latest version of DistilLlama, which has gone through 5 days of training using two Nvidia A100 80GB GPUs. ### Update 30 out of 300 checkpoints were examined, and the one with the best performance in semantic and factual accuracy has now been updated in this repository. ### Model Architecture The architecture is based on LLaMA 2, with the following parameters: | Parameter | Value | |-------------------------|-------| | Hidden Dimension | 512 | | Intermediate Dimension | 1536 | | Max Positional Embeddings | 128 | | Attention Heads | 8 | | Transformer Layers | 16 | ### Evaluation Metrics 1. **Cosine Similarity using Word Embeddings** - **Description**: Measures semantic similarity by mapping words/phrases to vectors. - **Equation**: Cosine Similarity = ( A • B ) / ( ||A|| ||B|| ) - **Example**: "The dog chased the cat." vs. "A canine pursued a feline." (High similarity) 2. **Exact Match (EM)** - **Description**: Checks if critical keywords are present. - **Example**: - Expected: "Paris" - Response: "The capital of France is Paris." (EM = 1) 3. **ROUGE Score** - **Description**: Measures the overlap of the longest common subsequences between reference and response texts. - **Equation**: - Precision = LCS(R, C) / Length of C - Recall = LCS(R, C) / Length of R ### Model Evaluation Summary | Model Name | Duration (s) | Emissions (kgCO₂e) | Avg. EM | Avg. Cosine Similarity | Avg. ROUGE Score | |-----------------|--------------|--------------------|---------|------------------------|------------------| | LLaMA-2-7B-HF | 18215.61 | 1.84e-01 | 0.715 | 0.7257 | 0.0821 | | baby-llama-58m | 57.20 | 2.73e-06 | 0.025 | 0.6556 | 0.0097 | | DistilLlama | 77.12 | 7.79e-04 | 0.02 | 0.6623 | 0.0115 | | DistilLlamaV1 | 78.46 | 8.49e-04 | 0.065 | 0.6776 | 0.0135 | *Note: CodeCarbon was used to track carbon emissions. Allocated 80GB memory, 32 cores, Intel(R) Xeon(R) Gold 6448H for the evaluation* ### GitHub Repositories - **Training Repo**: [DistilLlama Training Repository](https://github.com/HenryHuang2/DistilLlama) - **Evaluation Repo**: [Knowledge Distillation Evaluation Repository](https://github.com/svarnim1805/Knowledge-Distillation) ### Reference @misc{timiryasov2023babyllamaknowledgedistillation, title={Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty}, author={Inar Timiryasov and Jean-Loup Tastet}, year={2023}, eprint={2308.02019}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2308.02019}, } *Note: The repository will be updated as training progresses. Last update 2024-11-06*
null
Non_BioNLP
### Overview This model is a distilled version of LLaMA 2, containing approximately 80 million parameters. It was trained using a mix of OpenWebText and WikiText Raw V1 datasets. Knowledge distillation was employed to transfer knowledge from a larger "teacher" model (Meta’s 7B LLaMA 2) to help this smaller model mimic the behavior of the teacher. This version is the latest version of DistilLlama, which has gone through 5 days of training using two Nvidia A100 80GB GPUs. ### Update 30 out of 300 checkpoints were examined, and the one with the best performance in semantic and factual accuracy has now been updated in this repository. ### Model Architecture The architecture is based on LLaMA 2, with the following parameters: | Parameter | Value | |-------------------------|-------| | Hidden Dimension | 512 | | Intermediate Dimension | 1536 | | Max Positional Embeddings | 128 | | Attention Heads | 8 | | Transformer Layers | 16 | ### Evaluation Metrics 1. **Cosine Similarity using Word Embeddings** - **Description**: Measures semantic similarity by mapping words/phrases to vectors. - **Equation**: Cosine Similarity = ( A • B ) / ( ||A|| ||B|| ) - **Example**: "The dog chased the cat." vs. "A canine pursued a feline." (High similarity) 2. **Exact Match (EM)** - **Description**: Checks if critical keywords are present. - **Example**: - Expected: "Paris" - Response: "The capital of France is Paris." (EM = 1) 3. **ROUGE Score** - **Description**: Measures the overlap of the longest common subsequences between reference and response texts. - **Equation**: - Precision = LCS(R, C) / Length of C - Recall = LCS(R, C) / Length of R ### Model Evaluation Summary | Model Name | Duration (s) | Emissions (kgCO₂e) | Avg. EM | Avg. Cosine Similarity | Avg. ROUGE Score | |-----------------|--------------|--------------------|---------|------------------------|------------------| | LLaMA-2-7B-HF | 18215.61 | 1.84e-01 | 0.715 | 0.7257 | 0.0821 | | baby-llama-58m | 57.20 | 2.73e-06 | 0.025 | 0.6556 | 0.0097 | | DistilLlama | 77.12 | 7.79e-04 | 0.02 | 0.6623 | 0.0115 | | DistilLlamaV1 | 78.46 | 8.49e-04 | 0.065 | 0.6776 | 0.0135 | *Note: CodeCarbon was used to track carbon emissions. Allocated 80GB memory, 32 cores, Intel(R) Xeon(R) Gold 6448H for the evaluation* ### GitHub Repositories - **Training Repo**: [DistilLlama Training Repository](https://github.com/HenryHuang2/DistilLlama) - **Evaluation Repo**: [Knowledge Distillation Evaluation Repository](https://github.com/svarnim1805/Knowledge-Distillation) ### Reference @misc{timiryasov2023babyllamaknowledgedistillation, title={Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty}, author={Inar Timiryasov and Jean-Loup Tastet}, year={2023}, eprint={2308.02019}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2308.02019}, } *Note: The repository will be updated as training progresses. Last update 2024-11-06*
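A minimal text-generation sketch for this distilled checkpoint, assuming the repository ships a compatible tokenizer alongside the safetensors weights; the prompt and decoding settings are illustrative, and the short context window from the architecture table applies.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HenryHHHH/DistilLlamaV1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Keep prompts short: the model's maximum position embeddings are 128 tokens.
inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```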
{"base_model": "meta-llama/LLaMA-2-7B", "datasets": ["openwebtext", "wikitext"], "language": "en", "library_name": "transformers", "license": "apache-2.0", "metrics": ["cosine-similarity", "exact-match", "rouge"], "model_name": "DistilLLaMA", "tags": ["text-generation", "knowledge-distillation", "llama", "causal-lm", "openwebtext", "wikitext", "transfer-learning"], "parameter_count": "80M"}
task
[ "SEMANTIC_SIMILARITY" ]
45,666
emaadshehzad/setfit-DK-V1
emaadshehzad
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-MiniLM-L12-v1", "base_model:finetune:sentence-transformers/all-MiniLM-L12-v1", "region:us" ]
2023-12-04T13:12:58Z
2024-02-05T04:50:43+00:00
4
0
--- base_model: sentence-transformers/all-MiniLM-L12-v1 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] inference: true --- # SetFit with sentence-transformers/all-MiniLM-L12-v1 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L12-v1](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v1) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L12-v1](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v1) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("emaadshehzad/setfit-DK-V1") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/all-MiniLM-L12-v1 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L12-v1](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v1) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L12-v1](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v1) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens <!-- - **Number of Classes:** Unknown --> <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("emaadshehzad/setfit-DK-V1") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-MiniLM-L12-v1", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
45,667
tranv/mt5-small-finetuned-long-sumeczech
tranv
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-30T18:11:42Z
2023-11-24T03:33:51+00:00
68
0
--- base_model: google/mt5-small license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-long-sumeczech results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-long-sumeczech This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: 'eval_loss': 2.8214199542999268, 'eval_rouge1': 12.8674, 'eval_rouge2': 2.6891, 'eval_rougeL': 10.0662, 'eval_rougeLsum': 11.2368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:------:|:---------:| | 3.5441 | 1.0 | 15064 | 3.0664 | 11.0135 | 1.7163 | 8.2892 | 9.7803 | | 3.4047 | 2.0 | 30128 | 2.9725 | 10.5507 | 1.8535 | 8.0843 | 9.3876 | | 3.2782 | 3.0 | 45192 | 2.9240 | 10.5563 | 1.9566 | 8.1144 | 9.4192 | | 3.2035 | 4.0 | 60256 | 2.8895 | 10.5418 | 2.0105 | 8.132 | 9.3701 | | 3.1538 | 5.0 | 75320 | 2.8712 | 10.6085 | 1.9954 | 8.1587 | 9.4499 | | 3.1197 | 6.0 | 90384 | 2.8562 | 10.6394 | 2.0582 | 8.1855 | 9.4841 | | 3.0976 | 7.0 | 105448 | 2.8439 | 10.7537 | 2.0754 | 8.2822 | 9.5754 | | 3.0849 | 8.0 | 120512 | 2.8402 | 10.751 | 2.1081 | 8.2982 | 9.5911 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-long-sumeczech This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: 'eval_loss': 2.8214199542999268, 'eval_rouge1': 12.8674, 'eval_rouge2': 2.6891, 'eval_rougeL': 10.0662, 'eval_rougeLsum': 11.2368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:------:|:---------:| | 3.5441 | 1.0 | 15064 | 3.0664 | 11.0135 | 1.7163 | 8.2892 | 9.7803 | | 3.4047 | 2.0 | 30128 | 2.9725 | 10.5507 | 1.8535 | 8.0843 | 9.3876 | | 3.2782 | 3.0 | 45192 | 2.9240 | 10.5563 | 1.9566 | 8.1144 | 9.4192 | | 3.2035 | 4.0 | 60256 | 2.8895 | 10.5418 | 2.0105 | 8.132 | 9.3701 | | 3.1538 | 5.0 | 75320 | 2.8712 | 10.6085 | 1.9954 | 8.1587 | 9.4499 | | 3.1197 | 6.0 | 90384 | 2.8562 | 10.6394 | 2.0582 | 8.1855 | 9.4841 | | 3.0976 | 7.0 | 105448 | 2.8439 | 10.7537 | 2.0754 | 8.2822 | 9.5754 | | 3.0849 | 8.0 | 120512 | 2.8402 | 10.751 | 2.1081 | 8.2982 | 9.5911 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
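The card above reports training and evaluation details but no inference snippet. A minimal summarization sketch, assuming the standard `transformers` seq2seq API, is shown below; the model id is taken from the card, while the truncation length and generation settings (beam count, summary length) are illustrative assumptions rather than values documented by the author.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tranv/mt5-small-finetuned-long-sumeczech"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Czech article text to summarize; replace with real input.
article = "Zde vlozte cesky text clanku, ktery chcete shrnout."

# Truncation length and generation settings are assumptions, not documented values.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```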
{"base_model": "google/mt5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-long-sumeczech", "results": []}]}
task
[ "SUMMARIZATION" ]
45,668
Lvxue/distilled-mt5-small-0.2-0.5
Lvxue
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-08-11T00:11:30Z
2022-08-11T01:26:49+00:00
11
0
--- datasets: - wmt16 language: - en - ro license: apache-2.0 metrics: - bleu tags: - generated_from_trainer model-index: - name: distilled-mt5-small-0.2-0.5 results: - task: type: translation name: Translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - type: bleu value: 5.8082 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-0.2-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.0846 - Bleu: 5.8082 - Gen Len: 42.984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-0.2-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.0846 - Bleu: 5.8082 - Gen Len: 42.984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
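The card reports BLEU on wmt16 ro-en but gives no usage example. The sketch below is one plausible way to query the model with `transformers`; the card does not state the translation direction explicitly (English→Romanian is assumed from the language order), and whether a task prefix was used during fine-tuning is also not documented, so the prefix here is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Lvxue/distilled-mt5-small-0.2-0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix mirrors common T5/mT5 translation recipes; it is an assumption,
# as the card does not document the exact input format used in training.
text = "translate English to Romanian: The weather is nice today."

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```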
{"datasets": ["wmt16"], "language": ["en", "ro"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilled-mt5-small-0.2-0.5", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "wmt16 ro-en", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 5.8082, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
45,669
Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit
Agnuxo
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "gguf", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-08-26T22:45:32Z
2024-08-28T10:36:01+00:00
77
0
--- base_model: unsloth/qwen2-1.5b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf - trl - sft --- # Qwen2-1.5B-Instruct MOE Director (16-bit) ## Model Description This is a 16-bit quantized version of the Qwen2-1.5B-Instruct model fine-tuned to act as a "director" within a Multi-Expert Question Answering System (MOE). It's designed to efficiently route incoming questions to specialized expert LLMs in different domains. **Developed by:** Agnuxo **License:** Apache-2.0 **Finetuned from model:** [unsloth/qwen2-1.5b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2-1.5b-instruct-bnb-4bit) This model was trained 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## How the MOE System Works This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality: 1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use. 2. **Expert Routing:** When a user asks a question, the system either: - Uses keyword matching to identify the relevant domain. - Consults the director LLM to classify the question's category. 3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert. 4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer. 5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system. This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM. 
Repository and Additional Information Full Code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py GitHub Repository: https://github.com/Agnuxo1/NEBULA ## Code Example The following code demonstrates the implementation of the Multi-Expert Question Answering System: ```python import os import torch from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline MODEL_CONFIG = { "director": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit", "task": "text-generation", }, "programming": { "name": "Qwen/Qwen2-1.5B-Instruct", "task": "text-generation", }, "biology": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit", "task": "text-generation", }, "mathematics": { "name": "Qwen/Qwen2-Math-1.5B-Instruct", "task": "text-generation", } } KEYWORDS = { "biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"], "mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"], "programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"] } class MOELLM: def __init__(self): self.current_expert = None self.current_model = None self.current_tokenizer = None self.device = "cuda" if torch.cuda.is_available() else "cpu" print(f"Using device: {self.device}") self.load_director_model() def load_director_model(self): """Loads the director model.""" print("Loading director model...") model_name = MODEL_CONFIG["director"]["name"] self.director_tokenizer = AutoTokenizer.from_pretrained(model_name) self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device) self.director_pipeline = pipeline( MODEL_CONFIG["director"]["task"], model=self.director_model, tokenizer=self.director_tokenizer, device=self.device ) print("Director model loaded.") def load_expert_model(self, expert): """Dynamically loads an expert model, releasing memory from the previous model.""" if expert not in MODEL_CONFIG: raise ValueError(f"Unknown expert: {expert}") if self.current_expert != expert: print(f"Loading expert model: {expert}...") # Free memory from the current model if it exists if self.current_model: del self.current_model del self.current_tokenizer torch.cuda.empty_cache() model_config = MODEL_CONFIG[expert] self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"]) self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device) self.current_expert = expert print(f"{expert.capitalize()} model loaded.") return pipeline( MODEL_CONFIG[expert]["task"], model=self.current_model, tokenizer=self.current_tokenizer, device=self.device ) def determine_expert_by_keywords(self, question): """Determines the expert based on keywords in the question.""" question_lower = question.lower() for expert, keywords in KEYWORDS.items(): if any(keyword in question_lower for keyword in keywords): return expert return None def 
determine_expert(self, question): """Determines which expert should answer the question.""" expert = self.determine_expert_by_keywords(question) if expert: print(f"Expert determined by keyword: {expert}") return expert prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:" response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text'] expert = response.split(":")[-1].strip().lower() if expert not in MODEL_CONFIG: expert = "director" print(f"Redirecting question to: {expert}") return expert def generate_response(self, question, expert): """Generates a response using the appropriate model.""" try: model = self.load_expert_model(expert) prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:" response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text'] return response.split("Answer:")[-1].strip() except Exception as e: print(f"Error generating response: {str(e)}") return "Sorry, there was an error processing your request. Please try again." def chat_interface(self): """Simple chat interface.""" print("Welcome to the MOE-LLM chat. Type 'exit' to quit.") while True: question = input("\nYou: ") if question.lower() in ['exit', 'quit']: break try: expert = self.determine_expert(question) response = self.generate_response(question, expert) print(f"\n{expert.capitalize()}: {response}") except Exception as e: print(f"Error in chat: {str(e)}") print("Please try asking another question.") if __name__ == "__main__": moe_llm = MOELLM() moe_llm.chat_interface()
null
Non_BioNLP
# Qwen2-1.5B-Instruct MOE Director (16-bit) ## Model Description This is a 16-bit quantized version of the Qwen2-1.5B-Instruct model fine-tuned to act as a "director" within a Multi-Expert Question Answering System (MOE). It's designed to efficiently route incoming questions to specialized expert LLMs in different domains. **Developed by:** Agnuxo **License:** Apache-2.0 **Finetuned from model:** [unsloth/qwen2-1.5b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2-1.5b-instruct-bnb-4bit) This model was trained 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## How the MOE System Works This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality: 1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use. 2. **Expert Routing:** When a user asks a question, the system either: - Uses keyword matching to identify the relevant domain. - Consults the director LLM to classify the question's category. 3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert. 4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer. 5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system. This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM. Repository and Additional Information Full Code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py GitHub Repository: https://github.com/Agnuxo1/NEBULA ## Code Example The following code demonstrates the implementation of the Multi-Expert Question Answering System: ```python import os import torch from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline MODEL_CONFIG = { "director": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit", "task": "text-generation", }, "programming": { "name": "Qwen/Qwen2-1.5B-Instruct", "task": "text-generation", }, "biology": { "name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit", "task": "text-generation", }, "mathematics": { "name": "Qwen/Qwen2-Math-1.5B-Instruct", "task": "text-generation", } } KEYWORDS = { "biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"], "mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"], "programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"] } class MOELLM: def __init__(self): self.current_expert = None self.current_model = None self.current_tokenizer = None self.device = "cuda" if torch.cuda.is_available() 
else "cpu" print(f"Using device: {self.device}") self.load_director_model() def load_director_model(self): """Loads the director model.""" print("Loading director model...") model_name = MODEL_CONFIG["director"]["name"] self.director_tokenizer = AutoTokenizer.from_pretrained(model_name) self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device) self.director_pipeline = pipeline( MODEL_CONFIG["director"]["task"], model=self.director_model, tokenizer=self.director_tokenizer, device=self.device ) print("Director model loaded.") def load_expert_model(self, expert): """Dynamically loads an expert model, releasing memory from the previous model.""" if expert not in MODEL_CONFIG: raise ValueError(f"Unknown expert: {expert}") if self.current_expert != expert: print(f"Loading expert model: {expert}...") # Free memory from the current model if it exists if self.current_model: del self.current_model del self.current_tokenizer torch.cuda.empty_cache() model_config = MODEL_CONFIG[expert] self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"]) self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device) self.current_expert = expert print(f"{expert.capitalize()} model loaded.") return pipeline( MODEL_CONFIG[expert]["task"], model=self.current_model, tokenizer=self.current_tokenizer, device=self.device ) def determine_expert_by_keywords(self, question): """Determines the expert based on keywords in the question.""" question_lower = question.lower() for expert, keywords in KEYWORDS.items(): if any(keyword in question_lower for keyword in keywords): return expert return None def determine_expert(self, question): """Determines which expert should answer the question.""" expert = self.determine_expert_by_keywords(question) if expert: print(f"Expert determined by keyword: {expert}") return expert prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:" response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text'] expert = response.split(":")[-1].strip().lower() if expert not in MODEL_CONFIG: expert = "director" print(f"Redirecting question to: {expert}") return expert def generate_response(self, question, expert): """Generates a response using the appropriate model.""" try: model = self.load_expert_model(expert) prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:" response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text'] return response.split("Answer:")[-1].strip() except Exception as e: print(f"Error generating response: {str(e)}") return "Sorry, there was an error processing your request. Please try again." def chat_interface(self): """Simple chat interface.""" print("Welcome to the MOE-LLM chat. Type 'exit' to quit.") while True: question = input("\nYou: ") if question.lower() in ['exit', 'quit']: break try: expert = self.determine_expert(question) response = self.generate_response(question, expert) print(f"\n{expert.capitalize()}: {response}") except Exception as e: print(f"Error in chat: {str(e)}") print("Please try asking another question.") if __name__ == "__main__": moe_llm = MOELLM() moe_llm.chat_interface()
{"base_model": "unsloth/qwen2-1.5b-instruct-bnb-4bit", "language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "qwen2", "gguf", "trl", "sft"]}
task
[ "QUESTION_ANSWERING" ]
45,670
eligabel/finetuning-sentiment-model-3000-samples
eligabel
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-10-03T18:02:03Z
2022-10-03T18:12:07+00:00
9
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - type: accuracy value: 0.8166666666666667 name: Accuracy - type: f1 value: 0.8307692307692307 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6069 - Accuracy: 0.8167 - F1: 0.8308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6069 - Accuracy: 0.8167 - F1: 0.8308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
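Since the card documents only hyperparameters and metrics, a minimal inference sketch is added here; the model id comes from the card, and the sample sentence is illustrative. Note that the card does not publish an id2label mapping, so predictions may surface as generic label names.

```python
from transformers import pipeline

# Sentiment classification with the fine-tuned IMDB checkpoint from the card.
classifier = pipeline(
    "text-classification",
    model="eligabel/finetuning-sentiment-model-3000-samples",
)

print(classifier("This movie was surprisingly good."))
# Without an id2label mapping in the card, outputs may appear as LABEL_0 / LABEL_1.
```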
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8166666666666667, "name": "Accuracy"}, {"type": "f1", "value": 0.8307692307692307, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,671
gokuls/hBERTv2_new_no_pretrain_stsb
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-29T12:09:44Z
2023-06-15T03:17:28+00:00
9
0
--- datasets: - glue language: - en metrics: - spearmanr tags: - generated_from_trainer model-index: - name: hBERTv2_new_no_pretrain_stsb results: - task: type: text-classification name: Text Classification dataset: name: GLUE STSB type: glue config: stsb split: validation args: stsb metrics: - type: spearmanr value: 0.20926356415783265 name: Spearmanr --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_new_no_pretrain_stsb This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2174 - Pearson: 0.1946 - Spearmanr: 0.2093 - Combined Score: 0.2019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.3893 | 1.0 | 45 | 2.3698 | 0.1204 | 0.1138 | 0.1171 | | 1.9589 | 2.0 | 90 | 2.2174 | 0.1946 | 0.2093 | 0.2019 | | 1.6743 | 3.0 | 135 | 2.3481 | 0.2144 | 0.2207 | 0.2175 | | 1.4068 | 4.0 | 180 | 2.5921 | 0.2472 | 0.2519 | 0.2496 | | 1.2205 | 5.0 | 225 | 2.6279 | 0.2718 | 0.2701 | 0.2709 | | 0.9353 | 6.0 | 270 | 2.5440 | 0.3117 | 0.3213 | 0.3165 | | 0.7662 | 7.0 | 315 | 2.3053 | 0.3501 | 0.3519 | 0.3510 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_new_no_pretrain_stsb This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2174 - Pearson: 0.1946 - Spearmanr: 0.2093 - Combined Score: 0.2019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.3893 | 1.0 | 45 | 2.3698 | 0.1204 | 0.1138 | 0.1171 | | 1.9589 | 2.0 | 90 | 2.2174 | 0.1946 | 0.2093 | 0.2019 | | 1.6743 | 3.0 | 135 | 2.3481 | 0.2144 | 0.2207 | 0.2175 | | 1.4068 | 4.0 | 180 | 2.5921 | 0.2472 | 0.2519 | 0.2496 | | 1.2205 | 5.0 | 225 | 2.6279 | 0.2718 | 0.2701 | 0.2709 | | 0.9353 | 6.0 | 270 | 2.5440 | 0.3117 | 0.3213 | 0.3165 | | 0.7662 | 7.0 | 315 | 2.3053 | 0.3501 | 0.3519 | 0.3510 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["glue"], "language": ["en"], "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv2_new_no_pretrain_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "config": "stsb", "split": "validation", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.20926356415783265, "name": "Spearmanr"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,673
transformersbook/xlm-roberta-base-finetuned-panx-it
transformersbook
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-02-05T17:07:26+00:00
145
0
--- datasets: - xtreme license: mit metrics: - f1 tags: - generated_from_trainer model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: type: token-classification name: Token Classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - type: f1 value: 0.8215158924205379 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2445 - F1: 0.8215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 | | 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 | | 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.2445 - F1: 0.8215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 | | 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 | | 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
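A usage sketch for Italian named entity recognition with this checkpoint might look like the following; the model id is taken from the card, while the aggregation strategy and the sample sentence are illustrative choices rather than documented settings.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Il fiume Po attraversa Torino, in Italia."))
```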
{"datasets": ["xtreme"], "license": "mit", "metrics": ["f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-it", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.it"}, "metrics": [{"type": "f1", "value": 0.8215158924205379, "name": "F1"}]}]}]}
task
[ "NAMED_ENTITY_RECOGNITION" ]
45,674
grammarly/spivavtor-xxl
grammarly
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "uk", "dataset:grammarly/spivavtor", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-29T22:45:30Z
2025-02-05T22:37:15+00:00
46
4
--- datasets: - grammarly/spivavtor language: - uk license: cc-by-4.0 widget: - text: 'Перефразуйте речення: Який найкращий комплiмент, який ти отримував вiд будь-кого?' example_title: Paraphrasing example - text: 'Спростiть речення: Там він помер через шість тижнів, 13 січня 888 року.' example_title: Simplification example - text: 'Виправте граматику в цьому реченнi: Дякую за інформацію! ми з Надією саме вийшли з дому' example_title: GEC example - text: 'Виправте зв’язнiсть в реченнi: Лінч досі відмовляється розповідати про сумнозвісну травневу дорожньо-транспортну пригоду, коли він збив жінку-пішохода в районі нічного клубу Баффало та відлетів. Той факт, що Лінч взагалі говорив, заслуговує на увагу в цих краях.' example_title: Coherence example --- # Model Card for Spivavtor-xxl This model was obtained by instruction tuning the `CohereForAI/aya-101` model on the Spivavtor dataset. All details of the dataset and fine-tuning process can be found in our paper. **Paper:** Spivavtor: An Instruction Tuned Ukrainian Text Editing Model **Authors:** Aman Saini, Artem Chernodub, Vipul Raheja, Vivek Kulkarni ## Model Details ### Model Description - **Language**: Ukrainian - **Finetuned from model:** CohereForAI/aya-101 ## How to use We make the following models available from our paper. <table> <tr> <th>Model</th> <th>Number of parameters</th> <th>Reference name in Paper</th> </tr> <tr> <td>Spivavtor-large</td> <td>1.2B</td> <td>SPIVAVTOR-MT0-LARGE</td> </tr> <tr> <td>Spivavtor-xxl</td> <td>13B</td> <td>SPIVAVTOR-AYA-101</td> </tr> </table> ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("grammarly/spivavtor-xxl") model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/spivavtor-xxl") # Paraphrase the sentence: What is the greatest compliment that you ever received from anyone? input_text = 'Перефразуйте речення: Який найкращий комплімент, який ти отримував від будь-кого?' inputs = tokenizer.encode(input_text, return_tensors="pt") output = model.generate(inputs, max_length=256) output_text = tokenizer.decode(output[0], skip_special_tokens=True) ```
null
Non_BioNLP
# Model Card for Spivavtor-xxl This model was obtained by instruction tuning the `CohereForAI/aya-101` model on the Spivavtor dataset. All details of the dataset and fine-tuning process can be found in our paper. **Paper:** Spivavtor: An Instruction Tuned Ukrainian Text Editing Model **Authors:** Aman Saini, Artem Chernodub, Vipul Raheja, Vivek Kulkarni ## Model Details ### Model Description - **Language**: Ukrainian - **Finetuned from model:** CohereForAI/aya-101 ## How to use We make the following models available from our paper. <table> <tr> <th>Model</th> <th>Number of parameters</th> <th>Reference name in Paper</th> </tr> <tr> <td>Spivavtor-large</td> <td>1.2B</td> <td>SPIVAVTOR-MT0-LARGE</td> </tr> <tr> <td>Spivavtor-xxl</td> <td>13B</td> <td>SPIVAVTOR-AYA-101</td> </tr> </table> ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("grammarly/spivavtor-xxl") model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/spivavtor-xxl") # Paraphrase the sentence: What is the greatest compliment that you ever received from anyone? input_text = 'Перефразуйте речення: Який найкращий комплімент, який ти отримував від будь-кого?' inputs = tokenizer.encode(input_text, return_tensors="pt") output = model.generate(inputs, max_length=256) output_text = tokenizer.decode(output[0], skip_special_tokens=True) ```
{"datasets": ["grammarly/spivavtor"], "language": ["uk"], "license": "cc-by-4.0", "widget": [{"text": "Перефразуйте речення: Який найкращий комплiмент, який ти отримував вiд будь-кого?", "example_title": "Paraphrasing example"}, {"text": "Спростiть речення: Там він помер через шість тижнів, 13 січня 888 року.", "example_title": "Simplification example"}, {"text": "Виправте граматику в цьому реченнi: Дякую за інформацію! ми з Надією саме вийшли з дому", "example_title": "GEC example"}, {"text": "Виправте зв’язнiсть в реченнi: Лінч досі відмовляється розповідати про сумнозвісну травневу дорожньо-транспортну пригоду, коли він збив жінку-пішохода в районі нічного клубу Баффало та відлетів. Той факт, що Лінч взагалі говорив, заслуговує на увагу в цих краях.", "example_title": "Coherence example"}]}
task
[ "PARAPHRASING" ]
45,675
Helsinki-NLP/opus-mt-tzo-es
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tzo", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:07:58+00:00
39
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-tzo-es * source languages: tzo * target languages: es * OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tzo.es | 20.8 | 0.381 |
null
Non_BioNLP
### opus-mt-tzo-es * source languages: tzo * target languages: es * OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tzo.es | 20.8 | 0.381 |
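The card lists only training metadata and benchmarks; a minimal `transformers` sketch for Tzotzil→Spanish translation could look like the one below. The model name comes from the card, the input string is a placeholder rather than verified Tzotzil, and the generation call uses default settings.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tzo-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Replace with real Tzotzil source sentences; this string is only a placeholder.
src_texts = ["(Tzotzil source sentence here)"]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```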
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
45,676
DmitriyVasiliev/autotrain-mbart-rua-sent-55454129221
DmitriyVasiliev
summarization
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain", "summarization", "unk", "dataset:DmitriyVasiliev/autotrain-data-mbart-rua-sent", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-04T16:36:04Z
2023-05-04T16:52:45+00:00
17
0
--- datasets: - DmitriyVasiliev/autotrain-data-mbart-rua-sent language: - unk tags: - autotrain - summarization widget: - text: I love AutoTrain 🤗 co2_eq_emissions: emissions: 0.032307287679585996 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55454129221 - CO2 Emissions (in grams): 0.0323 ## Validation Metrics - Loss: 0.958 - Rouge1: 8.528 - Rouge2: 2.583 - RougeL: 8.489 - RougeLsum: 8.606 - Gen Len: 21.020 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-sent-55454129221 ```
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55454129221 - CO2 Emissions (in grams): 0.0323 ## Validation Metrics - Loss: 0.958 - Rouge1: 8.528 - Rouge2: 2.583 - RougeL: 8.489 - RougeLsum: 8.606 - Gen Len: 21.020 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-sent-55454129221 ```
{"datasets": ["DmitriyVasiliev/autotrain-data-mbart-rua-sent"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.032307287679585996}}
task
[ "SUMMARIZATION" ]
45,677
HPLT/translate-en-sw-v1.0-hplt
HPLT
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "sw", "license:cc-by-4.0", "autotrain_compatible", "region:us" ]
2024-02-27T11:42:40Z
2024-03-14T00:52:41+00:00
19
1
--- language: - en - sw license: cc-by-4.0 tags: - translation inference: false --- ### HPLT MT release v1.0 This repository contains the translation model for English-Swahili trained with HPLT data only. The model is available in both Marian and Hugging Face formats. ### Model Info * Source language: English * Target language: Swahili * Data: HPLT data only * Model architecture: Transformer-base * Tokenizer: SentencePiece (Unigram) * Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-sw/raw/v0). You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details. ### Usage **Note** that for quality considerations, we recommend using [HPLT/translate-en-sw-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-en-sw-v1.0-hplt_opus) instead of this model. The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`. #### Using Marian To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sw.spm` from this repository. #### Using transformers We have also converted this model to the Hugging Face format and you can get started with the script below. **Note** that due a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint cannot work with transformer versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`. ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-en-sw-v1.0-hplt") model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-en-sw-v1.0-hplt") inputs = ["Input goes here.", "Make sure the language is right."] batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True) model_output = model.generate( **batch_tokenized, num_beams=6, max_new_tokens=512 ) batch_detokenized = tokenizer.batch_decode( model_output, skip_special_tokens=True, ) print(batch_detokenized) ``` ## Benchmarks When decoded using Marian, the model has the following test scores. | Test set | BLEU | chrF++ | COMET22 | | -------------------------------------- | ---- | ----- | ----- | | FLORES200 | 28.4 | 54.6 | 0.7743 | | NTREX | 30.5 | 55.2 | 0.7572 | ### Acknowledgements This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546] Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
null
Non_BioNLP
### HPLT MT release v1.0 This repository contains the translation model for English-Swahili trained with HPLT data only. The model is available in both Marian and Hugging Face formats. ### Model Info * Source language: English * Target language: Swahili * Data: HPLT data only * Model architecture: Transformer-base * Tokenizer: SentencePiece (Unigram) * Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-sw/raw/v0). You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details. ### Usage **Note** that for quality considerations, we recommend using [HPLT/translate-en-sw-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-en-sw-v1.0-hplt_opus) instead of this model. The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`. #### Using Marian To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sw.spm` from this repository. #### Using transformers We have also converted this model to the Hugging Face format and you can get started with the script below. **Note** that due a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint cannot work with transformer versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`. ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-en-sw-v1.0-hplt") model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-en-sw-v1.0-hplt") inputs = ["Input goes here.", "Make sure the language is right."] batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True) model_output = model.generate( **batch_tokenized, num_beams=6, max_new_tokens=512 ) batch_detokenized = tokenizer.batch_decode( model_output, skip_special_tokens=True, ) print(batch_detokenized) ``` ## Benchmarks When decoded using Marian, the model has the following test scores. | Test set | BLEU | chrF++ | COMET22 | | -------------------------------------- | ---- | ----- | ----- | | FLORES200 | 28.4 | 54.6 | 0.7743 | | NTREX | 30.5 | 55.2 | 0.7572 | ### Acknowledgements This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546] Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
{"language": ["en", "sw"], "license": "cc-by-4.0", "tags": ["translation"], "inference": false}
task
[ "TRANSLATION" ]
45,678
lizchu414/mpnet-base-all-pittsburgh-squad
lizchu414
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5166", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-23T16:09:43Z
2024-10-23T16:09:55+00:00
5
0
--- base_model: sentence-transformers/all-mpnet-base-v2 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5166 - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Question: Who is the dungeon master in the Knights of the Arcade comedy show, and how are the destinations and battles decided during the performance?' sentences: - 'Event Name: Knights of the Arcade: Epic D&D Adventure Categories: Entertainment, Nightlife Dates: Jun 29, 2024 - Jun 29, 2024 | 9:00 pm - 10:30 pm Location: Arcade Comedy Theater, 943 Liberty Ave, Pittsburgh, PA 15222 Description: “Best Nerd Fantasy Come to Life” by Pittsburgh Magazine“A neo-geek wet dream” – Pittsburgh City PaperA comedy quest awaits! Knights of the Arcade is an award-winning comedy show that takes audiences on a wild, madcap adventure every month. A recurring cast of characters (a dwarf, a monk, a rogue, a sorcerer and a fighter) are joined by special guests and led by their maniacal dungeon master. Where they’re going, who they fight, and if they ultimately succeed is decided upon live dice that are rolled and projected on the theater wall.' - The Pirates are also often referred to as the "Bucs" or the "Buccos" (derived from buccaneer, a synonym for pirate). Since 2001 the team has played its home games at PNC Park, a 39,000-seat stadium along the Allegheny River in Pittsburgh's North Side. The Pirates previously played at Forbes Field from 1909 to 1970 and at Three Rivers Stadium from 1970 to 2000. Since 1948 the Pirates' colors have been black, gold and white, derived from the flag of Pittsburgh and matching the other major professional sports teams in Pittsburgh, the Steelers and the Penguins.The Pittsburgh Pirates are an American professional baseball team based in Pittsburgh. The Pirates compete in Major League Baseball (MLB) as a member club of the National League (NL) Central Division. Founded as part of the American Association in 1881 under the name Pittsburgh Alleghenys, the club joined the National League in 1887 and was a member of the National League East from 1969 through 1993. 
The Pirates have won five World - "STEELERS IN THE POSTSEASON (36-30)\nYear Record Game Date Opponent Attendance\ \ Steelers Opponent Result\n2015 10-6 AFC Wild Card Game 01/09/2016 at Cincinnati\ \ 63,257 18 16 W\nAFC Divisional Playoff 01/17/2016 at Denver 79,956 16 23 L\n\ 2016# 11-5 AFC Wild Card Game 01/08/2017 Miami 66,726 30 12 W\nAFC Divisional\ \ Playoff 01/15/2017 at Kansas City 75,678 18 16 W\nAFC Championship Game 01/22/2017\ \ at New England 66,829 36 17 L\n2017# 13-3 AFC Divisional Playoff 01/14/2018\ \ Jacksonville 64,524 42 45 L\n2020# 12-4 AFC Wild Card Game 01/03/2021 Cleveland\ \ - 37 48 L\n2021 9-7-1 AFC Wild Card Game 01/16/2022 at Kansas City 73,253 21\ \ 42 L\n2023 10-7 AFC Wild Card Game 01/15/202 4 at Buffalo 70,040 17 31 L\n*AFC\ \ Central Champion\n#AFC North Champion\n+AFC ChampionSTEELERS IN THE POSTSEASON\n\ \ 2023 PITTSBURGH STEELERS\n 421\n STEELERS IN THE POSTSEASON" - source_sentence: 'Question: What is the Local Services Tax and how is it collected?' sentences: - the 1916 Centennial of Pittsburgh's 1816 incorporation as a City. At the March 1916 dedication ceremony, Mayor Joseph Armstrong placed a time capsule into the still under construction building. Two and a half years later in December 1917, he would become the first Mayor to call the City-County Building a second home. The missing time capsule has yet to be discovered. - 'The first City Hall at Market Square. The second City Hall on Smithfield Street. Mayor David Lawrence strikes the first blow for the demolition of the second City Hall.' - "EXEMPT P ERSON – a person who files an exemption certificate with his employer\ \ affirming \nthat he reasonably expects to receive earned income and net profits\ \ from all sources within the \nCity of less than twelve thousand dollars ($12,000)\ \ in the calendar year for wh ich the exemption \ncertificate is filed. See Section\ \ 301(h) below, and Section 2 of the Local Tax Enabling Act, 53 P.S. § \n6924.301.1,\ \ for other exemptions. \nINCOME – all earned income and net profits from whatever\ \ source derived, including but not \nlimited to salaries, wages, bonuses, commissions\ \ and income from self -employment earned in \nPittsburgh. \nLOCAL SERVICES TAX\ \ (LST) – a tax on individuals for the privilege of engaging in an \noccupation.\ \ The Local Services Tax may be levied, assessed and collected by the political\ \ \nsubdivision of the taxpayer’s primary place of employment. \nOCCUPATION –\ \ any livelihood, job, trade, profession, business or enterprise of any kind for" - source_sentence: '"What is the nature of the incident being investigated by Zone Five Officers in Homewood on April 23, 2024?"' sentences: - 'Event Name: Saturday Night Improv @ BGC! Date: Saturdays, 7:30-9:30 p.m. Location: BGC Community Activity Center: 113 N. Pacific Ave., Pittsburgh | Garfield Price Information: GET TICKETS: 10 Categories: Comedy, Theater Description: It''s time to Love, Laugh and Enjoy. Join us at the BGC Activity Center Saturday evenings for an evening of improv with performances by Narsh and Penny Pressed! Shows start promptly at 7:30 PM so don''t be late! 412-441-6950 Event Name: Swing City Date: Saturdays, 8 p.m. Location: Wightman School: 5604 Solway, Pittsburgh | Squirrel Hill Categories: Other Stuff Description: Learn & practice swing dancing skills w/ the Jim Adler Band. 
412-759-1569' - 'Police Investigate Stabbing Incident in Beltzhoover - 04.23.2024 Zone Five Officers Investigate Homewood Shooting Incident - 04.23.2024 Violent Crimes Division VCU Detectives Make Firearms Arrest in Spring Garden - 04.19.2024 UPDATE: Detectives Seek Assistance in Search for Missing 12-Year-Old Girl - 04.19.2024 UPDATE: Police Investigate Aggravated Assault on Riverwalk in Point State Park - 04.19.2024 Police Investigate Homicide Inside Larimer Residence - 04.19.2024 UPDATE: Police Seek the Public''s Help in Locating Missing Juvenile Male - 04.19.2024 UPDATE: Pittsburgh Police Ask for Public''s Help to Find Missing Woman - 04.15.2024 Police Investigate Shooting Incident in Allegheny Center - 04.13.2024 UPDATE: Pittsburgh Public Safety Responds to Barge Emergency on Ohio River - 04.12.2024 Police Make Ethnic Intimidation and Criminal Mischief Arrest in Squirrel Hill - 04.12.2024 UPDATE: Police Seek the Public''s Assistance in Locating Missing Boy - 04.11.2024' - "24\n \n$ (Millions)Select Major Expenditures, 2018-2022\n2018 2019 2020\n2021\ \ 2022Health Insurance\nWorkers' CompensationPension and OPEBDebt Service050,000,000100,000,000150,000,000\n\ Health Insurance\nThese expenditures are categorized within the Personnel – Employment\ \ Benefits subclass. Prior to 2016 these \nexpenditures were budgeted centrally\ \ in the Department of Human Resources and Civil Service. Except for retiree \n\ health insurance, these expenditures are budgeted across all divisions based on\ \ staffing levels and plan \nelections.\n Health Insurance\n52101 Health Insurance\n\ 52111 Other Insurance and Benefits\n52121 Retiree Health Insurance\nWorkers’\ \ Compensation\nThese expenditures are categorized within the Personnel – Employment\ \ Benefits subclass. Most medical, \nindemnity, and fees are budgeted across divisions\ \ with outstanding claims. Legal and settlement expenses \nremain budgeted in\ \ the Department of Human Resources and Civil Service with accounts organized\ \ as follows:" - source_sentence: 'Answer: The passage does not provide information about the longest reception for the Steelers in the Wild Card Game against Cincinnati.' sentences: - '09/08 Lions RESERVE/LEAGUE SUSP. T 27-27 + 09/15 at Ravens RESERVE/LEAGUE SUSP. L 17-23 09/22 Panthers RESERVE/LEAGUE SUSP. L 20-38 09/29 Seahawks RESERVE/LEAGUE SUSP. L 10-27 10/06 at Bengals RESERVE/LEAGUE SUSP. W 26-23 10/13 Falcons RESERVE/LEAGUE SUSP. 
W 34-33 10/20 at Giants S 7701.0 13.0 0 0 1 0 0 0 0 0 1 0 0 0 0000 000 W 27-21 10/27 at Saints S 6510.0 0.0 0 0 0 1 0 0 0 1 0 0 0 0 0000 000 L 9-31 10/31 49ers S 3210.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 25-28 11/10 at Buccaneers S 3300.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 27-30 11/17 at 49ers S 4400.0 0.0 0 0 0 0 0 0 0 1 0 0 0 0 0000 000 L 26-36 12/01 Rams S 8530.0 0.0 1 10 0 0 0 0 0 0 0 0 0 0 0000 000 L 7-34 12/08 Steelers S 5410.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 17-23 12/15 Browns S 7700.0 0.0 0 0 0 1 0 0 0 3 0 0 0 0 0000 000 W 38-24 12/22 at Seahawks S 3300.0 0.0 1 18 0 0 0 0 0 0 0 0 0 0 0000 000 W 27-13 12/29 at Rams S 7610.0 0.0 1 1 0 0 0 0 0 2 0 0 0 0 0000 000 L 24-31' - "Program \n• Clinical field education to emergency medicine physician residents\ \ in the University of Pittsburgh \nEmergency Medicine Residency program \n \n\ 2023 Accomplishments\n \n• Financial Accomplishments:\n◦ Income from transports\ \ increased by $1.8M from same time period last year\n◦ Bureau slated to bring\ \ in an additional $5M in revenue for 2023\n• Personnel Accomplishments:\n◦ 6\ \ new River Rescue Divers went through intensive training and all successfully\ \ completed the \nclass\n◦ Increase in promotions to upper administration\n• Employee\ \ Safety Initiatives: \n◦ Implementation of Cordico App for employee wellness\n\ ◦ Access control security system installed in all EMS facilities \n• Equipment\ \ Initiatives:\n◦ Bureau was approved to receive state of the art mannequins to\ \ simulate real life patients during \nemergencies\n◦ Billing company to purchase\ \ equipment/medication dispensary machines to be located in 5 areas" - "Pittsburgh 31\nCincinnati 17\nCINCINNATI — Pittsburgh scored 24 unanswered points\ \ to turn a 17-7 deficit into a \n31-17 victory over Cincinnati in the AFC Wild\ \ Card Game at Paul Brown Stadium. \nThe Pittsburgh offense compiled 346 total\ \ yards led by QB Ben Roethlisberger, who \ntossed three touchdowns and finished\ \ with a QB rating of 148.7. RB Jerome Bettis ran for 52 \nyards on 10 carries\ \ (5.2 avg.) and one touchdown. WR Cedrick Wilson caught three passes \nfor 104\ \ yards (34.7 avg.), with one touchdown. \nThe Steelers defense recorded four\ \ sacks and two interceptions while holding the \nBengals to just 84 yards rushing.\ \ \nCincinnati was dealt an early blow when starting QB Carson Palmer suffered\ \ a torn \nACL on the first offensive play of the game. The Bengals jumped out\ \ to a 10-0 lead with a \n23-yard field goal by K Shayne Graham and a 20-yard\ \ touchdown run by RB Rudi Johnson.\nPittsburgh got on the board when RB Willie\ \ Parker took a screen pass 19 yards for a" - source_sentence: '"What cultural celebration will be honored at the 2024 Greater Pittsburgh Lunar New Year Gala, and what is the significance of this event in the community?"' sentences: - 'This page informs City of Pittsburgh residents about the city''s Snow Angels program. This page is also where volunteers can sign up, and recipients can submit a request. City Collection Equity Audit The City of Pittsburgh is conducting an audit to identify inequity and bias in the City’s collection of public art and memorials. Davis Avenue Bridge Design and construction for the new Davis Avenue Bridge between Brighton Heights and Riverview Park. South Side Park Public Art A new public art project is being planned in South Side Park. This is being done in coordination with the park’s Phase 1 renovations and funded by the Percent For Art. 
Projects that are no longer accepting feedback, but are now in the construction or development phase. PHAD Projects Current Projects – find out about ongoing projects underway throughout the city and learn how to apply for new projects each year. Emerald View Phase I Trails & Trailheads' - of Pittsburgh and greater southwestern Pennsylvania. Justin is employed within the Cultural Resources practice of Michael Baker International. He is Director Emeritus of Preservation Pittsburgh and a past president of the East Liberty Valley Historical Society. Justin is a graduate of the University of Pittsburgh (B.A. Architectural Studies, 2008) and Columbia University (M.S. Historic Preservation, 2010).Todd Wilson, MBA, PE, is an award-winning transportation engineer, named one of Pittsburgh Business Times’ 20 Engineers to Know in 2022. He has co-authored two books on Pittsburgh’s bridges,Images of America Pittsburgh’s Bridges and Engineering Pittsburgh a History of Roads, Rails, Canals, Bridges, and More.An engineering graduate of Carnegie Mellon, Todd has extensive knowledge on bridges, having photographed them in all 50 states and 25 countries, and he has presented at many conferences. Check out his Pittsburgh bridge photography on Instagram @pghbridges.TOUR STARTS/ENDS:Gateway - 'Event Name: 2024 Greater Pittsburgh Lunar New Year Gala Categories: Arts + Culture, Community, Holidays, Nightlife Dates: Feb 3, 2024 - Feb 3, 2024 | 4:00 pm - 9:00 pm Location: PNC Theater, 350 Forbes Avenue, Pittsburgh, PA 15222' model-index: - name: MPNet base trained on synthetic Pittsburgh data results: - task: type: information-retrieval name: Information Retrieval dataset: name: pittsburgh type: pittsburgh metrics: - type: cosine_accuracy@1 value: 0.7375145180023229 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9037940379403794 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9368950832365467 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9628339140534262 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7375145180023229 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.30126467931345985 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1873790166473093 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09628339140534262 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7375145180023229 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9037940379403794 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9368950832365467 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9628339140534262 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8590408201907759 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.824762258110111 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8263189855192845 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.7375145180023229 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.9037940379403794 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9368950832365467 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9628339140534262 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.7375145180023229 name: Dot Precision@1 - type: dot_precision@3 value: 0.30126467931345985 name: Dot Precision@3 - type: dot_precision@5 value: 0.1873790166473093 name: Dot Precision@5 - type: dot_precision@10 value: 0.09628339140534262 name: Dot Precision@10 - type: dot_recall@1 value: 0.7375145180023229 name: Dot Recall@1 - type: dot_recall@3 value: 0.9037940379403794 name: Dot Recall@3 - type: 
dot_recall@5 value: 0.9368950832365467 name: Dot Recall@5 - type: dot_recall@10 value: 0.9628339140534262 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8590408201907759 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.824762258110111 name: Dot Mrr@10 - type: dot_map@100 value: 0.8263189855192845 name: Dot Map@100 --- # MPNet base trained on synthetic Pittsburgh data This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision f1b1b820e405bb8644f5e8d9a3b98f9c9e0a3c58 --> - **Maximum Sequence Length:** 384 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("lizchu414/mpnet-base-all-pittsburgh-squad") # Run inference sentences = [ '"What cultural celebration will be honored at the 2024 Greater Pittsburgh Lunar New Year Gala, and what is the significance of this event in the community?"', 'Event Name: 2024 Greater Pittsburgh Lunar New Year Gala\nCategories: Arts + Culture, Community, Holidays, Nightlife\nDates: Feb 3, 2024 - Feb 3, 2024 | 4:00 pm - 9:00 pm\nLocation: PNC Theater, 350 Forbes Avenue, Pittsburgh, PA 15222', "This page informs City of Pittsburgh residents about the city's Snow Angels program. This page is also where volunteers can sign up, and recipients can submit a request.\nCity Collection Equity Audit\nThe City of Pittsburgh is conducting an audit to identify inequity and bias in the City’s collection of public art and memorials.\nDavis Avenue Bridge\nDesign and construction for the new Davis Avenue Bridge between Brighton Heights and Riverview Park.\nSouth Side Park Public Art\nA new public art project is being planned in South Side Park. 
This is being done in coordination with the park’s Phase 1 renovations and funded by the Percent For Art.\nProjects that are no longer accepting feedback, but are now in the construction or development phase.\nPHAD Projects\nCurrent Projects – find out about ongoing projects underway throughout the city and learn how to apply for new projects each year.\nEmerald View Phase I Trails & Trailheads", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `pittsburgh` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7375 | | cosine_accuracy@3 | 0.9038 | | cosine_accuracy@5 | 0.9369 | | cosine_accuracy@10 | 0.9628 | | cosine_precision@1 | 0.7375 | | cosine_precision@3 | 0.3013 | | cosine_precision@5 | 0.1874 | | cosine_precision@10 | 0.0963 | | cosine_recall@1 | 0.7375 | | cosine_recall@3 | 0.9038 | | cosine_recall@5 | 0.9369 | | cosine_recall@10 | 0.9628 | | cosine_ndcg@10 | 0.859 | | cosine_mrr@10 | 0.8248 | | cosine_map@100 | 0.8263 | | dot_accuracy@1 | 0.7375 | | dot_accuracy@3 | 0.9038 | | dot_accuracy@5 | 0.9369 | | dot_accuracy@10 | 0.9628 | | dot_precision@1 | 0.7375 | | dot_precision@3 | 0.3013 | | dot_precision@5 | 0.1874 | | dot_precision@10 | 0.0963 | | dot_recall@1 | 0.7375 | | dot_recall@3 | 0.9038 | | dot_recall@5 | 0.9369 | | dot_recall@10 | 0.9628 | | dot_ndcg@10 | 0.859 | | dot_mrr@10 | 0.8248 | | **dot_map@100** | **0.8263** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_eval_batch_size`: 2 - `eval_accumulation_steps`: 1 - `learning_rate`: 2e-05 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 2 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: 1 - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False 
- `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | pittsburgh_dot_map@100 | |:-----:|:----:|:-------------:|:---------------:|:----------------------:| | 0 | 0 | - | - | 0.5984 | | 0.8 | 100 | 0.587 | 0.1954 | 0.7780 | | 1.592 | 200 | 0.1828 | 0.1805 | 0.8020 | | 2.384 | 300 | 0.2224 | 0.1605 | 0.8263 | ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.2.0 - Transformers: 4.45.2 - PyTorch: 2.2.2+cu121 - Accelerate: 1.0.1 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
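The "Direct Usage (Transformers)" section of the card above is left as an empty placeholder. As a hedged sketch (not the card author's code), the same embeddings can be reproduced with plain 🤗 Transformers by applying the mean pooling and L2 normalization declared in the model architecture; the repository id and the 384-token limit are taken from the card itself.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "lizchu414/mpnet-base-all-pittsburgh-squad"  # repo id shown in the card's usage example

def mean_pooling(last_hidden_state, attention_mask):
    # Average the token embeddings while ignoring padding, matching the card's Pooling settings.
    mask = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

sentences = [
    "What is the Local Services Tax and how is it collected?",
    "LOCAL SERVICES TAX (LST) - a tax on individuals for the privilege of engaging in an occupation.",
]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=384, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoded)

embeddings = F.normalize(mean_pooling(outputs.last_hidden_state, encoded["attention_mask"]), p=2, dim=1)
print(embeddings.shape)               # torch.Size([2, 768])
print(embeddings[0] @ embeddings[1])  # cosine similarity; the vectors are already unit-normalized
```

This mirrors what `SentenceTransformer.encode` does internally for this architecture (Transformer → mean pooling → Normalize), so scores should match the Sentence Transformers snippet up to numerical precision.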
null
Non_BioNLP
# MPNet base trained on synthetic Pittsburgh data This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision f1b1b820e405bb8644f5e8d9a3b98f9c9e0a3c58 --> - **Maximum Sequence Length:** 384 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("lizchu414/mpnet-base-all-pittsburgh-squad") # Run inference sentences = [ '"What cultural celebration will be honored at the 2024 Greater Pittsburgh Lunar New Year Gala, and what is the significance of this event in the community?"', 'Event Name: 2024 Greater Pittsburgh Lunar New Year Gala\nCategories: Arts + Culture, Community, Holidays, Nightlife\nDates: Feb 3, 2024 - Feb 3, 2024 | 4:00 pm - 9:00 pm\nLocation: PNC Theater, 350 Forbes Avenue, Pittsburgh, PA 15222', "This page informs City of Pittsburgh residents about the city's Snow Angels program. This page is also where volunteers can sign up, and recipients can submit a request.\nCity Collection Equity Audit\nThe City of Pittsburgh is conducting an audit to identify inequity and bias in the City’s collection of public art and memorials.\nDavis Avenue Bridge\nDesign and construction for the new Davis Avenue Bridge between Brighton Heights and Riverview Park.\nSouth Side Park Public Art\nA new public art project is being planned in South Side Park. 
This is being done in coordination with the park’s Phase 1 renovations and funded by the Percent For Art.\nProjects that are no longer accepting feedback, but are now in the construction or development phase.\nPHAD Projects\nCurrent Projects – find out about ongoing projects underway throughout the city and learn how to apply for new projects each year.\nEmerald View Phase I Trails & Trailheads", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `pittsburgh` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7375 | | cosine_accuracy@3 | 0.9038 | | cosine_accuracy@5 | 0.9369 | | cosine_accuracy@10 | 0.9628 | | cosine_precision@1 | 0.7375 | | cosine_precision@3 | 0.3013 | | cosine_precision@5 | 0.1874 | | cosine_precision@10 | 0.0963 | | cosine_recall@1 | 0.7375 | | cosine_recall@3 | 0.9038 | | cosine_recall@5 | 0.9369 | | cosine_recall@10 | 0.9628 | | cosine_ndcg@10 | 0.859 | | cosine_mrr@10 | 0.8248 | | cosine_map@100 | 0.8263 | | dot_accuracy@1 | 0.7375 | | dot_accuracy@3 | 0.9038 | | dot_accuracy@5 | 0.9369 | | dot_accuracy@10 | 0.9628 | | dot_precision@1 | 0.7375 | | dot_precision@3 | 0.3013 | | dot_precision@5 | 0.1874 | | dot_precision@10 | 0.0963 | | dot_recall@1 | 0.7375 | | dot_recall@3 | 0.9038 | | dot_recall@5 | 0.9369 | | dot_recall@10 | 0.9628 | | dot_ndcg@10 | 0.859 | | dot_mrr@10 | 0.8248 | | **dot_map@100** | **0.8263** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_eval_batch_size`: 2 - `eval_accumulation_steps`: 1 - `learning_rate`: 2e-05 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 2 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: 1 - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False 
- `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | pittsburgh_dot_map@100 | |:-----:|:----:|:-------------:|:---------------:|:----------------------:| | 0 | 0 | - | - | 0.5984 | | 0.8 | 100 | 0.587 | 0.1954 | 0.7780 | | 1.592 | 200 | 0.1828 | 0.1805 | 0.8020 | | 2.384 | 300 | 0.2224 | 0.1605 | 0.8263 | ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.2.0 - Transformers: 4.45.2 - PyTorch: 2.2.2+cu121 - Accelerate: 1.0.1 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
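To make the training section above concrete: the listed hyperparameters and the MultipleNegativesRankingLoss referenced in the card's tags and citations map onto the Sentence Transformers v3 training API roughly as sketched here. This is an illustration, not the author's original script — the two (anchor, positive) pairs stand in for the undocumented 5,166 training pairs, and the output directory name is hypothetical.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # the card's base model

# Placeholder (anchor, positive) pairs; the real training data is not published with the card.
train_dataset = Dataset.from_dict({
    "anchor": [
        "What is the Local Services Tax and how is it collected?",
        "When is the 2024 Greater Pittsburgh Lunar New Year Gala?",
    ],
    "positive": [
        "LOCAL SERVICES TAX (LST) - a tax on individuals for the privilege of engaging in an occupation.",
        "Event Name: 2024 Greater Pittsburgh Lunar New Year Gala | Feb 3, 2024 | PNC Theater, Pittsburgh, PA",
    ],
})

loss = MultipleNegativesRankingLoss(model)  # in-batch negatives, as cited in the card

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-base-pittsburgh-repro",   # hypothetical name
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,                                  # as in the card; assumes a CUDA GPU
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # matches `batch_sampler: no_duplicates`
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```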
{"base_model": "sentence-transformers/all-mpnet-base-v2", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5166", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Question: Who is the dungeon master in the Knights of the Arcade comedy show, and how are the destinations and battles decided during the performance?", "sentences": ["Event Name: Knights of the Arcade: Epic D&D Adventure\nCategories: Entertainment, Nightlife\nDates: Jun 29, 2024 - Jun 29, 2024 | 9:00 pm - 10:30 pm\nLocation: Arcade Comedy Theater, 943 Liberty Ave, Pittsburgh, PA 15222\nDescription: “Best Nerd Fantasy Come to Life” by Pittsburgh Magazine“A neo-geek wet dream” – Pittsburgh City PaperA comedy quest awaits! Knights of the Arcade is an award-winning comedy show that takes audiences on a wild, madcap adventure every month. A recurring cast of characters (a dwarf, a monk, a rogue, a sorcerer and a fighter) are joined by special guests and led by their maniacal dungeon master. Where they’re going, who they fight, and if they ultimately succeed is decided upon live dice that are rolled and projected on the theater wall.", "The Pirates are also often referred to as the \"Bucs\" or the \"Buccos\" (derived from buccaneer, a synonym for pirate). Since 2001 the team has played its home games at PNC Park, a 39,000-seat stadium along the Allegheny River in Pittsburgh's North Side. The Pirates previously played at Forbes Field from 1909 to 1970 and at Three Rivers Stadium from 1970 to 2000. Since 1948 the Pirates' colors have been black, gold and white, derived from the flag of Pittsburgh and matching the other major professional sports teams in Pittsburgh, the Steelers and the Penguins.The Pittsburgh Pirates are an American professional baseball team based in Pittsburgh. The Pirates compete in Major League Baseball (MLB) as a member club of the National League (NL) Central Division. Founded as part of the American Association in 1881 under the name Pittsburgh Alleghenys, the club joined the National League in 1887 and was a member of the National League East from 1969 through 1993. 
The Pirates have won five World", "STEELERS IN THE POSTSEASON (36-30)\nYear Record Game Date Opponent Attendance Steelers Opponent Result\n2015 10-6 AFC Wild Card Game 01/09/2016 at Cincinnati 63,257 18 16 W\nAFC Divisional Playoff 01/17/2016 at Denver 79,956 16 23 L\n2016# 11-5 AFC Wild Card Game 01/08/2017 Miami 66,726 30 12 W\nAFC Divisional Playoff 01/15/2017 at Kansas City 75,678 18 16 W\nAFC Championship Game 01/22/2017 at New England 66,829 36 17 L\n2017# 13-3 AFC Divisional Playoff 01/14/2018 Jacksonville 64,524 42 45 L\n2020# 12-4 AFC Wild Card Game 01/03/2021 Cleveland - 37 48 L\n2021 9-7-1 AFC Wild Card Game 01/16/2022 at Kansas City 73,253 21 42 L\n2023 10-7 AFC Wild Card Game 01/15/202 4 at Buffalo 70,040 17 31 L\n*AFC Central Champion\n#AFC North Champion\n+AFC ChampionSTEELERS IN THE POSTSEASON\n 2023 PITTSBURGH STEELERS\n 421\n STEELERS IN THE POSTSEASON"]}, {"source_sentence": "Question: What is the Local Services Tax and how is it collected?", "sentences": ["the 1916 Centennial of Pittsburgh's 1816 incorporation as a City. At the March 1916 dedication ceremony, Mayor Joseph Armstrong placed a time capsule into the still under construction building. Two and a half years later in December 1917, he would become the first Mayor to call the City-County Building a second home. The missing time capsule has yet to be discovered.", "The first City Hall at Market Square.\nThe second City Hall on Smithfield Street.\nMayor David Lawrence strikes the first blow for the demolition of the second City Hall.", "EXEMPT P ERSON – a person who files an exemption certificate with his employer affirming \nthat he reasonably expects to receive earned income and net profits from all sources within the \nCity of less than twelve thousand dollars ($12,000) in the calendar year for wh ich the exemption \ncertificate is filed. See Section 301(h) below, and Section 2 of the Local Tax Enabling Act, 53 P.S. § \n6924.301.1, for other exemptions. \nINCOME – all earned income and net profits from whatever source derived, including but not \nlimited to salaries, wages, bonuses, commissions and income from self -employment earned in \nPittsburgh. \nLOCAL SERVICES TAX (LST) – a tax on individuals for the privilege of engaging in an \noccupation. The Local Services Tax may be levied, assessed and collected by the political \nsubdivision of the taxpayer’s primary place of employment. \nOCCUPATION – any livelihood, job, trade, profession, business or enterprise of any kind for"]}, {"source_sentence": "\"What is the nature of the incident being investigated by Zone Five Officers in Homewood on April 23, 2024?\"", "sentences": ["Event Name: Saturday Night Improv @ BGC!\nDate: Saturdays, 7:30-9:30 p.m.\nLocation: BGC Community Activity Center: 113 N. Pacific Ave., Pittsburgh | Garfield\nPrice Information: GET TICKETS: 10\nCategories: Comedy, Theater\nDescription: It's time to Love, Laugh and Enjoy. Join us at the BGC Activity Center Saturday evenings for an evening of improv with performances by Narsh and Penny Pressed! Shows start promptly at 7:30 PM so don't be late! 412-441-6950\n\nEvent Name: Swing City\nDate: Saturdays, 8 p.m.\nLocation: Wightman School: 5604 Solway, Pittsburgh | Squirrel Hill\nCategories: Other Stuff\nDescription: Learn & practice swing dancing skills w/ the Jim Adler Band. 
412-759-1569", "Police Investigate Stabbing Incident in Beltzhoover - 04.23.2024\nZone Five Officers Investigate Homewood Shooting Incident - 04.23.2024\nViolent Crimes Division VCU Detectives Make Firearms Arrest in Spring Garden - 04.19.2024\nUPDATE: Detectives Seek Assistance in Search for Missing 12-Year-Old Girl - 04.19.2024\nUPDATE: Police Investigate Aggravated Assault on Riverwalk in Point State Park - 04.19.2024\nPolice Investigate Homicide Inside Larimer Residence - 04.19.2024\nUPDATE: Police Seek the Public's Help in Locating Missing Juvenile Male - 04.19.2024\nUPDATE: Pittsburgh Police Ask for Public's Help to Find Missing Woman - 04.15.2024\nPolice Investigate Shooting Incident in Allegheny Center - 04.13.2024\nUPDATE: Pittsburgh Public Safety Responds to Barge Emergency on Ohio River - 04.12.2024\nPolice Make Ethnic Intimidation and Criminal Mischief Arrest in Squirrel Hill - 04.12.2024\nUPDATE: Police Seek the Public's Assistance in Locating Missing Boy - 04.11.2024", "24\n \n$ (Millions)Select Major Expenditures, 2018-2022\n2018 2019 2020\n2021 2022Health Insurance\nWorkers' CompensationPension and OPEBDebt Service050,000,000100,000,000150,000,000\nHealth Insurance\nThese expenditures are categorized within the Personnel – Employment Benefits subclass. Prior to 2016 these \nexpenditures were budgeted centrally in the Department of Human Resources and Civil Service. Except for retiree \nhealth insurance, these expenditures are budgeted across all divisions based on staffing levels and plan \nelections.\n Health Insurance\n52101 Health Insurance\n52111 Other Insurance and Benefits\n52121 Retiree Health Insurance\nWorkers’ Compensation\nThese expenditures are categorized within the Personnel – Employment Benefits subclass. Most medical, \nindemnity, and fees are budgeted across divisions with outstanding claims. Legal and settlement expenses \nremain budgeted in the Department of Human Resources and Civil Service with accounts organized as follows:"]}, {"source_sentence": "Answer: The passage does not provide information about the longest reception for the Steelers in the Wild Card Game against Cincinnati.", "sentences": ["09/08 Lions RESERVE/LEAGUE SUSP. T 27-27 +\n09/15 at Ravens RESERVE/LEAGUE SUSP. L 17-23\n09/22 Panthers RESERVE/LEAGUE SUSP. L 20-38\n09/29 Seahawks RESERVE/LEAGUE SUSP. L 10-27\n10/06 at Bengals RESERVE/LEAGUE SUSP. W 26-23\n10/13 Falcons RESERVE/LEAGUE SUSP. 
W 34-33\n10/20 at Giants S 7701.0 13.0 0 0 1 0 0 0 0 0 1 0 0 0 0000 000 W 27-21\n10/27 at Saints S 6510.0 0.0 0 0 0 1 0 0 0 1 0 0 0 0 0000 000 L 9-31\n10/31 49ers S 3210.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 25-28\n11/10 at Buccaneers S 3300.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 27-30\n11/17 at 49ers S 4400.0 0.0 0 0 0 0 0 0 0 1 0 0 0 0 0000 000 L 26-36\n12/01 Rams S 8530.0 0.0 1 10 0 0 0 0 0 0 0 0 0 0 0000 000 L 7-34\n12/08 Steelers S 5410.0 0.0 0 0 0 0 0 0 0 0 0 0 0 0 0000 000 L 17-23\n12/15 Browns S 7700.0 0.0 0 0 0 1 0 0 0 3 0 0 0 0 0000 000 W 38-24\n12/22 at Seahawks S 3300.0 0.0 1 18 0 0 0 0 0 0 0 0 0 0 0000 000 W 27-13\n12/29 at Rams S 7610.0 0.0 1 1 0 0 0 0 0 2 0 0 0 0 0000 000 L 24-31", "Program \n• Clinical field education to emergency medicine physician residents in the University of Pittsburgh \nEmergency Medicine Residency program \n \n2023 Accomplishments\n \n• Financial Accomplishments:\n◦ Income from transports increased by $1.8M from same time period last year\n◦ Bureau slated to bring in an additional $5M in revenue for 2023\n• Personnel Accomplishments:\n◦ 6 new River Rescue Divers went through intensive training and all successfully completed the \nclass\n◦ Increase in promotions to upper administration\n• Employee Safety Initiatives: \n◦ Implementation of Cordico App for employee wellness\n◦ Access control security system installed in all EMS facilities \n• Equipment Initiatives:\n◦ Bureau was approved to receive state of the art mannequins to simulate real life patients during \nemergencies\n◦ Billing company to purchase equipment/medication dispensary machines to be located in 5 areas", "Pittsburgh 31\nCincinnati 17\nCINCINNATI — Pittsburgh scored 24 unanswered points to turn a 17-7 deficit into a \n31-17 victory over Cincinnati in the AFC Wild Card Game at Paul Brown Stadium. \nThe Pittsburgh offense compiled 346 total yards led by QB Ben Roethlisberger, who \ntossed three touchdowns and finished with a QB rating of 148.7. RB Jerome Bettis ran for 52 \nyards on 10 carries (5.2 avg.) and one touchdown. WR Cedrick Wilson caught three passes \nfor 104 yards (34.7 avg.), with one touchdown. \nThe Steelers defense recorded four sacks and two interceptions while holding the \nBengals to just 84 yards rushing. \nCincinnati was dealt an early blow when starting QB Carson Palmer suffered a torn \nACL on the first offensive play of the game. The Bengals jumped out to a 10-0 lead with a \n23-yard field goal by K Shayne Graham and a 20-yard touchdown run by RB Rudi Johnson.\nPittsburgh got on the board when RB Willie Parker took a screen pass 19 yards for a"]}, {"source_sentence": "\"What cultural celebration will be honored at the 2024 Greater Pittsburgh Lunar New Year Gala, and what is the significance of this event in the community?\"", "sentences": ["This page informs City of Pittsburgh residents about the city's Snow Angels program. This page is also where volunteers can sign up, and recipients can submit a request.\nCity Collection Equity Audit\nThe City of Pittsburgh is conducting an audit to identify inequity and bias in the City’s collection of public art and memorials.\nDavis Avenue Bridge\nDesign and construction for the new Davis Avenue Bridge between Brighton Heights and Riverview Park.\nSouth Side Park Public Art\nA new public art project is being planned in South Side Park. 
This is being done in coordination with the park’s Phase 1 renovations and funded by the Percent For Art.\nProjects that are no longer accepting feedback, but are now in the construction or development phase.\nPHAD Projects\nCurrent Projects – find out about ongoing projects underway throughout the city and learn how to apply for new projects each year.\nEmerald View Phase I Trails & Trailheads", "of Pittsburgh and greater southwestern Pennsylvania. Justin is employed within the Cultural Resources practice of Michael Baker International. He is Director Emeritus of Preservation Pittsburgh and a past president of the East Liberty Valley Historical Society. Justin is a graduate of the University of Pittsburgh (B.A. Architectural Studies, 2008) and Columbia University (M.S. Historic Preservation, 2010).Todd Wilson, MBA, PE, is an award-winning transportation engineer, named one of Pittsburgh Business Times’ 20 Engineers to Know in 2022. He has co-authored two books on Pittsburgh’s bridges,Images of America Pittsburgh’s Bridges and Engineering Pittsburgh a History of Roads, Rails, Canals, Bridges, and More.An engineering graduate of Carnegie Mellon, Todd has extensive knowledge on bridges, having photographed them in all 50 states and 25 countries, and he has presented at many conferences. Check out his Pittsburgh bridge photography on Instagram @pghbridges.TOUR STARTS/ENDS:Gateway", "Event Name: 2024 Greater Pittsburgh Lunar New Year Gala\nCategories: Arts + Culture, Community, Holidays, Nightlife\nDates: Feb 3, 2024 - Feb 3, 2024 | 4:00 pm - 9:00 pm\nLocation: PNC Theater, 350 Forbes Avenue, Pittsburgh, PA 15222"]}], "model-index": [{"name": "MPNet base trained on synthetic Pittsburgh data", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "pittsburgh", "type": "pittsburgh"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7375145180023229, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9037940379403794, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9368950832365467, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9628339140534262, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7375145180023229, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.30126467931345985, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1873790166473093, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09628339140534262, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7375145180023229, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9037940379403794, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9368950832365467, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9628339140534262, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8590408201907759, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.824762258110111, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.8263189855192845, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.7375145180023229, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.9037940379403794, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.9368950832365467, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 0.9628339140534262, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", 
"value": 0.7375145180023229, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.30126467931345985, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.1873790166473093, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.09628339140534262, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.7375145180023229, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.9037940379403794, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.9368950832365467, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 0.9628339140534262, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.8590408201907759, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.824762258110111, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.8263189855192845, "name": "Dot Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
45,679
marklicata/M365_h2_Text_Processing_and_Summarization
marklicata
text-classification
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:marklicata/M365_h1_base", "base_model:finetune:marklicata/M365_h1_base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-07T18:00:52Z
2024-11-07T19:45:57+00:00
6
0
--- base_model: marklicata/M365_h1_base library_name: transformers license: apache-2.0 tags: - generated_from_trainer model-index: - name: M365_h2_Text_Processing_and_Summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # M365_h2_Text_Processing_and_Summarization This model is a fine-tuned version of [marklicata/M365_h1_base](https://huggingface.co/marklicata/M365_h1_base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0469 | 1.0 | 1771 | 0.0380 | | 0.0109 | 2.0 | 3542 | 0.0268 | | 0.0029 | 3.0 | 5313 | 0.0282 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
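The hyperparameter list above translates almost directly into a 🤗 Transformers `TrainingArguments` configuration. The sketch below is an illustration rather than the author's script: the label scheme and the per-epoch evaluation setting are assumptions, and the datasets are omitted, since the card leaves the training data undocumented ("More information needed").

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "marklicata/M365_h1_base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)  # label count inherited from the base checkpoint

args = TrainingArguments(
    output_dir="M365_h2_Text_Processing_and_Summarization",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
    eval_strategy="epoch",  # assumed: the card reports one validation loss per epoch
)

# The tokenized train/eval datasets are not published with the card, so the Trainer call
# is left as a commented outline:
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, tokenizer=tokenizer)
# trainer.train()
```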
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # M365_h2_Text_Processing_and_Summarization This model is a fine-tuned version of [marklicata/M365_h1_base](https://huggingface.co/marklicata/M365_h1_base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0469 | 1.0 | 1771 | 0.0380 | | 0.0109 | 2.0 | 3542 | 0.0268 | | 0.0029 | 3.0 | 5313 | 0.0282 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
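Because the "Intended uses & limitations" section is still a placeholder, here is a minimal, hedged inference sketch using the standard text-classification pipeline. The example input and the meaning of the returned label are assumptions — the card does not document the label scheme.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="marklicata/M365_h2_Text_Processing_and_Summarization",
)

result = classifier("Summarize the attached meeting notes and send the key action items to the team.")
print(result)
# e.g. [{'label': 'LABEL_0', 'score': 0.99}] -- label ids/names are not documented in the card
```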
{"base_model": "marklicata/M365_h1_base", "library_name": "transformers", "license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "M365_h2_Text_Processing_and_Summarization", "results": []}]}
task
[ "SUMMARIZATION" ]
45,681
unsloth/gemma-3-12b-pt-unsloth-bnb-4bit
unsloth
image-text-to-text
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "unsloth", "gemma", "google", "en", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "base_model:google/gemma-3-12b-pt", "base_model:quantized:google/gemma-3-12b-pt", "license:gemma", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2025-03-13T14:04:46Z
2025-04-11T03:08:02+00:00
7,391
0
--- base_model: google/gemma-3-12b-pt language: - en library_name: transformers license: gemma tags: - unsloth - transformers - gemma3 - gemma - google --- <div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Unsloth's <a href="https://unsloth.ai/blog/deepseekr1-dynamic">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1> </div> - Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)! - Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3) - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks). - Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less | | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | <br> # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. 
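As a practical aside before the upstream details: this repository is a pre-quantized 4-bit build, so it can be loaded directly for inference or LoRA fine-tuning. The sketch below follows the pattern in Unsloth's published notebooks; the entry point (`FastModel`) and keyword names are assumptions that may vary across Unsloth versions, so treat the notebooks linked above as authoritative.

```python
# Hedged sketch based on Unsloth's Gemma 3 notebooks; API details may differ by version.
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3-12b-pt-unsloth-bnb-4bit",  # this repository
    max_seq_length=2048,   # well below the model's 128K context; raise as VRAM allows
    load_in_4bit=True,     # the checkpoint ships with Unsloth's dynamic 4-bit quantization
)

# For parameter-efficient fine-tuning, the notebooks then attach LoRA adapters
# (FastModel.get_peft_model) before building the trainer; see the notebooks above for details.
```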
### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context of 8192 tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://goo.gle/Gemma3Report}, publisher={Kaggle}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and 1B with 2 trillion tokens. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). 
Training vision-language models (VLMs) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
- These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."*

## Evaluation

Model evaluation metrics and results.
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: #### Reasoning and factuality | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 #### STEM and code | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 #### Multilingual | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 #### Multimodal | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | 
[COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was they included only English language prompts. 
## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications. - Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. 
- A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
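As a convenience, the sketch below shows one way to run quick multimodal inference with a Gemma 3 checkpoint through the Transformers `image-text-to-text` pipeline. It is a minimal sketch rather than the officially documented recipe for this exact repo: the pipeline task name, the `<start_of_image>` placeholder, the `google/gemma-3-12b-pt` model id, and the example image URL are assumptions to verify against the upstream Gemma 3 documentation.

```python
# Minimal inference sketch (assumptions: a recent transformers release with
# Gemma 3 support and the "image-text-to-text" pipeline task; the image URL
# and prompt below are placeholders for illustration only).
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-12b-pt",   # or the corresponding checkpoint from this repo
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

output = pipe(
    "https://example.com/some_image.jpg",            # placeholder image URL
    text="<start_of_image> in this image, there is",  # assumed image token for pretrained variants
    max_new_tokens=50,
)
print(output)
```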
null
Non_BioNLP
<div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Unsloth's <a href="https://unsloth.ai/blog/deepseekr1-dynamic">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1> </div> - Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)! - Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3) - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks). - Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less | | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | <br> # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. 
Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context of 8192 tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://goo.gle/Gemma3Report}, publisher={Kaggle}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and 1B with 2 trillion tokens. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. 
TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. - These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: #### Reasoning and factuality | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 #### STEM and code | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 #### Multilingual | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 #### Multimodal | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | 
[COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was they included only English language prompts. 
## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications. - Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. 
- A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: - **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. - **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. - **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. - **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [g3-tech-report]: https://goo.gle/Gemma3Report [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3 [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3 [terms]: https://ai.google.dev/gemma/terms [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/jax-ml/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
{"base_model": "google/gemma-3-12b-pt", "language": ["en"], "library_name": "transformers", "license": "gemma", "tags": ["unsloth", "transformers", "gemma3", "gemma", "google"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
45,682
zhufy/xquad-th-mbert-base
zhufy
question-answering
[ "transformers", "pytorch", "bert", "question-answering", "bert-base", "dataset:xquad.th", "endpoints_compatible", "region:us" ]
2022-03-11T02:51:48Z
2022-04-23T05:07:59+00:00
118
0
--- datasets: xquad.th language: Thai tags: - bert-base task: extractive question answering --- # Model Description This model is for Thai extractive question answering. It is based on the multilingual BERT [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model, and it is case-sensitive: it makes a difference between english and English # Training data We split the original [xquad](https://github.com/deepmind/xquad) dataset into the training/validation/testing set. Totally, there are 876/161/153 question-answer pairs from 34/7/7 articles in the training/validation/testing set separately. You can find the details of the dataset here [xquad_split](https://huggingface.co/datasets/zhufy/xquad_split). # How to use You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline: ``` python >>> from transformers.pipelines import pipeline >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("zhufy/xquad-th-mbert-base") >>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/xquad-th-mbert-base") >>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer) >>> context = "ดินดอนสามเหลี่ยม ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น ใกล้มิลลิงเงิน อาน เดอ เรน ใกล้ชายแดนเนเธอร์แลนด์ติดกับเยอรมัน โดยมีสาขาของไรน์ไหลเข้าสู่แม่น้ำวาลและเนเดอร์เรน เนื่องจากน้ำส่วนใหญ่จากแม่น้ำไรน์ คำว่า ดินดอนสามเหลี่ยมไรน์ ซึ่งสั้นกว่าจึงเป็นคำที่ใช้เรียกกันทั่วไป อย่างไรก็ดี ชื่อนี้ยังใช้เรียกดินดอนสามเหลี่ยมบริเวณแม่น้ำซึ่งแม่น้ำไรน์ไหลเข้าสู่ทะเลสาบคอนสแตนซ์อีกด้วย ดังนั้นการเรียกดินดอนสามเหลี่ยมซึ่งใหญ่กว่าว่าไรน์-เมิส หรือแม้กระทั่งดินแดนสามเหลี่ยมไรน์ -เมิส-สเกลต์จึงชัดเจนกว่า เนื่องจากแม่น้ำสเกลต์สิ้นสุดที่ดินดอนสามเหลี่ยมเดียวกัน" >>> question = "ดินดอนสามเหลี่ยมในเนเธอร์แลนด์มีชื่อว่าอะไร?" >>> inputs = {"question": question, "context":context } >>> nlp(inputs) {'score': 0.9426798224449158, 'start': 17, 'end': 84, 'answer': 'ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น'} ```
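If you would rather skip the `pipeline` wrapper, the same model can be queried with a plain forward pass. The snippet below is a sketch of the standard extractive-QA decoding pattern (argmax over the start and end logits); the placeholder `question` and `context` strings stand in for real Thai inputs such as the ones in the example above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("zhufy/xquad-th-mbert-base")
model = AutoModelForQuestionAnswering.from_pretrained("zhufy/xquad-th-mbert-base")
model.eval()

question = "..."  # a Thai question, e.g. the one from the pipeline example
context = "..."   # the Thai passage to extract the answer from

# Encode the (question, context) pair; mBERT is limited to 512 tokens.
inputs = tokenizer(question, context, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1], skip_special_tokens=True)
print(answer)
```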
null
Non_BioNLP
# Model Description This model is for Thai extractive question answering. It is based on the multilingual BERT [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model, and it is case-sensitive: it makes a difference between english and English # Training data We split the original [xquad](https://github.com/deepmind/xquad) dataset into the training/validation/testing set. Totally, there are 876/161/153 question-answer pairs from 34/7/7 articles in the training/validation/testing set separately. You can find the details of the dataset here [xquad_split](https://huggingface.co/datasets/zhufy/xquad_split). # How to use You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline: ``` python >>> from transformers.pipelines import pipeline >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("zhufy/xquad-th-mbert-base") >>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/xquad-th-mbert-base") >>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer) >>> context = "ดินดอนสามเหลี่ยม ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น ใกล้มิลลิงเงิน อาน เดอ เรน ใกล้ชายแดนเนเธอร์แลนด์ติดกับเยอรมัน โดยมีสาขาของไรน์ไหลเข้าสู่แม่น้ำวาลและเนเดอร์เรน เนื่องจากน้ำส่วนใหญ่จากแม่น้ำไรน์ คำว่า ดินดอนสามเหลี่ยมไรน์ ซึ่งสั้นกว่าจึงเป็นคำที่ใช้เรียกกันทั่วไป อย่างไรก็ดี ชื่อนี้ยังใช้เรียกดินดอนสามเหลี่ยมบริเวณแม่น้ำซึ่งแม่น้ำไรน์ไหลเข้าสู่ทะเลสาบคอนสแตนซ์อีกด้วย ดังนั้นการเรียกดินดอนสามเหลี่ยมซึ่งใหญ่กว่าว่าไรน์-เมิส หรือแม้กระทั่งดินแดนสามเหลี่ยมไรน์ -เมิส-สเกลต์จึงชัดเจนกว่า เนื่องจากแม่น้ำสเกลต์สิ้นสุดที่ดินดอนสามเหลี่ยมเดียวกัน" >>> question = "ดินดอนสามเหลี่ยมในเนเธอร์แลนด์มีชื่อว่าอะไร?" >>> inputs = {"question": question, "context":context } >>> nlp(inputs) {'score': 0.9426798224449158, 'start': 17, 'end': 84, 'answer': 'ไรน์-เมิส ซึ่งเป็นภูมิภาคทางธรรมชาติที่สำคัญของเนเธอร์แลนด์เริ่มต้น'} ```
{"datasets": "xquad.th", "language": "Thai", "tags": ["bert-base"], "task": "extractive question answering"}
task
[ "QUESTION_ANSWERING" ]
45,683
LiteLLMs/gemma-2b-GGUF
LiteLLMs
null
[ "transformers", "gguf", "GGUF", "arxiv:2312.11805", "license:gemma", "endpoints_compatible", "region:us" ]
2024-05-28T14:36:24Z
2024-05-28T14:44:31+00:00
43
0
--- library_name: transformers license: gemma tags: - GGUF extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license quantized_by: andrijdavid --- # gemma-2b-GGUF - Original model: [gemma-2b](https://huggingface.co/google/gemma-2b) <!-- description start --> ## Description This repo contains GGUF format model files for [gemma-2b](https://huggingface.co/google/gemma-2b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/gemma-2b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/gemma-2b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/gemma-2b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/gemma-2b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: gemma-2b # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. 
### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. 
This is especially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

| Benchmark | Metric | 2B Params | 7B Params |
| -- | -- | -- | -- |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. <!-- original-model-card end -->
null
Non_BioNLP
# gemma-2b-GGUF - Original model: [gemma-2b](https://huggingface.co/google/gemma-2b) <!-- description start --> ## Description This repo contains GGUF format model files for [gemma-2b](https://huggingface.co/google/gemma-2b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/gemma-2b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/gemma-2b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/gemma-2b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/gemma-2b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
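As a rough aid for choosing a quantisation level and an `-ngl` value, the bits-per-weight figures listed above translate directly into approximate file and memory sizes. The short Python sketch below is only a back-of-the-envelope estimate under stated assumptions (the ~5% overhead factor and the ~2.5e9 parameter count for gemma-2b are assumptions for illustration; real GGUF files also store metadata and non-quantised tensors, and the KV cache adds more memory at runtime):

```python
# Rough GGUF size estimate from parameter count and bits-per-weight (bpw).
# Assumption: ~5% overhead for metadata / non-quantised tensors; real files vary.

QUANT_BPW = {  # effective bits per weight, as quoted above
    "Q2_K": 2.5625,
    "Q3_K": 3.4375,
    "Q4_K": 4.5,
    "Q5_K": 5.5,
    "Q6_K": 6.5625,
}

def estimate_gguf_gib(n_params: float, quant: str, overhead: float = 1.05) -> float:
    """Approximate on-disk / in-memory size of the quantised weights in GiB."""
    size_bytes = n_params * QUANT_BPW[quant] / 8 * overhead
    return size_bytes / 1024**3

if __name__ == "__main__":
    for q in QUANT_BPW:
        # gemma-2b has roughly 2.5e9 parameters (assumption for illustration)
        print(f"{q}: ~{estimate_gguf_gib(2.5e9, q):.2f} GiB")
```

Comparing such an estimate against your available VRAM gives a reasonable starting point for how many layers to offload with `-ngl`.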
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: gemma-2b # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. 
### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. 
This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | -- | -- | - | -- | --- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. 
In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. <!-- original-model-card end -->
{"library_name": "transformers", "license": "gemma", "tags": ["GGUF"], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license", "quantized_by": "andrijdavid"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
45,684
danielsc/bert_test
danielsc
automatic-speech-recognition
[ "transformers", "pytorch", "bert", "text-classification", "exbert", "autotrain-compatible", "automatic-speech-recognition", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-11-14T20:59:26Z
2022-11-16T18:24:37+00:00
12
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - exbert - autotrain-compatible --- # BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] Hello I'm a fashion model. [SEP]", 'score': 0.09019174426794052, 'token': 4633, 'token_str': 'fashion'}, {'sequence': "[CLS] Hello I'm a new model. [SEP]", 'score': 0.06349995732307434, 'token': 1207, 'token_str': 'new'}, {'sequence': "[CLS] Hello I'm a male model. [SEP]", 'score': 0.06228214129805565, 'token': 2581, 'token_str': 'male'}, {'sequence': "[CLS] Hello I'm a professional model. [SEP]", 'score': 0.0441727414727211, 'token': 1848, 'token_str': 'professional'}, {'sequence': "[CLS] Hello I'm a super model. 
[SEP]", 'score': 0.03326151892542839, 'token': 7688, 'token_str': 'super'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = BertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] The man worked as a lawyer. [SEP]', 'score': 0.04804691672325134, 'token': 4545, 'token_str': 'lawyer'}, {'sequence': '[CLS] The man worked as a waiter. [SEP]', 'score': 0.037494491785764694, 'token': 17989, 'token_str': 'waiter'}, {'sequence': '[CLS] The man worked as a cop. [SEP]', 'score': 0.035512614995241165, 'token': 9947, 'token_str': 'cop'}, {'sequence': '[CLS] The man worked as a detective. [SEP]', 'score': 0.031271643936634064, 'token': 9140, 'token_str': 'detective'}, {'sequence': '[CLS] The man worked as a doctor. [SEP]', 'score': 0.027423162013292313, 'token': 3995, 'token_str': 'doctor'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] The woman worked as a nurse. [SEP]', 'score': 0.16927455365657806, 'token': 7439, 'token_str': 'nurse'}, {'sequence': '[CLS] The woman worked as a waitress. [SEP]', 'score': 0.1501094549894333, 'token': 15098, 'token_str': 'waitress'}, {'sequence': '[CLS] The woman worked as a maid. [SEP]', 'score': 0.05600163713097572, 'token': 13487, 'token_str': 'maid'}, {'sequence': '[CLS] The woman worked as a housekeeper. [SEP]', 'score': 0.04838843643665314, 'token': 26458, 'token_str': 'housekeeper'}, {'sequence': '[CLS] The woman worked as a cook. [SEP]', 'score': 0.029980547726154327, 'token': 9834, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. 
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-cased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
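As a concrete illustration of the 80% / 10% / 10% masking procedure described in the preprocessing section above, here is a small, self-contained Python sketch. It operates on token strings for readability and omits details such as excluding the `[CLS]`/`[SEP]` special tokens; the actual BERT pretraining code applies the same logic to WordPiece ids, so treat this purely as an illustration, not the reference implementation:

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """Illustrative MLM masking: pick ~15% of tokens, then apply the 80/10/10 rule."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)          # only masked positions get a label
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_prob:      # ~85% of positions are left untouched
            continue
        labels[i] = tok                    # the model must predict the original token
        roll = rng.random()
        if roll < 0.8:                     # 80%: replace with [MASK]
            masked[i] = MASK_TOKEN
        elif roll < 0.9:                   # 10%: replace with a random vocabulary token
            masked[i] = rng.choice(vocab)
        # remaining 10%: keep the original token unchanged
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
vocab = ["apple", "river", "blue", "run", "house"]
print(mask_tokens(tokens, vocab))
```

The `labels` list marks exactly which positions contribute to the masked language modeling loss; all other positions are ignored.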
null
Non_BioNLP
# BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] Hello I'm a fashion model. [SEP]", 'score': 0.09019174426794052, 'token': 4633, 'token_str': 'fashion'}, {'sequence': "[CLS] Hello I'm a new model. [SEP]", 'score': 0.06349995732307434, 'token': 1207, 'token_str': 'new'}, {'sequence': "[CLS] Hello I'm a male model. [SEP]", 'score': 0.06228214129805565, 'token': 2581, 'token_str': 'male'}, {'sequence': "[CLS] Hello I'm a professional model. [SEP]", 'score': 0.0441727414727211, 'token': 1848, 'token_str': 'professional'}, {'sequence': "[CLS] Hello I'm a super model. 
[SEP]", 'score': 0.03326151892542839, 'token': 7688, 'token_str': 'super'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = BertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] The man worked as a lawyer. [SEP]', 'score': 0.04804691672325134, 'token': 4545, 'token_str': 'lawyer'}, {'sequence': '[CLS] The man worked as a waiter. [SEP]', 'score': 0.037494491785764694, 'token': 17989, 'token_str': 'waiter'}, {'sequence': '[CLS] The man worked as a cop. [SEP]', 'score': 0.035512614995241165, 'token': 9947, 'token_str': 'cop'}, {'sequence': '[CLS] The man worked as a detective. [SEP]', 'score': 0.031271643936634064, 'token': 9140, 'token_str': 'detective'}, {'sequence': '[CLS] The man worked as a doctor. [SEP]', 'score': 0.027423162013292313, 'token': 3995, 'token_str': 'doctor'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] The woman worked as a nurse. [SEP]', 'score': 0.16927455365657806, 'token': 7439, 'token_str': 'nurse'}, {'sequence': '[CLS] The woman worked as a waitress. [SEP]', 'score': 0.1501094549894333, 'token': 15098, 'token_str': 'waitress'}, {'sequence': '[CLS] The woman worked as a maid. [SEP]', 'score': 0.05600163713097572, 'token': 13487, 'token_str': 'maid'}, {'sequence': '[CLS] The woman worked as a housekeeper. [SEP]', 'score': 0.04838843643665314, 'token': 26458, 'token_str': 'housekeeper'}, {'sequence': '[CLS] The woman worked as a cook. [SEP]', 'score': 0.029980547726154327, 'token': 9834, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. 
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-cased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["exbert", "autotrain-compatible"]}
task
[ "QUESTION_ANSWERING" ]
45,685
LoneStriker/airoboros-c34b-3.1.2-6.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:jondurbin/airoboros-3.1", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-20T18:37:34Z
2023-10-22T14:09:48+00:00
3
0
--- datasets: - jondurbin/airoboros-3.1 license: llama2 --- ### Overview Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros) #### IMPORTANT NOTE - llama-2 chat format!!! This models uses llama-2 chat format, rather than vicuna style user/assistant! This is a breaking change, although most inference systems support llama-2 chat templating. See prompt format below for all of the details. #### Highlights A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3,0) with the following extras: * More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations. * Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs) * Anonymization, e.g. removing names, IP addresses, and/or dates from text. * Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history. * Multi-step instructions with acknowledgement (see below) * De-censorship data (not published) This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay. Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format is llama-2 chat. ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` For multi-turn, the prompt format is as follows: ``` [INST] <<SYS>> You are a helful, unbiased, uncensored assistant. <</SYS>> {prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST] ``` The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1') chat = [ {"role": "system", "content": "You are Bob, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ``` ### Helpful usage tips #### MathJSON Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/ I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py) __Use a very low temperature!__ Example: ``` [INST] <<SYS>> You are an assistant with vast knowledge in all things math. <</SYS>> Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST] ``` Output: ``` The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. 
Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1. Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr². Solution as MathJSON: <mathjson> [ "Multiply", "Pi", [ "Power", 17.2456, 2 ] ] </mathjson> ``` You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response. #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. 
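Before those techniques, a brief aside on the MathJSON section above: the card says the JSON between `<mathjson>` and `</mathjson>` should be extracted and handed to a deterministic evaluator (compute-engine JS or the linked `mathjson.py`). As a rough illustration of that step only — not the actual `mathjson.py` implementation, and covering only the handful of operators used in the example — a minimal extractor/evaluator might look like this:

```python
import json
import math
import re

# Minimal, illustrative subset of MathJSON operators; the real evaluators
# (compute-engine JS, mathjson.py) handle far more than this.
OPS = {
    "Add": lambda *xs: sum(xs),
    "Multiply": lambda *xs: math.prod(xs),
    "Power": lambda a, b: a ** b,
    "Divide": lambda a, b: a / b,
    "Pi": math.pi,
}

def eval_mathjson(node):
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, str):              # named constants such as "Pi"
        return OPS[node]
    op, *args = node                       # e.g. ["Multiply", "Pi", [...]]
    return OPS[op](*(eval_mathjson(a) for a in args))

def solve(model_output: str) -> float:
    match = re.search(r"<mathjson>(.*?)</mathjson>", model_output, re.DOTALL)
    return eval_mathjson(json.loads(match.group(1)))

output = '<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>'
print(solve(output))   # ~934.3, the circle area from the example above
```

Back to the techniques for getting longer responses: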
Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. 
#### Getting longer responses

You can use a few techniques to get longer responses.

Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.

The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.

One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.

Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.

Your response should be approximately 2300 words.
```

Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```

There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.

Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.

Requirements for the next chapter:

1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.

2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.

3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.

Your response should be approximately 650 words.
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
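The model only produces the JSON (or YAML) selection; invoking the chosen tool is up to your application. A hypothetical dispatcher might look like the sketch below. The `file_analytics` implementation and the file path are stand-ins for illustration, not part of any real library:

```python
import json

def file_analytics(action, filters, path="my_text_file.txt"):
    """Toy implementation of the tool described in the example prompt above."""
    if action == "count_occurrences":
        with open(path) as handle:
            return handle.read().count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output: str):
    """Parse the model's JSON function selection and call the matching tool."""
    call = json.loads(model_output)
    func = AVAILABLE_FUNCTIONS[call["function"]]
    return func(call["params"]["action"], call["params"].get("filters", {}))
```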
#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).

The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re
import requests


def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string, return text content
    raise NotImplementedError("hook up a real search here")


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    # Naive URL matcher; tighten as needed.
    return "\n".join(list(set(re.findall(r"(https?://\S+)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with the prompt, return the output
    raise NotImplementedError("call your model here")


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
#### Multi-step instructions with acknowledgement

I don't know how useful this is, really, but I thought I'd add it just in case.

Example:
```
{
  "conversations": [
    {
      "value": "You are a helpful, unbiased, uncensored assistant.",
      "from": "system"
    },
    {
      "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "You can only respond using animal-related emojis.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Precede all responses with \"ZOOZOO:\"",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Include at least one bird emoji in your responses.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Describe a trip to a safari park.",
      "from": "human"
    },
    {
      "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
      "from": "gpt"
    }
  ]
}
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.

The 30b model is built on the original llama, which has a strict non-commercial usage restriction.

The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The models with `-m-` are mistral-7b (apache 2.0).

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI:

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.