id
stringlengths
6
113
author
stringlengths
2
36
task_category
stringclasses
42 values
tags
listlengths
1
4.05k
created_time
timestamp[ns, tz=UTC]date
2022-03-02 23:29:04
2025-04-10 08:38:38
last_modified
stringdate
2020-05-14 13:13:12
2025-04-19 04:15:39
downloads
int64
0
118M
likes
int64
0
4.86k
README
stringlengths
30
1.01M
matched_bigbio_names
listlengths
1
8
is_bionlp
stringclasses
3 values
model_cards
stringlengths
0
1M
metadata
stringlengths
2
698k
source
stringclasses
2 values
matched_task
listlengths
1
10
__index_level_0__
int64
0
46.9k
smilemikan/marian-finetuned-kde4-en-to-ja
smilemikan
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-jap", "base_model:finetune:Helsinki-NLP/opus-mt-en-jap", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-26T03:12:05Z
2024-04-04T06:37:49+00:00
43
0
--- base_model: Helsinki-NLP/opus-mt-en-jap license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-ja results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-ja This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-jap](https://huggingface.co/Helsinki-NLP/opus-mt-en-jap) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4878 - Bleu: 10.4166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-ja This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-jap](https://huggingface.co/Helsinki-NLP/opus-mt-en-jap) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4878 - Bleu: 10.4166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "Helsinki-NLP/opus-mt-en-jap", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-ja", "results": []}]}
task
[ "TRANSLATION" ]
42,649
economy/st_cr_keywords
economy
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "region:us" ]
2025-02-24T22:35:39Z
2025-02-25T00:26:42+00:00
24
0
--- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: curso de marketing digital torremolinos,aprender marketing digital,rock content,estudiar marketing digital torremolinos,platzi curso de marketing digital para negocios - text: 1 bedroom apartments near me alberta edmonton - 1 bedroom apartments in edmonton,2 bedroom apartments in edmonton,right now in edmonton 2 bedroom apartments for rent,low-income apartments edmonton,affordable 55 and older apartments for rent near me edmont - text: winter heating grants for disabled uk england,winter heating grants for disabled uk england barking,check heating grant eligibility 2024,heating grants disabled,grants for beds for disabled adults england,heating grants disabled 2024 england,winter heatin - text: pro bono abogados inmigracion en español usa españa - pro bono abogados inmigracion en español usa,pro bono abogados inmigracion en español usa - criminal lawyer nj 2024,pro bono abogados inmigracion en español in palma - pro bono abogados inmigracio - text: best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.765 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 237 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-----------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Caregiver Jobs | <ul><li>'caregiver workers australia - overnight care for elderly needed immediately,home care work australia - caregiver jobs near me pt,now hiring night and day private caregivers usa - private home care for seniors in setúbal (see caregiver jobs in almada),car'</li><li>'private home care for seniors in my area,caregiver needed immediately,childcare nearby,overnight care for elderly near me,disability support worker,caregiver services nearby'</li><li>'live in caregiver jobs,home caregivers,caregiver services near me nsw,night caregiver hiring springvale,daycare caregivers wanted near springvale,caregiver'</li></ul> | | Janitorial Jobs | <ul><li>'reinigungsfirmen in der nähe,hausmeisterreinigung krefeld,hausmeister-reinigungsdienste,medizinische reinigung krefeld,gebäudereinigungsdienste krefeld,reinigungsarbeiten in der nähe krefeld'</li></ul> | | Remote Jobs | <ul><li>'part time job online,part time job positions,online working from home,remote work applications,work from home no experience no fee,job part time online'</li><li>'part time job online,jobs home working,no experience work from home jobs,job part time online,remote jobs philippines,work from home no experience no fee'</li><li>'ganar dinero trabajando desde casa,ganar dinero trabajando desde casa santo domingo,ganar dinero trabajando desde casa 2024'</li></ul> | | Personal Loans | <ul><li>'solicitar préstamo personal,quiero un prestamo para pagar mis deudas apatzingán - préstamo personal michoacán de ocampo,quiero un prestamo para pagar mis deudas apatzingán michoacán de ocampo,préstamos personales con mal crédito - préstamo person'</li><li>'no down payment first time home buyer,online home equity loan application in powder springs,homebuyer programs in powder springs,zero down payment homebuyer programs powder springs,homebuyer loan programs in powder springs,powder springshomebuye'</li><li>'préstamo personal en línea,prestamos en linea,prestamo personal,prestamos rapidos,prestamos al instante,préstamo personal'</li></ul> | | Stress Relief | <ul><li>'stress relievers in cheboygan,treatment for stress in cheboygan,how to manage major depressive disorder in cheboygan,abogados para accidentes de carros in cheboygan,abogados para accidentes in cheboygan,stress toys in cheboygan,massage in cheb'</li></ul> | | Fitness Coach | <ul><li>"cours d'entraînement près de moi vauvert,cours d'entraînement près de moi,cours d'entraînement près de moi vauvert fr,cours d'entraînement près de moi - cours d'entraînement près de moi france,personal trainer in vauvert,cours d'entraînement pr"</li><li>'at home fitness training,fitness app,lose weight fast,women fitness,free workout plan,fitness training'</li><li>'workout classes near me,workout classes near me leominster - workout classes near me,training classes near me,training classes near me leominster - training classes near me,workout classes near me united states,free gym workout plan'</li></ul> | | Debt Consolidation | <ul><li>'debt help,grants for debt relief,debt relief loans,debt consolidation programs,unsecured personal loans,can you get cash from a heloc'</li><li>'debt consolidation loans bad credit,debt consolidation programs,the best debt relief companies,personal loan,need a loan to pay all my debts,the best debt relief companies'</li><li>'free money to start a small business in united states,small business loan approved regardless of credit,emergency loan personal,same day business loan no credit check,small business immediate funding no credit check,grants loans for small business'</li></ul> | | Apartment Rentals | <ul><li>'apartmens in now 400$/month now! - 2 bedroom apartments in west orange new jersey,apartmens in new jersey now 400$/month now! - 2 bedroom apartments in west orange,apartmens in west orange now 400$/month now! - 2 bedroom apartments in new jersey,1 bedroo'</li><li>'2 bedroom apartments in ludhiana,apartments for seniors near me - 1 bedroom apartments near me punjab ludhiana,1-3 bedroom apartments ludhiana - 2 bedroom apartments in ludhiana,1 bedroom apartments near me 2024,1 bedroom apartments near me punjab ludhian'</li><li>'low-cost 2 bedroom apartments amritsar punjab,2 bedroom apartment rental amritsar,1 bed rooms for rent,pet-friendly apartments in amritsar,apartmens in amritsar now 400$/month now! - 1 bedroom apartments for rent near me 2024,apartmens in amritsar now 400'</li></ul> | | Apartments | <ul><li>'rent apartment near me,houses to rent nearby,two bedroom apartment in bonita springs,cheap apartments nearby,houses for rent bonita springs,{key6}'</li><li>'senior apts nearby,current apartment offers near me (must see),income based luxury apartments near me,house rentals,new senior apt for rent near me,affordable modern apartments near me'</li><li>'two bedroom apartment cherry hill,55 and older communities in cherry hill,2 bedroom for rent utilities included,one bed apt for rent,apartments to rent,{key6}'</li></ul> | | Auto Accessories | <ul><li>'best car accessories,car accessories store,car accessories store northampton'</li></ul> | | Outdoor Furniture | <ul><li>'unsold outdoor furniture deals in lafayette,discount outdoor patio furniture,lafayette outdoor furniture clearance,outdoor patio furniture sale,cheap outdoor patio furniture,tienda de muebles baratos'</li><li>'offerta salotti da giardino economici a castiglione del lago,set tavolo e sedie da giardino,offerta salotti da giardino economici a milazzo,mobili da esterno,arredo giardino scontatissimo,mobili da giardino outlet'</li><li>'muebles de terraza baratos,liquidación de muebles de exterior,ofertas de muebles de exterior sin vender en barcelona,muebles para jardin y terraza,muebles de exterior baratos,sofás de jardín'</li></ul> | | Circuit Board Assembler | <ul><li>'smt pcb assembly,prototype pcb assembly,circuit board manufacturing,wafer fabrication companies,pcb assembly services,semiconductor fabrication services'</li><li>'quick turn pcb manufacturer,prototype pcb assembly,prototype pcb assembly mooresville,heavy copper pcb manufacturers,prototype circuit board assembly mooresville,quick turn pcb fabrication'</li><li>'pcb fab and assembly jackson,circuit board manufacturing jackson,circuit board assembly companies near me jackson,circuit board manufacturing company jackson,circuit board manufacturing companies jackson,circuit board assembly companies jackson'</li></ul> | | Dental Implants | <ul><li>'join $2000 dental implant study near you - how to qualify for an dental implants clinical trial near me,join $2000 dental implant study near you,join $2000 dental implant study near you magnolia,best paid dental implant clinical trials near me 2024,$2000'</li><li>'dental implants cost near me,full mouth dental implants cost,dental implant seniors,affordable dental implants near you,full mouth dental implants san antonio,dental implants san antonio'</li><li>'implantes dentales puente alto maiquetía,implantes dentales puente alto,implantes dentales económicos en maiquetía,implantes dentales venezuela,implantes dentales asequibles cerca de mí,implantes dentales para personas mayores'</li></ul> | | Prostate Cancer Treatment | <ul><li>'first symptoms of prostate cancer and treatment bw,first symptoms of prostate cancer and treatment 2024,symptoms of prostate cancer,what are first hints of prostate cancer and best treatment - prostate cancer early detection,prostate cancer first signs,wh'</li><li>'first symptoms of prostate cancer and treatment ls,first symptoms of prostate cancer and treatment 2024,first symptoms of prostate cancer and treatment(orgovyx nubega),early signs of prostate cancer,prostate cancer symptoms,latest treatment for prostate c'</li><li>'first symptoms of prostate cancer and treatment,first symptoms of prostate cancer and treatment ,checking prostate cancer yourself,prostate cancer signs in males over 50,how to check for prostate cancer at home,prostate cancer treatment age 70,prostate ca'</li></ul> | | Erectile Dysfunction | <ul><li>'erectile dysfunction cure video,instant hard on pills over the counter - paid erectile dysfunction treatment clinical trials,erectile dysfunction cure video delevan,erectile dysfunction cure video new york,erectile dysfunction treatment at home'</li><li>'instant hard on pills over the counter - paid erectile dysfunction treatment clinical trials,erectile dysfunction cure video - erectile dysfunction treatment near me in atlanta georgia,instant hard on pills over the counter - paid erectile dysfunction tre'</li><li>'sertleşmeme sorunu nasıl tedavi edilir,paid erectile dysfunction clinical trials istanbul türkiye,treatment for erectile dysfunction - best treatment for erectile dysfunction,instant hard on pills over the counter - paid erectile dysfunction treatment'</li></ul> | | Astrology | <ul><li>'see my future online,predict my future free,free astrology,my future,get my future,बलेनो 2024'</li><li>'get my future,predict my future husband,my future,free astrology,see my future online,sign a pdf for free'</li><li>'see my future online,download the free kindle app,custom design,my future,get my future,बलेनो 2024'</li></ul> | | Sofas | <ul><li>'sofás baratos 1000,liquidación sofás,vista hermosa de negrete ofertas de sofás 2022 sin vender,mejores sofa cama mexico,san pedro de los naranjos ofertas de sofás 2022 sin vender,sofás en liquidación'</li><li>'sofa chile santiago de chile,sofa metropolitana,sofa santiago de chile - sofa santiago de chile 2024,sofa santiago de chile metropolitana,sofa santiago de chile 2024,sofa santiago de chile chile,sofa santiago de chile cl,sofa cl,sofa - sectional sofas'</li><li>'sofás cama baratos,liquidación sofás,tlalpan ofertas de sofás 2022 sin vender,sofás baratos 1000,mexico,sofás en liquidación'</li></ul> | | Car Auctions | <ul><li>'impound lot cars for sale near can monthly instalment,impound lot cars for sale near monthly instalment,impound lot cars for sale near gambo monthly instalment,impound lot cars for sale near nl monthly instalment,impound lot cars for sale near nl monthly'</li><li>'impound car sales near me buy now pay later,impound car sales near philadelphia buy now pay later,impound car sales near {state } buy now pay later,impound car sales near usa buy now pay late,{key5},{key6}'</li><li>'impound car sales near me buy now pay later,impounded vehicle auction,impound lot cars for sale near st catharines monthly instalment,impound lot cars for sale near me monthly instalment,impound car sales near st catharines buy now pay later,{'</li></ul> | | Fitness Apps | <ul><li>'lose weight and fat in 30 days,how to lose belly fat fast women,apps to get a flat stomach,lose weight fast,how to lose belly fat in 3 days,how to lose belly fat in 10 days'</li><li>'palestre milano,free fitness training course,gym workout,lose weight fast,at home fitness training,gym workout routine'</li><li>'lose weight fast,online personal trainer,lose belly fat in a week app,aerobic exercise at home for belly fat'</li></ul> | | Home Furniture | <ul><li>'canapé convertible soldé,magasins meubles,debarrassage de meubles,achat meubles pas cher,liquidation canapé,offres de meubles invendus'</li><li>'offerte di mobili invenduti,sconto divani,svuota tutto arredamento,mobili in liquidazione,cucine in svendita,offerte di mobili invenduti in civitanova marche'</li><li>'onverkochte meubelaanbiedingen,banken bankstellen,hoekbank goedkoop,relax stoelen,tuin meubel,uitverkoop meubelen'</li></ul> | | Disability Grants | <ul><li>'winter heating grants for disabled uk england,winter heating grants for disabled uk england barking,check heating grant eligibility 2024,heating grants disabled,grants for beds for disabled adults england,heating grants disabled 2024 england,winter heatin'</li><li>'apply now: disabled heating grants uk,claim your disabled heating grant uk,winter heating grants for disabled uk northern ireland londonderry'</li><li>'grants for beds for disabled adults england,grants for beds for disabled adults england bilston,grants for beds for disabled adults,government grants for beds for disabled,grants for beds for disabled adults england - grants for beds for disabled adults,g'</li></ul> | | Online Courses | <ul><li>'free only data analyst courses online with certification,online computer certificate courses free,free data analyst courses online with certification,education college online,.!.....data analyst government course with 100% job placement,data analyst gover'</li><li>'data analyst government course with 100% job placement,free data analyst courses online with certification,data analyst certification course,data analytics courses london,.!.....data analyst government course with 100% job placement,analytics courses'</li></ul> | | Babysitting Jobs | <ul><li>'babysitter jobs near me mississauga - babysitting jobs in mississauga,babysitting jobs in mississauga,babysitter jobs near me - babysitting jobs in mississauga canada 2024,babysitter jobs mississauga,babysitter jobs near me mississauga,babysitter jobs nea'</li></ul> | | Wigs | <ul><li>'hair wig shop near me,natural looking human hair wigs,wigs for women over 50,wig sale,gorgeous hair wigs,glueless wigs on sale'</li><li>'wigs for older women,natural looking human hair wigs,glueless wigs on sale,gorgeous hair wigs,wigs for sale,natural hair wigs sale'</li><li>'wigs for women over 50,ali pearl wigs,human hair wigs for older ladies,short wigs for older women,hair wig shop near me,wigs for older women'</li></ul> | | Car Transport Jobs | <ul><li>'recrutement,chauffeur livreur permis b - chauffeur livreur permis b france,chauffeur livreur permis b - chauffeur livreur permis b hauts-de-france'</li><li>'vacatures voor chauffeurs - single car transport driver jobs in the netherlands heiloo,autovervoer - single car transport driver jobs in the netherlands,vacatures voor chauffeurs - single car transport driver jobs in heiloo 2024,vacatures voor chauffeurs'</li><li>'pkw fahrer stellen - 69 € / stunde | transport aufträge fronreute baden-württemberg deutschland,fahre gesucht - autotransport,autotransport,single car transport driver jobs deutschland'</li></ul> | | Flooring Installation | <ul><li>'piso de epóxi uruguaiana,piso de epóxi rio grande do sul uruguaiana,piso de epóxi para garagem perto de mim uruguaiana,piso de epóxi uruguaiana - piso de epóxi,piso de epóxi uruguaiana 2024,piso de epóxi rio grande do sul'</li><li>'piso de epóxi indiara,piso de epóxi indiara 2024,piso colocado preço indiara,piso de epóxi indiara br,piso emborrachado antiderrapante,piso de epóxi,fornecedores de piso vinílico de luxo perto de mim,piso de epóxi para garagem perto de mim'</li><li>'pisos epoxi - pisos epoxi lomas del mirador buenos aires argentina,pisos epoxi - pisos epoxi lomas del mirador,pisos epoxi - pisos epoxi lomas del mirador buenos aires'</li></ul> | | Car Tires | <ul><li>'tire sales in kissimmee,tires packages in kissimmee,tire clearance sale in kissimmee,discount tires in kissimmee,{key5},{key6}'</li><li>'tire sales near me,tire sales rochester,best tire deals,best tire deals in rochester,tire prices,cheapest place to buy tires'</li></ul> | | Perfumes | <ul><li>'liquidación perfumes originales en madrid,perfumes en oferta,perfumes originales en oferta,liquidación perfumes,precio de perfumes para mujer,perfumes de mujer originales y baratos'</li><li>'perfumes de mujer originales y baratos,perfumes mujer,perfumes originales en oferta,liquidación perfumes originales en astorga,liquidación perfumes,precio de perfumes para mujer'</li><li>'parfums pas chers a prix d usine femme,parfums pas chers a prix d usine homme,parfum femme,eau parfum femme,liquidation de parfums,parfum homme'</li></ul> | | Packing Jobs | <ul><li>'trabalhos de empacotador para trabalhar de casa,empresas de embalagem em campinas,vagas disponíveis para empacotadores iniciantes,empregos de empacotador em armazém com benefícios,empregos de empacotador com contratação imediata,empresas de embalagem'</li><li>'empresas de embalagem em bauru,vagas disponíveis para empacotadores iniciantes,trabalhos de empacotador para trabalhar de casa'</li></ul> | | Waste Management Jobs | <ul><li>'klinische abfallsammelunternehmen berlin,müllentsorgungsdienst,entsorgungsdienste für medizinische abfälle berlin,abfallentsorgungsdienste für unternehmen,industrieabfallentsorgungsunternehmen berlin,müllentsorgungsdiens'</li><li>'müllunternehmen in der nähe,müllentsorgungsdienst,müllentsorgungsdienste,industrieabfallentsorgungsunternehmen stralsund,abfallentsorgung für labore stralsund,klinische abfallsammelunternehmen stralsund'</li></ul> | | Online Shopping Deals | <ul><li>'comprar en pricesmart,caribbean cruise,alaska cruise,apartments,online earning websites,retail'</li><li>'товар из китая,sell products on tiktok,compra en linea,compra aqui,vacation packages,vacations packages'</li><li>'online shopping deals,deals on amazon today,amazon best online deals shopping today,shop discount deals amazon,become an amazon seller,online shopping deals'</li></ul> | | Hepatitis C | <ul><li>'hepatitis c,hepatitis c early symptoms,software expediente clínico electrónico - hepatitis c mx,software expediente clínico electrónico - hepatitis c,hepatitis c méxico,software expediente clínico electrónico méxico'</li><li>'first warnings of hepatitis c in women and treatment india,first warnings of hepatitis c in women and treatment - what are the first symptoms of hep c,what does a hepatitis c rash look like india,what are the first symptoms of hep c india,what are the fir'</li><li>'software expediente clínico electrónico - hepatitis c,software expediente clínico electrónico - hepatitis c navaconcejo,picores por todo el cuerpo,gestion historial clinico 2024,hepatitis c early symptoms,gestion historial clinico navaconcejo,software'</li></ul> | | E-Bikes | <ul><li>'find the best migraine treatment,best migraine treatment near me,migraine massage near me,best neurologist for migraines norristown,natural remedies for headaches,best treatment for chronic migraines near me'</li><li>'günstig ebike kaufen,lichtgewicht e-bikes,elektrische fietsen shop,voorjaarspromotie e-bikes,e-bike kopen,mei aanbiedingen elektrische fietsen'</li><li>'rebike fahrrad,e bike langerringen,e bike auf rechnung bestellen,e bike auf raten kaufen ohne klarna,e bike,e bikes langerringen'</li></ul> | | Buy Now Pay Later Smartphone | <ul><li>'buy now pay later iphone 14 pro max,get now pay later phones,iphone 14 pro max buy now pay later,buy phone pay later,get now pay later phones,buy now pay later phones no deposit'</li></ul> | | Warehouse Jobs | <ul><li>'warehouse packing,warehousing,warehousing logistics services,warehouse packing usa,employment,warehouses nearby'</li><li>'lagerpositionen,nachtschicht-lagerjobs in meiner nähe,lagerkommissionierer und packer,aufträge als kommissionierer und packer,hochbezahlte lagerjobs,lageragentur in der nähe von chur'</li><li>'warehouses nearby,food packaging boxes,warehouse picking and packing,warehouses near me hiring,packing company nearby johnson city,packing company nearby'</li></ul> | | Blood Pressure Treatment | <ul><li>'control bp food,how can i get my blood pressure down right now,healthy foods,bp control food items,treat high blood pressure naturally,garlic control high blood pressure'</li></ul> | | Sleep Apnea | <ul><li>'sleep apnea treatment near minot,treatments for sleep apnea besides cpap,sleep apnea treatment center near me,sleep apnea treatment centers near me,treatments for obstructive sleep apnea,non surgical sleep apnea treatments'</li><li>'sleep apnea treatment cpap alternative,best non prescription sleep aid,treatments for sleep apnea,sleep apnea treatment without cpap,non surgical sleep apnea treatments,sleep apnea treatment'</li><li>'transcend 365 cpap,resmed apnealink,philips dreamwear full face mask with headgear,lofta cpap,resmed airmini travel auto cpap machine,cpap airsense 10 elite'</li></ul> | | Online Jobs | <ul><li>'online jobs in usa work from home,usa online jobs from home,online jobs in usa in lusaka,online jobs in usa near me,online tutors in the usa - web developers in the usa,online tutors in the usa'</li><li>'online jobs remote work,find job canada,apply for online jobs in canada,online marketing jobs,{key5},{key6}'</li></ul> | | Plumbing Jobs | <ul><li>'electricians toronto,plumber work training toronto,electricians on,handyman plumber toronto,electrician training on,plumbing work nearby toronto'</li><li>'plumbing jobs,plumber work,plumbers near me,plumbing work near me,plumbers nearby,plumbing jobs near me'</li><li>'handyman near me plumbing,plumber needed rockland,plumbing work near me rockland,plumbing work nearby,{key5},{key6}'</li></ul> | | Tuxedo Rental | <ul><li>'tuxedo rental price'</li></ul> | | Daycare Jobs | <ul><li>'escola infantil proximo a mim - creche berçario itaquaquecetuba são paulo,creches - escola infantil proximo a mim itaquaquecetuba 2025,creches itaquaquecetuba - escola infantil proximo a mim,creches - escola infantil proximo a mim itaquaquecetuba,creche'</li><li>'je recherche emploi mauritanie nouakchott,je recherche emploi mauritanie 2024,je recherche emploi mauritanie ,je recherche emploi mauritanie mr,je recherche emploi mauritanie - garde personnes agées,garde enfant autour de moi nouakchott,garde personnes a'</li><li>'part time daycare near me,part time daycare near me - day care,online childcare courses with certificate california - part time daycare near me,arabic infant daycare riverside - daycare jobs in riverside california,day care riverside california,arabic inf'</li></ul> | | Truck Driving Jobs | <ul><li>'work abroad poland driver,truck company inventory management systems and real-time warehouse tracking in canada,trucking companies in harare,delivery truck - delivery truck harare zimbabwe,delivery truck - delivery truck harare,truck jobs in harare,delive'</li><li>'local trucking jobs near me,local trucking jobs,truck driving jobs for seniors,truck job for seniors,best trucking companies to work for with no experience,instant truck driving jobs'</li><li>'vagas de motorista de caminhão,vagas de motorista de caminhão perto de mim,vagas de motorista categoria b,pequenas empresas de transporte de carga,pequenas empresas de transporte,truck driver job br,truck driver job espírito santo,truck driver job bras'</li></ul> | | Home Remodel | <ul><li>'home remodeling contractor woodbridge,bathroom home remodeling woodbridge,home remodeling near me woodbridge'</li><li>'home remodeling contractor cherry hill,remodeling home remodeling contractor cherry hill,home remodeling cherry hill,home restoration company cherry hillhome remodeling contractor cherry hill,remodeling home remodeling contractor cherry hill,hom'</li><li>'home remodeling baie-comeau,home remodeling contractor baie-comeau,home remodeling near me baie-comeau'</li></ul> | | Pet Insurance | <ul><li>'pet insurance for dogs,pet insurance comparison,pet insurance companies,best pet insurance for cats,best pet care insurance,dog medical insurance'</li></ul> | | Home Improvement Loans | <ul><li>'home improvement loan quotes,equity home improvement loan,best place for home improvement loan,apply for home improvement loan online,home improvement loan near me,home improvement financing no credit check'</li><li>'loans for home improvement,home improvement loan,personal loan for home improvement,best home improvement loans,loan for home improvement,best loan for home improvements'</li><li>'loan on my home,apply for home improvement loan,home improvement loan application,loan on my house,quick home improvement loans,quick home improvement loan'</li></ul> | | Medical Delivery | <ul><li>'medication delivery,medicine delivery at home,medication delivery san leandro,medicine home delivery,best medication delivery services for seniors in 2023,medicine delivery service near me'</li><li>"medicine delivery companies - find 2023's top search results,medicine delivery,medicine delivery denham springs,medicine delivery service near me,medication delivery denham springs,medicine home delivery"</li><li>'medicine delivery,prescription help for seniors - browse thousands of results,medicine delivery service near me,medicine delivery shelbyville,medication delivery,medicine delivery at home'</li></ul> | | Cell Phones | <ul><li>'最高の電話のお得情報,安い携帯電話,無料電話付きの携帯電話プラン,携帯電話を購入する,iphone pro max 16のお得情報,安い電話のお得情報'</li><li>'best phone deals,buy phones,cheap phone deals,cell phone plans with free phone,cheap phone,best cheap phone'</li><li>'best phone deals,cell phone plans with free phone,best cheap phone,mobile phones,cheap phone,best cheap phones'</li></ul> | | Office Cleaning Jobs | <ul><li>'local commercial cleaning companies,commercial and industrial cleaning,deep office cleaning services,corporate cleaning company,office cleaning sutton,local office cleaning companies'</li><li>'office cleaners boscombe,office cleaning services near me,commercial office cleaners,cleaning company uk,office cleaning service,{key6}'</li><li>'office cleaners,office cleaning services near me,cleaning company uk,office cleaning,office cleaning service,office cleaners birmingham'</li></ul> | | Cosmetics | <ul><li>'cheap luxury makeup,luxury makeup,cheap luxury makeup online,makeup clearance sale online'</li></ul> | | Weight Loss | <ul><li>'what to eat on a keto diet,information,how to lose weight,how to lose stomach fat quickly,food for diet,water diet'</li><li>'water intake per day,lose weight drinking water,what to eat on a keto diet,lose weight fast,water diet,water daily intake'</li><li>'food to lose weight,what to eat on a keto diet,food for diet,drink water,how to lose belly fat in a week,to lose belly fat'</li></ul> | | Credit Cards | <ul><li>'best buy credit card no interest,amazon gift card cashback,credit cards for 500 credit score no deposit,best welcome offer credit card,best 0 apr business credit cards,business card offers'</li><li>'今お金借りる,信用調査なしのクレジットカード,即時承認ローン,ブラックでも作れるカード,簡単承認クレジットカード,収入証明書なしお金借りる'</li><li>'apply credit card online singapore,guaranteed instant credit card approval in 02,guaranteed instant credit card approval in singapore,instant credit card approval,credit card application online instant approval,{key6}'</li></ul> | | Tires | <ul><li>'buy new tires in my area,tire discounters near me 2024 florida,tire discounters near me north port,buy new tires in my area in florida,buy new tires in my area in north port'</li><li>'4 tire deals online,4 tire deals online mchenry'</li><li>'buy new tires in my area,tire discounters near me 2024 new jersey,tire discounters near me east brunswick,buy new tires in my area in new jersey,buy new tires in my area in east brunswick'</li></ul> | | Plumbing Services | <ul><li>'apply now: plumbing jobs,klempner stellenangebote in essen,sanitär jobs deutschland in essen,freie installateurstellen,shk stellenangebote,rohrleitungsbauer stellenangebote'</li></ul> | | Alzheimer's Treatment | <ul><li>"alzheimer primeros sintomas,prueba para alzheimer,alzheimer's test for early detection metropolitana santiago de chile,alzheimer primeros sintomas - prueba para alzheimer chile,alzheimer's test for early detection metropolitana 2024,alzheimer primeros sin"</li><li>'test voor alzheimer - beginnende alzheimer herkennen,alzheimer test online,test voor alzheimer,test voor alzheimer - alzheimer test,test voor alzheimer - beginnende alzheimer herkennen 2024,test voor alzheimer - beginnende alzheimer herkennen papendrecht'</li><li>"alzheimer test 12 questions online south australia,alzheimer test 12 questions online - 10 early signs of alzheimer's au australia,treat migraine symptoms,test for alzheimers disease,alzheimer early signs - what are the 7 signs of alzheimer's,tests for de"</li></ul> | | Used Cars | <ul><li>'αναστασακησ hair clinic greece - αγορα αυτοκινητου greece,αναστασακησ hair clinic greece - αγορα αυτοκινητου,αναστασακησ hair clinic greece,αναστασακησ hair clinic greece larissa'</li><li>'acheter voiture occasion,acheter voiture occasion vevey,acheter voiture occasion {country},acheter voiture occasion pres de chez moi vevey'</li><li>'lista de precios autos - autos usados en oferta con hasta 50% de descuento en méxico.,lista de precios autos - ofertas coches seminuevos baratos méxico san nicolás de los garza nuevo león,ofertas coches seminuevos baratos - autos sin enganche y mensua'</li></ul> | | Kidney Cancer | <ul><li>'treatment of diabetic kidney disease,kidney disease treatment guidelines,chronic kidney disease stage 3 treatment,chronic kidney disease treatment medication,stage 5 kidney disease treatment,chronic kidney disease treatment'</li><li>'stage 2 kidney disease treatment,treatment for kidney disease,stage 5 kidney disease treatment,kidney disease treatment guidelines,kidney disease symptoms in females treatment,kidney disease treatment'</li><li>'stage 3b chronic kidney disease in illinois,stage 3b chronic kidney disease in naperville,stage 3b chronic kidney disease naperville,stage 3b chronic kidney disease illinois,new drug for kidney disease,{key6}'</li></ul> | | Chronic Kidney Disease | <ul><li>'chronic kidney disease solution en espanol,chronic kidney disease treatment en espanol,memphis chronic kidney disease solution en espanol,ulcerative colitis treatment video'</li></ul> | | Oven | <ul><li>'complete powder coating systems nassau,induction furnace nassau comilla,powder coat in comilla,powder coat near me,induction furnace nassau bangladesh,induction furnace nassau bd'</li></ul> | | Employee Retention Credit | <ul><li>'how to apply for employee retention credit,employee retention credit deadline 2023,tax incentives,retention policies,employee benefit,employee retention credit'</li><li>'employment retention credit,worker comp coverage,1 employee payroll,employee retention credit apply,leadership skills training,employee retention credits'</li></ul> | | FHA Loans | <ul><li>'fha loans with no credit scores and bad credit tauranga,no down payment first time home buyer coromandel forest park,no down payment first time home buyer swanson,fha loans with no credit scores and bad credit wanaka,fha loans with no credit score'</li><li>'no down payment first time home buyer,fha loan for a credit score under 580,first-time home loans with zero down payment,instant home loan approval online,zero down payment home loan,first home buyers'</li><li>'first home grant,loan to buy house,fha loan application - check your 2023 eligibility,cancel timeshare legally,very bad credit home loans,equirements for fha loan'</li></ul> | | Bedding | <ul><li>'pillow shams,king size bedding,king comforter sets,bedroom comforter sets,bedding sets,bedding sale'</li></ul> | | Mental Health | <ul><li>'check your mental health 2025,check your mental health 2025 br,mental wellness check'</li></ul> | | Senior Apartments | <ul><li>'rent senior apartment nearby sugar land ,low rent senior apartments ,55 up senior living communities ,senior housing for rent (1 and 2 bedrooms) - see prices,independent senior apartments with in home care services nearby ,c'</li><li>'1 bedroom apartments near me - apartments for rent for seniors 55 and older walsall united kingdom,1 bedroom apartments near me - apartments for rent for seniors 55 and older,1 bedroom apartments near me - apartments for rent for seniors 55 and older wals'</li><li>'senior apartments canada,senior housing options in grafton,age-restricted apartments,senior living amenities,retirement apartments in grafton,55 apartments in grafton'</li></ul> | | Backyard Fence | <ul><li>'same day fence installation,fence contractors,fence repair,fence companies spruce grove,backyard fence cost,fence professionals near me'</li></ul> | | Construction Jobs | <ul><li>'construction companies in kumasi - construction company in kumasi,construction companies in kumasi,no experience construction jobs kumasi'</li><li>'construction companies in san juan - construction company in san juan philippines,construction companies in san juan - construction company in san juan,construction company in san juan - construction companies near me san juan'</li><li>'construction jobs for freshers,construction jobs in india,construction jobs salary,private construction jobs,construction jobs in chandigarh,construction jobs in chandigarh-mohali-panchkula'</li></ul> | | Medication Delivery Jobs | <ul><li>'medical courier jobs gerlingen,arzneimittelzustellung 24/7 gerlingen,travel medical assistant jobs gerlingen,suche mitarbeiter heizung sanitär,vollzeitstelle in der medizinischen versorgung gerlingen,free job posting'</li><li>'lieferdienst medizintechnik,travel medical assistant jobs in monchengladbach,arzneimittelzustellung 24/7 in monchengladbach,medical assistant jobs near me,krankenhauszustellung notfall,medical courier jobs in monchengladbach'</li><li>'medical courier jobs gross wittensee,linkedin hiring gross wittensee,medical courier jobs gross wittensee,arzneimittelzustellung 24/7 gross wittensee,free job posting,vollzeitstelle in der medizinischen versorgung gross wittensee'</li></ul> | | Auto Insurance | <ul><li>'55 and older communities in,cheapest car insurance quote,insurance quotes edmonton,cheapest insurance,cheap internet plans for seniors,senior car insurance'</li><li>'best senior car insurance - senior auto insurance,car insurance quote,cheap car insurance quotes,drivers: cheap auto insurance quotes,cheap car insurance skipsea,best cheap car insurance'</li><li>'günstigste kfz-versicherungsangebote donaueschingen,günstige kfz versicherungen,auto versicherung preis,autoversicherungen vergleichen,{key5},{key6}'</li></ul> | | Life Insurance | <ul><li>'cheapest insurance kl,seguro de vida hipoteca,life insurance rates by age,$9.95 life insurance,whole life insurance quotes,simulador seguro coche'</li><li>'life insurance for over 65,seniors: $13/month life insurance quotes,seniors: $13/month life insurance quotes,seniors $13 month life insurance quotes,life insurance rates by age,life insurance quotes seniors'</li><li>'whole life insurance quotes cagayan de oro,compare life insurance quotes msr,$9.95 life insurance msr,select quote life insurance msr,life insurance quote term msr,whole life insurance quote msr'</li></ul> | | Forklift Jobs | <ul><li>'gabelstapler in meiner nähe, lagermitarbeiter staplerfahrer meppen - 127 jobs in meppen für staplerfahrer {country}, gabelstapler in meiner nähe {country}, gabelstapler in meiner nähe meppen, gabelstapler-jobs in vollzeit {country} {state}, jobs als g'</li><li>'praca na wózkach widłowych w pobliżu poznan,praca na wózkach widłowych w magazynie w poland - praca na wózkach widłowych w pobliżu,praca na wózkach widłowych w pobliżu - praca na wózkach widłowych w pobliżu poznan,praca na wózkach widłowyc'</li><li>'vorkheftruck bij mij in de buurt,werk zoeken magazijnmedewerker - heftruckbanen fulltime in belgium be,werk zoeken magazijnmedewerker - heftruckbanen fulltime in belgium mechelen be'</li></ul> | | Home Care Jobs | <ul><li>'hiring for home care [city},at home care for elderly,private home care for elderly near me,family home care services,home care services,homecare uyo'</li><li>'home elderly care services,elderly home care,hiring for home care benson,home care services,home instead care agency,private home care for elderly near me'</li><li>'finding in home care for elderly,home instead care agency,at home care for elderly,hiring for home care whyalla,{key5},{key6}'</li></ul> | | Clinical Trials | <ul><li>'participate in new studies for dental implants in east bank,participate in new studies for dental implants in east bank - how to qualify for an dental implants clinical trial near me,how to qualify for an dental implants clinical trial near me,participate'</li><li>'high paying depression clinical trials online,highest paid depression clinical trials near me cherry hill new jersey,get paid for depression clinical trials near me,best paid depression clinical trials near me 2024,lucrative depression and anxiety clinica'</li><li>'high paying hearing loss clinical trials near me,high paying hearing loss clinical trials in sacramento,best paid hearing loss clinical trials near me 2024,high paying hearing loss clinical trials near me us'</li></ul> | | Doors | <ul><li>'industrial roller doors,exterior front entry doors - exterior doors with glass 2024,36 x 80 fiberglass exterior door,front door replacement,doors in montevideo - exterior front entry doors (see doors in montevideo) montevideo department,exterior front ent'</li><li>'industrial roller doors,roller shutter doors,front door quotes near me - quote front door,front door quotes near me east kazakhstan,front door quotes near me (see doors in ust-kamenogorsk),front door replacement,doors in ust-kamenogorsk - exterior front e'</li><li>'exterior front entry doors - exterior front entry doors mapo-gu,doors in mapo-gu,exterior front entry doors,exterior front entry doors - exterior doors with glass seoul,doors in mapo-gu - exterior front entry doors 2024,doors in mapo-gu - exterior front e'</li></ul> | | Cars For Sale | <ul><li>'neuwagen zum halben preis thailand 2024,unverkaufte 2023 suvs zu verkaufen chonburi,bang lamung: unverkaufte beschlagnahmte autos werden in 2025 fast verschenkt - gebrauchtwagen preise,bang lamung: unverkaufte beschlagnahmte autos werden in 2025 fast vers'</li><li>'buy used cars,best cheapest used vehicles,used cars for sale,bank owned cars,used vehicles,buy a car'</li></ul> | | Paid Sperm Donation | <ul><li>'sperm clinic nearby,donate sperm in {country},sperm donor clinic in {country},sperm donor clinic in ingleside,sperm donor companies in {country},donate sperm in ingleside,sperm donor clinic,donate sperm,where to donate sperm near me'</li><li>'sperm clinic nearby,donate sperm in {country},sperm donor clinic in {country},sperm donor clinic in laramie,sperm donor companies in {country},donate sperm in laramie,sperm donor clinic,donate sperm,where to donate sperm near me'</li></ul> | | Senior Living | <ul><li>'senior housing near me,senior homes nearby coventry,senior homes nearby'</li><li>'sorrento: new two bed senior apartments are stunning! (click & see)'</li></ul> | | Pro Bono Attorneys | <ul><li>'criminal lawyer nj - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa 2024,criminal lawyer nj - pro bono abogados inmigracion en español usa país vasco,pro bono abogados inmigracion en espa'</li><li>'pro bono abogados inmigracion en español usa piera - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa españa,pro bono abogados inmigracion en español usa - pro bono lawyers for immigrants'</li><li>'criminal lawyer nj - pro bono abogados inmigracion en español usa 2024,criminal lawyer nj - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa valencia,pro bono abogados inmigracion en españo'</li></ul> | | Cyber Security Jobs | <ul><li>'paid cybersecurity training - cyber security paid training remote,cyber security paid training online quebec canada,cyber security paid training online 2025'</li><li>'paid cyber security training,cybersecurity paid training,cyber security paid training online abbotsford'</li><li>'cyber security paid training online - cyber security paid training online harare,cyber security paid training online zimbabwe,paid cybersecurity training - cyber security paid training online,paid cyber security training - paid cyber security training 202'</li></ul> | | Dental Fillings | <ul><li>'dental crowns clinic near me in st. petersburgflorida,dental crowns - dfenatal crowns near you in st. petersburgflorida,walk in dentist near you in st. petersburgflorida,emr systems for small practice'</li></ul> | | Dog Food | <ul><li>'best dog food,best food brands for dogs,best dry dog food brands,dog food brands,avoid these dog food,best dog food brands'</li></ul> | | Retirement Planning | <ul><li>'retirement options'</li></ul> | | Urgent Care | <ul><li>'urgent care with ultrasound'</li></ul> | | Prostate Cancer | <ul><li>'first signs of prostate cancer,enlarged prostate,symptoms of prostate cancer in seniors,psa symptoms of prostate cancer,metastatic prostate cancer,prostate cancer symptom checker'</li><li>"what's new in prostate cancer treatment,prostate cancer treatment guidelines 2023,prostate cancer treatment latest news,prostate cancer,how long can you live with gleason 6 prostate cancer,stage 5 prostate cancer treatment"</li></ul> | | Medical | <ul><li>'donation center nearby uttar pradesh,donation center nearby,sperm clinic nearby india - haartransplantatie in nederland uttar pradesh in amroha,cash jobs india,cash jobs,donation center nearby - cash jobs'</li><li>'donation center nearby india,sperm clinic nearby india - haartransplantatie in nederland national capital territory of delhi delhi,donation center nearby,cash jobs 2024,cash jobs national capital territory of delhi,donation center nearby - cash jobs'</li><li>'donation center nearby india,donation center nearby punjab,cash jobs india,donation center nearby,cash jobs,sperm clinic nearby india - invisalign cerca de mí india ludhiana (see medical services donor in ludhiana) 2024'</li></ul> | | Macular Degeneration | <ul><li>'treatments for macular degeneration,cause macular degeneration,eye medication for macular degeneration,early signs of macular degeneration and treatment avacincaptad,macular degeneration care,treatments for dry eye disease'</li><li>'anti vegf for macular degeneration,dry eyes artificial tears,eylea wet amd,dry eyes artificial tears,cure for macular degeneration,treatments for macular degeneration'</li></ul> | | Certificate of Deposit | <ul><li>'rate cds'</li><li>'discover bank cd rates'</li></ul> | | Senior Cruises | <ul><li>'cruise holidays for singles over 50,cruises for single seniors over 60,cruises for single adults only,cruises for single adults only united kingdom,cruises for single adults only (see senior singles cruises in york),cruise holidays for singles over 50 - c'</li><li>'cruises for single adults only,singles over 50 cruises 2024,cheap singles cruises,cruise holidays for singles over 50,cruises for single adults only england,cruises for single seniors over 60 luton'</li></ul> | | Caregiver Services | <ul><li>'caregiving agencies nearby,private sitters for elderly near me,overnight caregiver near me,night caregiver jobs near milpitas,home care jobs near me,caregiver job'</li><li>'caregiving agencies nearby,caregiver service near london,caregivers needed nearby,apply for caregiver jobs,{key5},{key6}'</li><li>'caregiving agencies nearby,private sitters for elderly near me,caregivers needed nearby,night caregivers wanted,caregiver job,private overnight caregiver jobs near me'</li></ul> | | Skin Care | <ul><li>'skin tightening facial,skin tightening treatment,skin tightening procedures face,face wrinkles treatment,top rated wrinkle creams anti aging'</li><li>'botox for face wrinkles,botox treatment for face near me,face slimming botox,botox treatment for face,botox facial near me'</li><li>'skin tightening treatment,skin tightening facial,face wrinkles treatment,skin tightening,face skin tightening'</li></ul> | | Breast Cancer | <ul><li>'signs of breast cancer - early breast cancer signs and symptoms usa,easy signs,early breast cancer signs and symptoms usa - early breast cancer signs and symptoms usa 2024,breast cancer symtoms in sarnia,breast cancer symtoms near me,early breast cancer s'</li><li>'breast cancer first symptoms,early breast cancer signs and symptoms usa - signs of breast cancer us,early breast cancer signs and symptoms usa - breast cancer signs in females over 50,early breast cancer signs and symptoms usa ,early breast cancer signs a'</li><li>'breast cancer first symptoms - early breast cancer symptoms,early breast cancer symptoms,breast cancer beginning'</li></ul> | | Electrician Jobs | <ul><li>'jobs als elektrikerhelfer,elektrikerfirmen,elektriker-assistent-jobs,jobmöglichkeiten als elektriker,elektriker jobs in oppenweiler,elektriker-jobs in meiner nähe'</li><li>'elektriker-jobs in meiner nähe,jobmöglichkeiten als elektriker,elektriker jobs in paderborn,elektrikerfirmen,jobs als elektrikerhelfer,elektriker-assistent-jobs'</li><li>'jobs als elektrikerhelfer,jobmöglichkeiten als elektriker,elektriker-assistent-jobs,elektriker jobs in unterwellenborn,elektriker-jobs in meiner nähe,elektrikerfirmen'</li></ul> | | Migraine Treatment | <ul><li>'best treatment for chronic migraines near me sandy,migrane relief sandy,best migraine treatment near me sandy,headache and migraine relief sandy'</li><li>'migraine treatment study trial in my area,best migraine treatment nearby,headache specialist in my area,migraine treatment study trial mill valley,migraine relief nearby mill valley,headache and migraine relief mill valley,buy migraine relief'</li><li>'buy migraine relief largo,best treatment for chronic migraines near me largo,headache specialist in my area largo,best migraine treatment near me largo,migraine relief online largo'</li></ul> | | Digital Marketing Jobs | <ul><li>'marketing digital trabalhar online,marketing digital trabalhar online inhumas,marketing digital trabalhar online brasil,pós marketing digital - clique aqui - inhumas goiás,pós marketing digital - clique aqui - inhumas,analise de dados marketing - marke'</li></ul> | | Services | <ul><li>'rent a car on monthly basis,promo croisiere,los mejores suv de lujo no vendidos de 2022-2024 ya casi a la venta,renta de carros cerca de mi, precios de los suv no vendidos en 2024,topcar'</li><li>'precios de los suv no vendidos en 2024,lloguer vehicles,las alquiler de coches, los suv usados \u200b\u200bmás baratos,los mejores suv de lujo no vendidos de 2022-2024 ya casi a la venta,rav4 deals'</li></ul> | | Belly Fat Removal | <ul><li>'home remedies to reduce belly fat without exercise,best stomach reduction exercises 2023,belly fat reduction,lose stomach fat,foods that burn belly fat,lose belly fat women'</li><li>'lose belly fat exercise,belly fat reduction,lose belly fat fast,home remedies to reduce belly fat without exercise,exercises to lose belly fat at home,foods that burn belly fat'</li><li>'lose stomach fat,lose belly fat fast,exercises to lose belly fat at home,best stomach reduction exercises 2023,belly fat reduction,home remedies to reduce belly fat without exercise'</li></ul> | | Security Jobs | <ul><li>'सुरक्षा गार्ड की नौकरियां indore,सुरक्षा गार्ड की नौकरियां indore - सुरक्षा गार्ड की नौकरियां indore madhya pradesh,नि'</li><li>'praca w ochronie w pobliżu mnie 2025 poland - praca w ochronie w pobliżu mnie 2025,firma ochroniarska lodz poland - praca w ochronie w pobliżu mnie 2025 poland,praca w ochronie w pobliżu mnie 2025 poland,firma ochroniarska lodz poland,praca w ochronie'</li><li>'security baustellenbewachung lohne - security baustellenbewachung deutschland niedersachsen,lohne it security dienstleistung - security baustellenbewachung lohne,security baustellenbewachung deutschland - security mitarbeiter gesucht niedersachsen'</li></ul> | | Malpractice Attorneys | <ul><li>'medical malpractice lawyer in'</li></ul> | | Ultherapy | <ul><li>'ultherapy best price,cost of thread face lift,ultherapy near me,ultherapy specials near me,ultherapy for 70 year old,ultherapy treatment areas'</li><li>'lifting sin cirugia ultherapy,ultherapy precio,ultherapy cerca de mí,quitar arrugas con laser precio,ultherapy para personas de 70 años,mejor precio para ultherapy'</li></ul> | | Wrinkle Treatment | <ul><li>'botox injections,best wrinkle treatment,botox injections for forehead wrinkles in houston,dermal fillers cost,wrinkle treatment near me,anti-aging treatments'</li></ul> | | Lawn Mowers | <ul><li>'top 10 zero turn mowers'</li></ul> | | Depression Treatment | <ul><li>'drugs to treat severe depression,depression treatment center in sacramento,drugs to treat anxiety and depression,depression and anxiety medication,depression anxiety stress test,take a depression test right now'</li><li>'anxiety and depression treatment memphis,treatment centers memphis,depression and therapy memphis,stress anxiety depression,online depression and anxiety medication,betterhelp as a therapist memphis'</li><li>'take depression test,depression test online,mental health depression test,take depression test india mumbai,test to see if you have depression,test to know if you have depression'</li></ul> | | Food Packing Jobs | <ul><li>"emballage sur voie d'emballage tremblay-en-france,emballage pour livraison de produits surgelés,entreprise d'emballage professionnelle tremblay-en-france,emballage alimentaire de livraison,emballage alimentaire refermable,entreprises d'emballage alimenta"</li><li>"emballage alimentaire de livraison,emballage alimentaire refermable,entreprise d'emballage professionnelle pradines,entreprises d'emballage alimentaire sous contrat pradines,entreprise d'emballage à proximité,emballage sur voie d'emballage pradines"</li><li>'packing company near me,resealable food packaging,delivery food packaging,packaging for frozen food delivery,contract food packaging companies singapore,packlane packaging singapore'</li></ul> | | Addiction Counseling | <ul><li>'counseling degree programs ,counseling degree,counseling degree programs,recovery treatment center,addiction treatment,therapy counseling,recovery treatment center,rehab treatment centers nearby,counseling courses'</li></ul> | | Home Decor | <ul><li>'home decor interior design,elegant luxury home decor - home decor interior design 2024,elegant luxury home decor (see home decor in rawalpindi)'</li><li>'home decor interior design noida,home decor in noida,home decor interior design uttar pradesh'</li></ul> | | Leadership | <ul><li>'leadership training programs'</li><li>'leadership programs in silver spring,leadership training near me silver spring,leadership courses in silver spring,leadership and management courses maryland,leadership certification in maryland,leadership training courses maryland'</li></ul> | | HVAC Services | <ul><li>'best hvac company near me,local heating and air contractors,hvac companies near me'</li><li>'best hvac service near me borger,i need hvac service near me borger,heating and cooling near me borger'</li><li>'hvac companies near me ,hvac near me ,hvac service near me ,best heat and air service near me ,best heating and ac companies near me'</li></ul> | | Bipolar Treatment | <ul><li>'participate in studies for new bipolar medication in edmond,participate in studies for new bipolar medication in edmond - how to qualify for a new bipolar medication clinical trial near me,how to qualify for a new bipolar medication clinical trial near me'</li><li>'participate in studies for new bipolar medication in livermore,participate in studies for new bipolar medication in livermore - how to qualify for a new bipolar medication clinical trial near me,how to qualify for a new bipolar medication clinical trial n'</li><li>'bipolar disorder treatment centers,best bipolar medication,bipolar disorder specialist,bipolar depression treatment,inpatient bipolar treatment,vonore bipolar treatment centers'</li></ul> | | Mental Health Test | <ul><li>'mental health test free,mental health check,mental illness test for teenagers india,mental illness test for teenagers in,mental illness test for teenagers lucknow,discover your mental wellness in - discover your mental wellness 2025,mental health test fre'</li></ul> | | Dental Implant Grants | <ul><li>'apply for dental implant grant in north carolina united states - full set of teeth implants cost 2025,apply for free dental implants (see dental implant grants in decatur),apply for dental implant grant illinois - dental implant grants near me decatur,hos'</li><li>'ayuda para implantes dentales para personas mayores 2024,clínicas dentales gratis cerca de mi,solicite implantes dentales gratuitos puebla de zaragoza méxico'</li></ul> | | Phone Service | <ul><li>'cellphone deals,cellphone plans,fold 5 samsung phone,cheap cellphone plans,cellphone or cell phone,prepaid cellphone'</li><li>'free iphone phone deals,get a new phone for free,best phone deals,iphone 13 pro for cheap,free iphone government phone,best new phone deals'</li></ul> | | Pest Control | <ul><li>'termite and pest control near me,pleasanton pest control companies,local pest control near me,termite and pest control near me,pest control near me,pest control near me'</li><li>'termite treatment brooklyn,termite inspection and pest control,local pest control near me,residential pest control services near me,local pest control service,rodent removal companies near me'</li><li>'nearby exterminator company valley stream,pest control companies near me,pest control products,pest control home,pest control near me valley stream,pest control prevention remedies'</li></ul> | | Meal Delivery Services | <ul><li>'home meal delivery service healthy,cheapest meal delivery service,weight loss diet meal kit delivery,fresh meals delivered to your door,weight loss meal delivery,best weight loss meal plan delivery'</li></ul> | | Cleaning Jobs | <ul><li>'locuri de muncă în românia suceava - angajari suceava,angajari suceava,locuri de muncă în românia suceava - angajari suceava ro,locuri de muncă în românia suceava - angajari suceava 2024,angajari suceava - locuri de muncă în românia suceava,lo'</li><li>'servicios de limpieza de empresas cercanos,servicios de limpieza de oficinas cerca de mí,servicios de limpieza de oficinas cerca de mí madrid,servicios de limpieza de oficinas cerca de mí madrid madrid 2024 españa - servicios de limpieza de oficinas c'</li></ul> | | Medical Alert Systems | <ul><li>'medical alert systems for seniors with fall detection,personal monitor,medicare benefits for seniors,senior alert systems,medical alert systems for seniors with fall detection in united states,health insurance covered medical alert systems for s'</li><li>'best sleep and stress tracker,mattress sleep tracker,sleep tracker samsung watch,sleep tracking mattress,most accurate wearable sleep tracker,sleep monitoring system'</li><li>'medical alert system for seniors,alert devices,medical alert system for senior with fall detection near me,medical alert monitoring for seniors,medicare benefits for seniors,senior alert systems'</li></ul> | | Coats | <ul><li>"heywood: women's coat sale: prices that will surprise you!"</li></ul> | | Eyelid Surgery | <ul><li>'en español eyelid surgery in cuautitlán izcalli,en español eyelid lift in cuautitlán izcalli,méxico eye bag surgery en español in cuautitlán izcalli,eyebag removal near me en español,eyelid tightening near m'</li></ul> | | Hotels | <ul><li>'near hotel,near hotel siddhapur'</li></ul> | | Paid Clinical Trial | <ul><li>'participate in new studies for rheumatoid arthritis in duluth,participate in new studies for rheumatoid arthritis in duluth - how to qualify for a rheumatoid arthritis clinical trial near me,how to qualify for a rheumatoid arthritis clinical trial near me'</li></ul> | | Massage Services | <ul><li>'sentinal one,full body massage services,relaxation massage spa,أقرب مركز مساج,مركز مساج,massage spa nearby'</li><li>'massage training - working as a massage therapist,massages nearby,registered massage therapist sarnia,home service massage near me,special asian girl massage vista,massage for migraine near me'</li><li>'massage complet du corps à proximité,massothérapie à proximité,spa de massage à proximité,services de massages,body massage near me,massage du dos à proximité'</li></ul> | | Veterans Assistance | <ul><li>'va disability claim,disabled veteran benefits,benefits for the employees,disability benefits,va disability benefits,abogado inmigracion consulta'</li></ul> | | Cloud Storage | <ul><li>'best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox'</li><li>'programa facturación online,cloud storage managed services in united states uruguay 2024 - programa facturación online,cloud storage in ,storage data protection uruguay,cloud storage near me,cloud based storage services'</li></ul> | | Window Replacement | <ul><li>'indio window replacement,window replacement companies in the indio area,window and door replacement indio'</li><li>'window replacement,window replacement companies in my area,residential window replacement near me,window replacement in bell gardens'</li><li>'city} window replacement,window replacement companies in the bemidji area,window and door replacement bemidji,window replacement companies in my area'</li></ul> | | Electric Cars | <ul><li>'small electric car for seniors in india,small electric car for seniors in dehradun,small electric car for seniors in uttarakhand,electric cars for seniors in dehradun,small electric car for seniors,small electric car for seniors india'</li><li>'2012 honda civic sdn lx,2012 honda civic sdn lx usa,2012 honda civic sdn lx in st james,2012 honda civic sdn lx st james,2012 honda civic sdn lx in new york,2012 honda civic sdn lx new york'</li><li>'small electric car for seniors - electric cars for seniors in dehradun,electric cars for seniors in dehradun,small electric car,electric vehicle cost,electric cars for seniors in 2024 (see electric cars for seniors in dehradun),electric cars for seniors i'</li></ul> | | Wart Treatment | <ul><li>'immunotherapy for warts,common wart treatment,genital wart removal ga,plantar wart removal,wart freezing treatment,natural remedies for warts'</li></ul> | | Energy Drinks | <ul><li>'boost energy drink,natural energy drinks for intercourse,best homemade energy booster drink,electrolyte drink,natural healthy energy drink for instant'</li><li>'natural energy drinks for intercourse,natural energy drink for stamina increase,natural energy drinks for bed,boost energy drink,electrolyte drink'</li></ul> | | DIY Christmas Gifts | <ul><li>'overactive bladder leakage,how do you get overactive bladder,what is the best medicine for overactive bladder,overactive bladder treatment,overactive bladder medication,overactive bladder symptoms'</li></ul> | | Welding Jobs | <ul><li>'welding companies durban - welding companies kwazulu-natal,welding companies durban za,welding companies durban - welding jobs nearby,job apply,welding companies kwazulu-natal (see welding jobs in durban),welding companies durban kwazulu-natal'</li><li>'welding jobs nearby,welding companies adelaide 2024,welding jobs hiring nearby'</li><li>'job apply,welding and fabrication company 2024,welding and fabrication company - welding companies abuja,welding and fabrication company - welding companies abuja nigeria,welding and fabrication company - welding companies abuja ng,welding companies abuja'</li></ul> | | Eye Bag Surgery | <ul><li>'under eye bags home treatments,how to remove eyebags easily,how to remove eyebags easily - under eye bags treatment 2024,how to remove eyebags easily - under eye bags treatment nashville,under eye bags home treatments united states,under eye bags home tre'</li></ul> | | UTI Treatment | <ul><li>'uti treatment over the counter,uti treatments,best antibiotic for uti in females,natural cure treatment,uti infection treatment,treatment for uti'</li></ul> | | Teeth Straightening | <ul><li>'invisalign moins cher,invisalign prix a ville,alignement dentaire prix,appareil dentaire adulte invisible,appareil dentaire adulte prix,alignement dentaire invisible'</li></ul> | | Bipolar Disorder | <ul><li>'take depression test mumbai,take bipolar test mumbai,take bipolar test 2024 in india,take depression test maharashtra,bipolar symptoms quiz maharashtra,take anxiety and bipolar test maharashtra,7 signs of bipolar disorder 2024 in - bipolar self test mahar'</li><li>'test de ansiedad y depresion santo domingo,prueba de depresión y ansiedad,test de ansiedad y depresion'</li><li>'masennustesti pirkanmaa,kaksisuuntainen masennustesti - masennustesti,masennustesti finland,masennustesti pirkkala,masennustesti - kaksisuuntainen masennustesti,masennustesti fi'</li></ul> | | Hair Transplant | <ul><li>'ideal laser hair removal prices,hair transplant,hair clinic nearby,hair clinic kot ghulam muhammad,hair transplant cost,hair transplant cost kot ghulam muhammad'</li><li>'prezzo trapianto capelli in italia,trapianto capelli prezzo a naples,trapianto di capelli in italia prezzo,trapianto capelli italia dove farlo,trapianto capelli italia costi,trapianti capelli turchia'</li><li>'best hair transplant in turkey,hair transplant turkey cost,turkey hair transplant center,best hair transplantpennsauken'</li></ul> | | Painting Jobs | <ul><li>'painting contractors - interior painter in my area 2024 us pune in,painting contractors - interior painter in my area 2024 us 2024,trabajo de pintura,interior painter in my area 2024 in,i am a painter looking for work,interior painter in my area 2024 - pa'</li></ul> | | Car Carpets | <ul><li>'car carpets in astana,all weather car floor mats astana,all weather car floor mats kazakhstan,all weather car floor mats astana,car carpets near me,all weather car floor mats 2024'</li><li>'rubber truck floor mats 2024 - rubber truck floor mats (see car carpets in aktobe) kz,rubber truck floor mats 2024 - rubber truck floor mats (see car carpets in aktobe),best car floor mats,all weather car floor mats kazakhstan,all weather car floor mats,r'</li></ul> | | Botox | <ul><li>'botox near me milton,find best injection prices milton,prices for botox near me milton,best botox doctor for seniors near me milton'</li><li>'prices for botox near me,botulinum botox near me,juvederm voluma specials near me in horse cave'</li><li>'topical botox in stony point,botox for fine lines forehead near me,botox in neck lines,botox for feet sweating'</li></ul> | | Skin Rejuvenation | <ul><li>'laser skin rejuvenation center near me india,skin rejuvenation near me delhi,laser skin rejuvenation center near me delhi,skin rejuvenation in delhi 2024 - skin rejuvenation in delhi national capital territory of delhi,skin rejuvenation in delhi - best la'</li></ul> | | Debt Relief | <ul><li>'pay off debt,i want to pay off my debt,apply loan,loans to pay off debt,pay off credit card debt,help paying off debt'</li><li>'free government scheme to clear debt in my city,free government scheme to clear debt,free government scheme to clear debt new jersey,free government scheme to clear debt in martinsville,free government scheme to clear debt in new jersey,free government sc'</li></ul> | | Psoriasis Treatment | <ul><li>'symptoms of atopic dermatitis in toddlers,top-rated eczema creams for atopic dermatitis,pediatric dermatologist specializing in atopic dermatitis,advanced atopic dermatitis treatments 2024,latest biologic treatments for atopic dermatitis,natural'</li></ul> | | Cooktops | <ul><li>'gas cooktops kareli,black friday cooktop clearance sale,induction cooktops,smeg induction cooktop,appliances refrigerators,induction cooktop'</li><li>'30 inch gas range,gas cooktop,small stoves,professional induction cooktop,portable cooktops,gas cooktops'</li><li>'30 inch electric cooktop,gas cooktops faisalabad,electric cooktop,electric stoves,gas cooktops,{key6}'</li></ul> | | Mobility Scooters | <ul><li>'plataforma salvaescaleras precio,light weight folding mobility scooter,electric mobility scooters,disability,scooter electrico adulto,grants for people with disabilities raleigh'</li></ul> | | Metal Roofs | <ul><li>'roofing companies near me,local roofing companies in my waldorf,best roofing company nearby'</li></ul> | | Contractors | <ul><li>'local roofing companies in my fort lee,paving companies near me,couvreur rénovation,best roofing companies nearby,roofing companies near me,junk removal near me,junk removal near me'</li></ul> | | Home Loans | <ul><li>'chase bank usda loan,usda home loan lenders,quicken loans usda,usda loan realtor near me elizabethtown,apply for usda loan online,usda home loan pre approval'</li><li>'fastest online home loan approval,guaranteed online home loan approval,home improvement loans for poor credit,personal loan for home improvement,no credit check home improvement loans'</li></ul> | | Hearing Test | <ul><li>'hearing test,take hearing test,free hearing test,free hearing test on phone,free hearing loss phone,how to do hearing test at home'</li></ul> | | Delivery Jobs | <ul><li>'best food delivery service for my restaurant usa,car delivery service in the usa,usa best food delivery service for my restaurant,amazon delivery jobs,own car delivery jobs,delivery jobs'</li></ul> | | Window Blinds | <ul><li>'window shades and blinds,window shades and blinds молдова,window shades and blinds - blackout curtains,window shades and blinds md,window shades and blinds donduşeni,window shades and blinds дондюшанский район'</li></ul> | | Dentists | <ul><li>'landenberg: best dentist for seniors near you. see the list!'</li></ul> | | Depression Test | <ul><li>'take depression test - free depression test 2025,take depression test,take depression test - test to know if you have depression,start a depression test now 2025,start a depression test now,take depression test - depression test through pictures'</li><li>'addiction degrees online,take depression test,take depression quiz'</li></ul> | | Bank Owned Cars | <ul><li>'coches sin entrada y sin cuota final andalucía - compra de coche segunda mano españa,coches en oferta - coches sin entrada y sin cuota final ,coches en oferta - coches sin entrada y sin cuota final andalucía españa,vehiculo ocasion ,coches sin ent'</li></ul> | | Rehab Treatment Center | <ul><li>'therapy center nearby,therapy centers nearby,therapy centers supply,residental treatment centers near me in supply,residental treatment centers near me'</li></ul> | | Depression | <ul><li>'depression test for teens,depression test,take depression test,mental health self-assessment test,start a depression test now 2025'</li><li>'help with depression princeton,anxiety and depression treatment princeton,depression treatment near me,major depression treatment,treatment resistant depression'</li></ul> | | Mortgage Loans | <ul><li>'no down payment government home loans in navajo,no down payment government home loans navajo,no down payment government home loans usa,no down payment government home loans new mexico,no down payment government home loans in new mexico,no down payment gov'</li><li>'no down payment government home loans orlando,no down payment government home loans,no down payment government home loans in florida,no down payment government home loans in orlando,no down payment government home loans usa,no down payment government home'</li></ul> | | Military School | <ul><li>'online schools for military,military friendly online college pennsylvania,military friendly online college bryn mawr,pennsylvania online schools for military'</li></ul> | | Seizure Medication | <ul><li>'seizure surgery,seizure control methods,seizure treatment centers,seizure management,seizure prevention techniques,vagus nerve stimulation for seizures'</li></ul> | | Buy Now Pay Later | <ul><li>'fabric sofas,reclining sofas,affordable sofas,pay later,mid-century modern sofas,bohemian sofas'</li></ul> | | Planning Tools | <ul><li>'豊橋市 家族葬,豊橋市 家族葬式場,家族葬 豊橋市,家族葬サービス 豊橋市,豊橋市での家族葬'</li></ul> | | Reverse Mortgage | <ul><li>'viviendas a credito,prestamos sin checar buro de credito,best reverse mortgage lenders near me,best jumbo reverse mortgage lenders,best proprietary reverse mortgage lenders,home line of credit bad credit'</li><li>'best reverse mortgage lenders in rockville centre,best jumbo reverse mortgage lenders,fence companies in usa,best proprietary reverse mortgage lenders,best reverse mortgage lenders near me,como financiar una casa'</li></ul> | | Home Care | <ul><li>'caregiving jobs hiring nearby,home care near me,care homes near me - home care in abuja,home care in abuja fct,care homes near me fct,care homes near me (see home care in abuja)'</li></ul> | | Detox Diets | <ul><li>'good diet foods,lose 20 pounds in 1 week detox diet,detox diet meal plan,keto acv gummy,to lose belly fat,juice diet'</li><li>'juice cleanse detox,3 day fast weight loss,detox diet recipes,3 day cleanse,detox local san jose,3 day cleanse to lose belly fat'</li></ul> | | Diabetes | <ul><li>'medicaid diabetes prevention program,medicare diabetes program,blood sugar treatments on health insurance,diabetes diagnosis and treatment,continuous glucose monitoring devices,best medicine for diabetes'</li><li>'best blood sugar treatments near me,medicare for seniors,healthy food meal plans for diabetes,health insurance quotes with 100% diabetes treatment,diabetes diet for seniors with full medicare coverage,medicare competitive bidding diabetic supp'</li><li>'golo for life - best diet for diabetes 2,medicare changes for insulin cap,diabetes medications medicare,best blood sugar treatments near me,continuous glucose monitoring devices,diabetic supplies'</li></ul> | | Dating Apps | <ul><li>'en español matchmaking near me,páginas de citas totalmente gratuitas en español in campohermoso,páginas de citas totalmente gratuitas en español in campohermoso,español páginas de citas totalmente gratuitas near me in campohermoso'</li></ul> | | Rhinoplasty | <ul><li>'rhinoplasty near me,rhinoplasty in esquipulas,rhinoplasty cost en español,nose rhinoplasty nearby,orthopedic knee specialist near me'</li></ul> | | Car Cleaning Kit | <ul><li>"spray foam - spray foam in my city,spray foam lake charles,what's the best car cleaning products - spray foam,car cleaning near me,what's the best car cleaning products lake charles,what's the best car cleaning products - spray foam united states"</li></ul> | | Online Banking | <ul><li>'banque en ligne 150 euros offert,ouvrir un compte bancaire en ligne sans justificatif,ouvrir un compte bancaire en ligne en 5 minutes,banque en ligne 150 euros offert à beauvais,ouverture de compte avec argent offert,ouvrir compte bancaire'</li><li>'open bank account online free no deposit,open bank account online instantly with no deposit,open a bank account online no minimum balance,banks that don t require minimum deposit,open bank account online no deposit,open bank account online zero'</li></ul> | | European Cruises | <ul><li>'all inclusive iceland and greenland,iceland and greenland cruises for seniors - all inclusive iceland and greenland texas,iceland and greenland cruises for seniors texas'</li></ul> | | Digital Marketing Courses | <ul><li>'digital marketing degree course,digital marketing,digital marketing degree,digital marketing business,marketing digital y redes sociales,best way advertise a small business'</li><li>'curso de marketing digital torremolinos,aprender marketing digital,rock content,estudiar marketing digital torremolinos,platzi curso de marketing digital para negocios'</li></ul> | | Gutter Cleaning | <ul><li>'gutter cleaning near me pointe-a-la-croix,gutter and drain cleaning,local gutter cleaning companies,gutter cleaners nearby,gutter contractors in my area'</li><li>'local gutter cleaning companies,gutter cleaning near me pottstown,gutter and drain cleaning'</li></ul> | | Bad Credit Credit Cards | <ul><li>'fast cash direct deposit with bad credit in minutes,need a loan to pay all my debts,instant approval credit cards,same day loan approval regardless of credit,instant approval credit card,easy approval credit card'</li><li>'7k limit instant credit card online,credit card for bad credit,instant credit card approval,get cashback credit card,guaranteed credit card approval,bad credit credit card guaranteed approval'</li></ul> | | Jobs in USA | <ul><li>'empresas para reclutar personal - working visas in the usa ,empresas para reclutar personal - working visas in the usa,working visas in the usa - buscando un trabajo,plataformas de reclutamiento de personal,working visas in the usa - empresas para recluta'</li><li>'empresas para reclutar personal - contratacion de personal tuxtla gtz,empresas para reclutar personal - working visas in the usa,empresas para reclutar personal - working visas in the usa tuxtla gtz,trabajar en - work in usa 2024 california estados unidos'</li></ul> | | Roofing Services | <ul><li>'español roofer insulation company chamba cornelius,español roofer insulation company cornelius hiring,companies that hire insulation workers in cornelius,cornelius insulation local recruitment roofers,headache specialist in my area'</li></ul> | | Cruise Ship Jobs | <ul><li>'cruise ship jobs,jobs vacancies,apply for cruise ship jobs online for free - all cruise jobs,cruise ship jobs in bengaluru 2024,cruise ship jobs in bengaluru,apply for cruise ship jobs online for free karnataka'</li><li>'je recherche emploi - croisière en tout inclus ca,je recherche emploi sherbrooke,je recherche emploi québec,je recherche emploi - croisière en tout inclus,cruise ship jobs canada,je recherche emploi 2024'</li><li>'cruise ship jobs,jobs vacancies,work abroad on cruise ships new delhi,work abroad on cruise ships national capital territory of delhi,work abroad on cruise ships 2024,work abroad on cruise ships - all cruise jobs'</li></ul> | | Houses For Sale | <ul><li>'townhouses for sale,condos and townhomes for sale near me,new construction homes new jersey,townhomes for sale,small town homes for sale,townhouses to buy near me'</li></ul> | | Solar Battery | <ul><li>'solar power system with battery (see solar batteries in multan),solar power system with battery pk,solar battery cost multan,battery technology,solar battery cost pakistan - solar inverter and battery,solar power system with battery - solar power system w'</li></ul> | | Keto Diet | <ul><li>'best keto meal delivery,keto diet,keto diet for beginner,best keto diet for crazy weight loss,perfect keto diet for crazy weight loss,keto diet plan'</li><li>'7-day keto meal plan for beginners,keto diet meal plans,diet meal plans,keto meal plan,free keto diet plans,keto diet meal plan'</li></ul> | | Land For Sale | <ul><li>'land with abandoned house for sale near bathurst,land with abandoned house for sale near au'</li></ul> | | Dental Care | <ul><li>'low cost all on 4 dental implants,dentists nearby,find dentist near me,dentists for seniors near me scarborough,low cost dentist,denture teeth'</li><li>'dental care for seniors in toronto ontario - dental care for seniors in toronto,dental care for seniors in toronto,free dental care for seniors near me - free dental implants for seniors,free dental care for seniors near me - free dental implants for seni'</li></ul> | | Deck Installation | <ul><li>'deck renovation near me west palm beach (see deck installations in west palm beach),deck renovation near me west palm beach,deck renovation near me west palm beach florida'</li></ul> | | GMC Sierra | <ul><li>'2020 gmc sierra 1500 denali 4x4,gmc sierra denali 2500 price,gmc sierra 2500 for sale,gmc sierra 1500 for sale,2020 gmc sierra 1500 denali price,gmc sierra clearance sale'</li></ul> | | Homecare Jobs | <ul><li>'caregiver services near me nsw,home caretaker for elderly wolverhampton,home care agencies hiring,𝐃𝐀𝐘𝐂𝐀𝐑𝐄 𝐂𝐀𝐑𝐄𝐆�'</li><li>'home care agencies hiring,caretaker homes near me,caregivers needed nearby in phl,caregiver services near me wa,senior care home,in home care near me hiring'</li></ul> | | Senior Housing | <ul><li>'retirement homes in bewdley,senior housing costs bewdley,senior housing in bewdley,senior living apartments in bewdley,residential care homes in bewdley,{key6}'</li><li>'residential care homes in grampound,senior living apartments in grampound,senior housing costs grampound,senior housing in grampound,retirement homes in grampound,{key6}'</li></ul> | | Lead Generation | <ul><li>'outbound lead generation services'</li></ul> | | Call Center Jobs | <ul><li>'call center jobs without experience - work from home call center hiring 2024 bedford,call center jobs from home united kingdom - work from home call center hiring 2024,work from home call center jobs united kingdom - call center jobs from home united king'</li></ul> | | Online Doctor | <ul><li>'management services,pipa brasil,consultório para ginecologista,medico especialista em escoliose,medical practice ehr,especialista auditivo'</li></ul> | | Relationship Advice | <ul><li>'masaj salonu türkiye istanbul,aile ve ilişki terapisti türkiye,aile ve ilişki terapisti istanbul türkiye,masaj salonu türkiye - aile ve ilişki terapisti türkiye 2024,masaj salonu türkiye - aile ve ilişki terapisti türkiye tr,aile ve ilişki ter'</li></ul> | | Cleaning Services | <ul><li>'commercial and industrial cleaning,local office cleaning companies,office cleaning swindon,local commercial cleaning companies,deep office cleaning services,corporate cleaning company'</li><li>'office cleaning camden,commercial and industrial cleaning,local office cleaning companies,corporate cleaning company,local commercial cleaning companies,deep office cleaning services'</li><li>'deep office cleaning services,corporate cleaning company,local office cleaning companies,local commercial cleaning companies,commercial and industrial cleaning,office cleaning birmingham'</li></ul> | | Snow Removal Services | <ul><li>'snow removal companies in whiting,snow removal contractor in whiting,commercial snow removal contractors in whiting,snow removal quotes whiting,residential snow removal contractors in whiting,whiting snow removal services,snow removal in whiting'</li><li>'snow removal kansas city, snow removal near me, snow removal services, snow removal equipment'</li><li>'snow removal companies in new orleans,snow removal contractor in new orleans,commercial snow removal contractors in new orleans,snow removal quotes new orleans,residential snow removal contractors in new orleans,new orleans snow removal services,snow remo'</li></ul> | | Email Marketing Jobs | <ul><li>'email marketing jobs from home - email marketing jobs from home colombo,email marketing jobs from home sri lanka - email marketing jobs online,email marketing jobs from home sri lanka - email marketing jobs from home sri lanka 2024,email marketing jobs fr'</li><li>'email marketing jobs from home lebanon - email marketing jobs online 2024,email marketing jobs from home lebanon,email marketing jobs from home lebanon lb,email marketing jobs from home lebanon 2024,email marketing jobs from home lebanon - email marketing'</li></ul> | | Anime Figurines | <ul><li>'manila philippines,top tourist attractions in manila,things to do in manila,explore manila’s culture and history,travel guide for manila philippines'</li></ul> | | Laser Hair Removal | <ul><li>'laser hair removal for private parts in indiana - laser hair removal near me indianapolis us united states,laser hair removal near me indianapolis,laser hair removal for private parts in indiana - laser hair removal near me indianapolis us (see laser hair'</li></ul> | | Medicare | <ul><li>'medicare tablet,free tablet from government 2023,free tablet for medicare recipients,free tablet through medicare,medicare gov tablet,medicare tablet for seniors'</li></ul> | | Electric Bikes | <ul><li>'best ebike for senior citizens,best electric bike for seniors uk,best electric bike for seniors uk (see e-bikes in karachi),e-bikes in san jose,e-bikes in karachi,ebike for sale near me,kona electric near me 2024'</li></ul> | | Miscellaneous Jobs | <ul><li>'español slab pipe leak plumbing service in seattle,español slab leak plumbing service in seattle,español slab pipe leak plumber in seattle,24 7 slab leak plumbing near me,español slab leak broken pipe repair service in seattle'</li></ul> | | Waterproofing Services | <ul><li>'屋根修理業者 見積もり kaminagaya,外壁塗装 耐久性 kanagawa,屋根の修理 費用 kaminagaya,外壁塗装 おすすめ kanagawa,屋根リフォーム 安い kaminagaya,外壁塗料 選び方 kanagawa,屋根の修繕 業者 ka'</li></ul> | | Data Entry Jobs | <ul><li>'data entry work from home egypt,data entry work from home - data entry work from home egypt,data entry remote jobs no experience (see remote data entry jobs in ) - data entry work from home ,data entry work from home - work from home,data entry remote'</li><li>'data entry work from home romford,data entry work from home,data entry remote jobs no experience (see remote data entry jobs in romford) - data entry work from home romford united kingdom,data entry remote jobs no experience (see remote data entry jobs in'</li></ul> | | Envelope Stuffing Jobs | <ul><li>'dafür bezahlt werden, umschläge zu stopfen,amazon-umschlagfüller-job,arbeiten, um von zu hause aus geld zu verdienen,online-nebenjob,dafür bezahlt werden, umschläge von zu hause aus zu stopfen,stellenvermittlung'</li></ul> | | Luxury Cars | <ul><li>'jeep grand cherokee l availability 2025 (see new outlander in columbus),jeep grand cherokee l availability,2024 hyundai santa fe hybrid buy'</li></ul> | | Fertility Clinics | <ul><li>'fertility clinic near me tamil nadu,fertility center near me,fertility clinic near me namakkal'</li><li>'fertility clinic near me shillong,fertility center near me,fertility clinic near me shillong in'</li><li>'donors sperm - fertility center near me tamil nadu,donors sperm - fertility center near me coimbatore 2024,selling business near me india'</li></ul> | | Data Analytics Tools | <ul><li>'programa gestion mantenimiento,data analysis,digital marketing en español,data analytics,programa de gestion de flotas,base de datos en la nube gratis,sistema de gestion empresarial,software de gestion humana,analytics data'</li><li>'programa gestion mantenimiento,sistema de gestion empresarial,data analytics solutions'</li></ul> | | Heart Failure | <ul><li>'prescriptions that cause congestive heart failure and treatment 2024 us south carolina,prescriptions that cause congestive heart failure and treatment 2024,prescriptions that cause congestive heart failure and treatment us,prescriptions that cause congest'</li></ul> | | Investment Services | <ul><li>'tarek buys houses,sell house to flipper,fix and flip lender,local house flippers,fix and flip buyers,one day house flip'</li></ul> | | Auto Loans | <ul><li>'royal enfield bikes zero finance,royal enfield bikes in meerut city,royal enfield bikes near me,royal enfield bikes in meerut city,royal enfield bikes in up,{key6}'</li><li>'royal enfield bikes in up,royal enfield bikes in bareilly,royal enfield bikes zero finance,royal enfield bikes in bareilly,royal enfield bikes near me,{key6}'</li></ul> | | Machinery | <ul><li>'automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria bd,pharmaceutical packaging morocco - automatic bottle filling capping labeling and sealing machine mexico 2024 bd,automatic bottle filling capping'</li><li>'automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria,automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria kg kyrgyzstan,automatic bottle filling ca'</li><li>'micro rugged connector turkey,micro rugged connector turkey - micro rugged connector turkey dz,taiwan semiconductor - micro rugged connector turkey,semiconductor processing equipment near me belgium algiers,semiconductor processing equipment near me belgi'</li></ul> | | Security Guard Jobs | <ul><li>'security guard romford,security guard companies in romford,find the security guard company in romford,security officer romford,{key5},{key6}'</li><li>'security guard companies in big hill (greater bendigo),job apply,security guard employment,event security guard positions,security guard job openings,security guard companies in munich'</li></ul> | | Real Estate | <ul><li>'pousada em,imoveis caixa,imobiliarias em,software para condomínio,curso de corretor de imoveis,apartamento em ubatuba'</li></ul> | | Sclerotherapy | <ul><li>'español sclerotherapy cost near me in irving,free español sclerotherapy consultation near me in texas,español sclerotherapy cost near me in texas,spanish sclerotherapy cost near me in texas,$1 sclerotherapy consultation near me in texas,$1 sclerotherap'</li></ul> | | Attorneys | <ul><li>'family and law attorney,child custody lawyer,top criminal defense attorneys,family lawyers nearby,family lawyer,felony criminal lawyers nearby,themostsuccessful car accident lawyer near me,child custody attorney'</li><li>'top criminal defense attorneys,best car accident lawyer near me,legal help top legal attorney,felony criminal lawyers nearby,family and law attorney,divorce lawyers nearby,family law attorney,child custody attorney'</li><li>'family and law attorney,family court lawyer,child custody attorney,divorce lawyers nearby,themostsuccessful car accident lawyer near me,family lawyers nearby,felony criminal lawyers nearby,legal help top legal attorney'</li></ul> | | Nursing Degrees | <ul><li>'online sjuksköterskekurser,snabba sjuksköterskeprogram online,gratis utbildningar distans,få betalt för att studera sjuksköterskeutbildning,sjuksköterskeutbildningar online,kurser distans'</li></ul> | | Roof Repair | <ul><li>'aislamiento de renovación de techo - aislamiento de renovación de techo fresnillo zacatecas,aislamiento de renovación de techo - aislamiento de renovación de techo fresnillo zacatecas méxico 2024,reparacion de tejados y cubiertas méxico,reparar hume'</li></ul> | | Bed Bugs | <ul><li>'bug treatment,to kill bed bugs with heat,kill bedbugs with heat,best bed bug exterminator near me,killing bedbugs with heat,bed bug fumigation near me'</li></ul> | | Cancer Treatment | <ul><li>'immunotherapy for primary peritoneal cancer,immunotherapy for gist cancer,immunotherapy in elderly cancer patients,types of immunotherapy for leukemia,immunotherapy for primary liver cancer,the immunotherapy revolution'</li></ul> | | High School Diploma | <ul><li>'no cost online high school diploma programs,get a high school diploma online,get a high school diploma online arlington heights,high school diploma online for adults'</li></ul> | | Breast Lift | <ul><li>'breast lift doctors in north lauderdale,breast lift doctors cost in florida,natural breast lift in north lauderdale,breast uplift near me'</li></ul> | | Modular Homes | <ul><li>'top modular home builders davenport,new modular homes davenport,prefabricated home davenport'</li></ul> | | Auto Deals | <ul><li>'buy $800 car now pay later,buy $800 car now pay later la puente,buy $800 car now pay later in la puente,buy $800 car now pay later in us,buy $800 car now pay later la puente city,buy $800 car now pay later in la puente city'</li></ul> | | Allergy Relief | <ul><li>'allergy induced asthma'</li></ul> | | Label Printers | <ul><li>'portable label printer,brother label printer,shipping label printer,sticker printers,sticker printer,label maker'</li><li>'wireless label printer,printable labels,label printers,label printer,portable label printers,dymo label printer'</li></ul> | | Gaming Chairs | <ul><li>'best gaming chair,insane gaming chair (see gaming chairs in sacheon-si) - ergonomic gaming chair,insane gaming chair (see gaming chairs in sacheon-si) 2024,ergonomic gaming chair (see gaming chairs in sacheon-si),insane gaming chair (see gaming chairs in'</li></ul> | | Military Jobs | <ul><li>'jobs available in,veteran employment opportunities,empleo cerca de mi,army job in the united states without experience,military job in the united states for civilians,military job hiring'</li></ul> | | Bathroom Vanity | <ul><li>'cheapest bathroom vanities,bathroom shower tile,tienda de muebles baratos,vanity countertops,bathroom vanities and sinks,small bathroom design'</li></ul> | | Phone Storage | <ul><li>'backup data with cloud storage in usa,backup data with cloud storage,backup data with cloud storage massachusetts,backup data with cloud storage in chilmark,backup data with cloud storage in massachusetts,backup data with cloud storage chilmark'</li><li>'backup data with cloud storage ellon,backup data with cloud storage in ellon,backup data with cloud storage,backup data with cloud storage in gbr,backup data with cloud storage abd,backup data with cloud storage in abd'</li></ul> | | Lawn Care | <ul><li>'cleaning companies nearby,gardening services nearby,yard maintenance services,lawn mowing service,green lawn,starting small business'</li></ul> | | House Cleaning Jobs | <ul><li>'water cleanup rockwall,water cleanup company,water cleanup service,water damage cleanup,water cleanup near me,water damage cleanup near me'</li></ul> | | Pet Adoption | <ul><li>'pet adoption near me,small puppy shelter,canto pricing,buy a dog,adoption dogs,adoption nearby'</li></ul> | | Air Conditioning Services | <ul><li>'mitsubishi ductless air conditioner,air conditioner near me utica,affordable mini split ac san jose,venta de aire acondicionados,ductless ac,affordable mini split ac bergenfield'</li></ul> | | Water Damage | <ul><li>'electric test equipment south korea,water damage near me,basement leak repair 2024 - basement leak repair gwangju kr,basement leak repair 2024 - basement leak repair gwangju nam-gu,basement leak repair 2024 nam-gu gwangju,basement leak repair 2024 - basem'</li></ul> | | Plaque Psoriasis | <ul><li>'best cream for psoriasis over the counter,best body wash for psoriasis cape town,psoriasis removal western cape - how to get rid of plaque psoriasis 2024,psoriasis pictures western cape - psoriasis pictures 2024 cape town,psoriasis pictures - psoriasis pi'</li></ul> | | Senior Care | <ul><li>'home care services for seniors - assisted senior care near me in chennai and tamil nadu,care homes nearby in india,assisted senior care near me in chennai india,senior care homes nearby in chennai,home care workers - home care services for seniors india,h'</li></ul> | | Waterproofing Jobs | <ul><li>'waterproofing companies near me in washington,basement leak repair near me in washington,waterproof basement today in washington,basement water damage in washington,district of columbia water damage repair in washington'</li></ul> | | Spa Services | <ul><li>'sabana de algodon,king bed set,bedroom comforter sets,waterford,queen comforter sets clearance,sabana cama'</li></ul> | | Cyber Security Degrees | <ul><li>'cybersecurity courses,cybersecurity,psychologie ohne abitur,fernuni,cyber security degree near me,studium berufstätige'</li></ul> | | Weight Loss Clinical Trials | <ul><li>'participate in new studies for weight loss in orangevale,participate in new studies for weight loss in orangevale - how to qualify for an weight loss clinical trial near me,how to qualify for a weight loss clinical trial near me,participate in new studies'</li></ul> | | Lymphoma | <ul><li>'new lymphoma treatments montclair - new lymphoma treatments,new lymphoma treatments montclair - new lymphoma treatments united states,new lymphoma treatments montclair - new lymphoma treatments new jersey'</li></ul> | | Virtual Assistant Jobs | <ul><li>'freelance virtual assistant,virtual assistant jobs online,virtual assistant jobs remote,virtual assistant job,online assistant jobs,virtual assistant jobs near me'</li></ul> | | Online Dating | <ul><li>'meet single women india,video conferencing solutions,free dating site - meet single women india,free dating site telangana,free dating site (see online dating in hyderabad),free dating site 2024,meet single women india hyderabad,headache specialist in my'</li></ul> | | Veterinary Assistant School | <ul><li>'curso online auxiliar de veterinaria - curso online auxiliar de veterinaria maia porto,curso online auxiliar de veterinaria - curso online auxiliar de veterinaria porto 2024 maia,curso online auxiliar de veterinaria - curso online auxiliar de veterinaria'</li></ul> | | Cruises | <ul><li>'msc cruises senior discount glen allen,cruise deals 2024 glen allen,last minute cruise deals glen allen,viking cruises for single seniors glen allen,cruise travel insurance for seniors glen allen,caribbean cruise packages glen allen,all inclus'</li></ul> | | Car Values | <ul><li>'kelley blue book values used trucks 2018'</li></ul> | | Job Search | <ul><li>'home work job,food packaging near me,food packaging,packaging,warehousing,box packaging fillmore'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.765 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("economy/st_cr_keywords") # Run inference preds = model("best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 1 | 23.424 | 47 | | Label | Training Sample Count | |:-----------------------------|:----------------------| | Caregiver Jobs | 25 | | Janitorial Jobs | 1 | | Remote Jobs | 8 | | Personal Loans | 42 | | Stress Relief | 1 | | Fitness Coach | 28 | | Debt Consolidation | 9 | | Apartment Rentals | 31 | | Apartments | 67 | | Auto Accessories | 1 | | Apartments | 67 | | Outdoor Furniture | 3 | | Apartments | 67 | | Apartments | 67 | | Apartments | 67 | | Circuit Board Assembler | 12 | | Apartments | 67 | | Dental Implants | 41 | | Prostate Cancer Treatment | 6 | | Erectile Dysfunction | 7 | | Astrology | 24 | | Sofas | 5 | | Car Auctions | 6 | | Fitness Apps | 15 | | Home Furniture | 14 | | Disability Grants | 5 | | Apartments | 67 | | Online Courses | 2 | | Dental Implants | 41 | | Babysitting Jobs | 1 | | Wigs | 4 | | Car Transport Jobs | 9 | | Flooring Installation | 24 | | Car Tires | 2 | | Perfumes | 6 | | Packing Jobs | 2 | | Car Transport Jobs | 9 | | Waste Management Jobs | 2 | | Online Shopping Deals | 28 | | Hepatitis C | 4 | | E-Bikes | 3 | | Buy Now Pay Later Smartphone | 1 | | Remote Jobs | 8 | | Warehouse Jobs | 48 | | Blood Pressure Treatment | 1 | | Flooring Installation | 24 | | Disability Grants | 5 | | Sleep Apnea | 5 | | Personal Loans | 42 | | Flooring Installation | 24 | | Personal Loans | 42 | | Car Transport Jobs | 9 | | Online Jobs | 2 | | Plumbing Jobs | 4 | | Remote Jobs | 8 | | Fitness Coach | 28 | | Tuxedo Rental | 1 | | Daycare Jobs | 8 | | Personal Loans | 42 | | Fitness Coach | 28 | | Truck Driving Jobs | 5 | | Home Remodel | 6 | | Apartments | 67 | | Personal Loans | 42 | | Home Furniture | 14 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Pet Insurance | 1 | | Sleep Apnea | 5 | | Dental Implants | 41 | | Apartments | 67 | | Personal Loans | 42 | | Personal Loans | 42 | | Fitness Coach | 28 | | Online Shopping Deals | 28 | | Warehouse Jobs | 48 | | Home Improvement Loans | 3 | | Car Transport Jobs | 9 | | Medical Delivery | 3 | | Fitness Coach | 28 | | Astrology | 24 | | Cell Phones | 4 | | Office Cleaning Jobs | 8 | | Cosmetics | 1 | | Weight Loss | 9 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Tires | 3 | | Warehouse Jobs | 48 | | Apartment Rentals | 31 | | Plumbing Services | 1 | | Alzheimer's Treatment | 8 | | Outdoor Furniture | 3 | | Online Shopping Deals | 28 | | Used Cars | 4 | | Apartments | 67 | | Kidney Cancer | 13 | | Chronic Kidney Disease | 1 | | Circuit Board Assembler | 12 | | Oven | 1 | | Office Cleaning Jobs | 8 | | Apartment Rentals | 31 | | Apartments | 67 | | Employee Retention Credit | 2 | | Personal Loans | 42 | | FHA Loans | 9 | | Bedding | 1 | | Mental Health | 1 | | Senior Apartments | 7 | | Backyard Fence | 1 | | Construction Jobs | 4 | | Caregiver Jobs | 25 | | Prostate Cancer Treatment | 6 | | Fitness Coach | 28 | | Apartment Rentals | 31 | | Medication Delivery Jobs | 8 | | Fitness Coach | 28 | | Auto Insurance | 13 | | Sofas | 5 | | Apartment Rentals | 31 | | Life Insurance | 5 | | Astrology | 24 | | Forklift Jobs | 4 | | Home Care Jobs | 5 | | Disability Grants | 5 | | Flooring Installation | 24 | | Prostate Cancer Treatment | 6 | | Clinical Trials | 7 | | Astrology | 24 | | Credit Cards | 12 | | Astrology | 24 | | Apartment Rentals | 31 | | Used Cars | 4 | | Erectile Dysfunction | 7 | | Doors | 6 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Warehouse Jobs | 48 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Flooring Installation | 24 | | Cars For Sale | 2 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Dental Implants | 41 | | Kidney Cancer | 13 | | Cell Phones | 4 | | Truck Driving Jobs | 5 | | Fitness Coach | 28 | | Paid Sperm Donation | 2 | | Senior Living | 2 | | Doors | 6 | | Pro Bono Attorneys | 4 | | Weight Loss | 9 | | Cyber Security Jobs | 6 | | Dental Fillings | 1 | | Caregiver Jobs | 25 | | Waste Management Jobs | 2 | | Dog Food | 1 | | Car Auctions | 6 | | Fitness Coach | 28 | | Retirement Planning | 1 | | Urgent Care | 1 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Fitness Apps | 15 | | Prostate Cancer | 2 | | Wigs | 4 | | Apartment Rentals | 31 | | Medical | 6 | | Life Insurance | 5 | | Personal Loans | 42 | | Macular Degeneration | 2 | | Apartment Rentals | 31 | | Certificate of Deposit | 2 | | Perfumes | 6 | | Debt Consolidation | 9 | | Senior Cruises | 2 | | Warehouse Jobs | 48 | | Caregiver Services | 7 | | Skin Care | 3 | | Dental Implants | 41 | | Breast Cancer | 9 | | Online Shopping Deals | 28 | | Electrician Jobs | 5 | | Apartments | 67 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Digital Marketing Jobs | 1 | | Weight Loss | 9 | | Dental Implants | 41 | | Auto Insurance | 13 | | Services | 2 | | Online Shopping Deals | 28 | | Fitness Coach | 28 | | Belly Fat Removal | 6 | | Medical | 6 | | Wigs | 4 | | Circuit Board Assembler | 12 | | Sleep Apnea | 5 | | Caregiver Jobs | 25 | | Security Jobs | 3 | | Sofas | 5 | | Flooring Installation | 24 | | Malpractice Attorneys | 1 | | Apartments | 67 | | Ultherapy | 2 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Plumbing Jobs | 4 | | Online Shopping Deals | 28 | | Dental Implants | 41 | | Caregiver Jobs | 25 | | Apartments | 67 | | Personal Loans | 42 | | FHA Loans | 9 | | E-Bikes | 3 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Wrinkle Treatment | 1 | | Apartment Rentals | 31 | | Used Cars | 4 | | Caregiver Jobs | 25 | | Apartments | 67 | | Dental Implants | 41 | | Apartments | 67 | | Lawn Mowers | 1 | | Depression Treatment | 3 | | Electrician Jobs | 5 | | Warehouse Jobs | 48 | | Food Packing Jobs | 11 | | Home Improvement Loans | 3 | | Addiction Counseling | 1 | | Weight Loss | 9 | | Alzheimer's Treatment | 8 | | Auto Insurance | 13 | | Home Decor | 2 | | Daycare Jobs | 8 | | Fitness Apps | 15 | | Leadership | 2 | | HVAC Services | 3 | | Bipolar Treatment | 3 | | Mental Health Test | 1 | | Belly Fat Removal | 6 | | Dental Implant Grants | 2 | | Phone Service | 2 | | Remote Jobs | 8 | | Pest Control | 3 | | Caregiver Services | 7 | | Apartments | 67 | | Meal Delivery Services | 1 | | Fitness Apps | 15 | | Online Shopping Deals | 28 | | Cyber Security Jobs | 6 | | Cleaning Jobs | 2 | | Warehouse Jobs | 48 | | Online Shopping Deals | 28 | | Apartments | 67 | | Online Shopping Deals | 28 | | Online Shopping Deals | 28 | | Dental Implant Grants | 2 | | Bipolar Treatment | 3 | | Medical Alert Systems | 4 | | Coats | 1 | | Eyelid Surgery | 1 | | Hotels | 1 | | Online Shopping Deals | 28 | | Weight Loss | 9 | | Warehouse Jobs | 48 | | Warehouse Jobs | 48 | | Truck Driving Jobs | 5 | | Paid Clinical Trial | 1 | | Caregiver Services | 7 | | Kidney Cancer | 13 | | Massage Services | 11 | | Credit Cards | 12 | | Veterans Assistance | 1 | | Kidney Cancer | 13 | | Apartments | 67 | | Apartments | 67 | | Dental Implants | 41 | | Cloud Storage | 2 | | Medical | 6 | | Window Replacement | 6 | | Apartments | 67 | | Auto Insurance | 13 | | Electric Cars | 3 | | Life Insurance | 5 | | Apartment Rentals | 31 | | Astrology | 24 | | Dental Implants | 41 | | Dental Implants | 41 | | Belly Fat Removal | 6 | | Apartments | 67 | | Personal Loans | 42 | | Massage Services | 11 | | Leadership | 2 | | Personal Loans | 42 | | Wart Treatment | 1 | | Astrology | 24 | | Food Packing Jobs | 11 | | Fitness Coach | 28 | | Food Packing Jobs | 11 | | Packing Jobs | 2 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Home Remodel | 6 | | Energy Drinks | 2 | | Fitness Coach | 28 | | Flooring Installation | 24 | | Astrology | 24 | | Life Insurance | 5 | | Car Auctions | 6 | | Dental Implants | 41 | | Massage Services | 11 | | Apartments | 67 | | Personal Loans | 42 | | Dental Implants | 41 | | DIY Christmas Gifts | 1 | | Apartment Rentals | 31 | | Flooring Installation | 24 | | Massage Services | 11 | | Car Auctions | 6 | | Dental Implants | 41 | | Warehouse Jobs | 48 | | Apartments | 67 | | Auto Insurance | 13 | | Welding Jobs | 6 | | Caregiver Services | 7 | | Welding Jobs | 6 | | Online Jobs | 2 | | Home Furniture | 14 | | Eye Bag Surgery | 1 | | Kidney Cancer | 13 | | Kidney Cancer | 13 | | UTI Treatment | 1 | | Flooring Installation | 24 | | Fitness Apps | 15 | | Caregiver Jobs | 25 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Teeth Straightening | 1 | | Circuit Board Assembler | 12 | | Bipolar Disorder | 6 | | Prostate Cancer Treatment | 6 | | Erectile Dysfunction | 7 | | Astrology | 24 | | Apartments | 67 | | Apartments | 67 | | Hair Transplant | 3 | | Migraine Treatment | 14 | | Food Packing Jobs | 11 | | Senior Apartments | 7 | | Apartment Rentals | 31 | | Daycare Jobs | 8 | | Auto Insurance | 13 | | Hair Transplant | 3 | | Painting Jobs | 1 | | Erectile Dysfunction | 7 | | Paid Sperm Donation | 2 | | Fitness Coach | 28 | | Fitness Apps | 15 | | Weight Loss | 9 | | Warehouse Jobs | 48 | | Prostate Cancer Treatment | 6 | | Car Carpets | 2 | | Car Transport Jobs | 9 | | Botox | 3 | | Online Shopping Deals | 28 | | Medical Delivery | 3 | | Car Transport Jobs | 9 | | Medical | 6 | | Weight Loss | 9 | | Window Replacement | 6 | | Apartments | 67 | | Dental Implants | 41 | | Home Remodel | 6 | | Perfumes | 6 | | Astrology | 24 | | Senior Cruises | 2 | | Home Furniture | 14 | | Apartments | 67 | | Skin Rejuvenation | 1 | | Apartment Rentals | 31 | | Debt Relief | 2 | | Personal Loans | 42 | | Fitness Apps | 15 | | Breast Cancer | 9 | | Auto Insurance | 13 | | Psoriasis Treatment | 1 | | Cooktops | 5 | | Warehouse Jobs | 48 | | Circuit Board Assembler | 12 | | Fitness Apps | 15 | | Mobility Scooters | 1 | | Metal Roofs | 1 | | Warehouse Jobs | 48 | | Home Furniture | 14 | | Office Cleaning Jobs | 8 | | Botox | 3 | | Contractors | 1 | | Forklift Jobs | 4 | | Home Loans | 2 | | Construction Jobs | 4 | | Tires | 3 | | Cleaning Jobs | 2 | | Apartments | 67 | | Migraine Treatment | 14 | | Auto Insurance | 13 | | Apartments | 67 | | Hearing Test | 1 | | FHA Loans | 9 | | Delivery Jobs | 1 | | Daycare Jobs | 8 | | Online Shopping Deals | 28 | | Window Blinds | 1 | | Auto Insurance | 13 | | Dental Implants | 41 | | Apartments | 67 | | Home Furniture | 14 | | Dentists | 1 | | Depression Test | 2 | | Medication Delivery Jobs | 8 | | Flooring Installation | 24 | | Construction Jobs | 4 | | Warehouse Jobs | 48 | | Medical Delivery | 3 | | Apartment Rentals | 31 | | Warehouse Jobs | 48 | | Sleep Apnea | 5 | | Bank Owned Cars | 1 | | Medical Alert Systems | 4 | | Online Shopping Deals | 28 | | Apartments | 67 | | Debt Consolidation | 9 | | Bipolar Disorder | 6 | | Kidney Cancer | 13 | | Botox | 3 | | Rehab Treatment Center | 1 | | Depression | 2 | | FHA Loans | 9 | | Car Auctions | 6 | | Mortgage Loans | 2 | | Military School | 1 | | Window Replacement | 6 | | Dental Implants | 41 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Auto Insurance | 13 | | Online Shopping Deals | 28 | | Fitness Apps | 15 | | Seizure Medication | 1 | | Disability Grants | 5 | | Online Shopping Deals | 28 | | FHA Loans | 9 | | Debt Relief | 2 | | Buy Now Pay Later | 1 | | Planning Tools | 1 | | Reverse Mortgage | 2 | | Home Care | 1 | | Apartment Rentals | 31 | | Migraine Treatment | 14 | | Cell Phones | 4 | | Detox Diets | 2 | | Diabetes | 4 | | Fitness Apps | 15 | | Bipolar Disorder | 6 | | Astrology | 24 | | Home Care Jobs | 5 | | Dating Apps | 1 | | Apartments | 67 | | Warehouse Jobs | 48 | | Car Tires | 2 | | Personal Loans | 42 | | Rhinoplasty | 1 | | Dental Implants | 41 | | Skin Care | 3 | | Car Cleaning Kit | 1 | | Apartment Rentals | 31 | | Apartment Rentals | 31 | | Daycare Jobs | 8 | | Weight Loss | 9 | | Migraine Treatment | 14 | | Online Banking | 2 | | Breast Cancer | 9 | | Apartments | 67 | | Construction Jobs | 4 | | European Cruises | 1 | | Fitness Coach | 28 | | Home Furniture | 14 | | Hair Transplant | 3 | | Breast Cancer | 9 | | Daycare Jobs | 8 | | Fitness Coach | 28 | | Warehouse Jobs | 48 | | Online Shopping Deals | 28 | | Digital Marketing Courses | 2 | | Sofas | 5 | | Weight Loss | 9 | | Car Transport Jobs | 9 | | Online Shopping Deals | 28 | | Perfumes | 6 | | Dental Implants | 41 | | Credit Cards | 12 | | Window Replacement | 6 | | Food Packing Jobs | 11 | | Cyber Security Jobs | 6 | | Flooring Installation | 24 | | Breast Cancer | 9 | | Home Improvement Loans | 3 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Dental Implants | 41 | | Home Furniture | 14 | | Online Shopping Deals | 28 | | Credit Cards | 12 | | Kidney Cancer | 13 | | Fitness Coach | 28 | | Dental Implants | 41 | | Hepatitis C | 4 | | Medication Delivery Jobs | 8 | | Flooring Installation | 24 | | Circuit Board Assembler | 12 | | Credit Cards | 12 | | Gutter Cleaning | 2 | | Dental Implants | 41 | | Personal Loans | 42 | | Bad Credit Credit Cards | 2 | | Jobs in USA | 2 | | Roofing Services | 1 | | Astrology | 24 | | Warehouse Jobs | 48 | | Dental Implants | 41 | | Dental Implants | 41 | | Senior Apartments | 7 | | Apartments | 67 | | Home Care Jobs | 5 | | Cruise Ship Jobs | 4 | | Houses For Sale | 1 | | Doors | 6 | | Breast Cancer | 9 | | Solar Battery | 1 | | Keto Diet | 2 | | Warehouse Jobs | 48 | | Flooring Installation | 24 | | Personal Loans | 42 | | Clinical Trials | 7 | | Warehouse Jobs | 48 | | Land For Sale | 1 | | Fitness Coach | 28 | | Hepatitis C | 4 | | Food Packing Jobs | 11 | | Cars For Sale | 2 | | FHA Loans | 9 | | Office Cleaning Jobs | 8 | | Apartments | 67 | | Dental Care | 2 | | Pest Control | 3 | | Home Furniture | 14 | | Tires | 3 | | Personal Loans | 42 | | Deck Installation | 1 | | Alzheimer's Treatment | 8 | | GMC Sierra | 1 | | Homecare Jobs | 2 | | Apartment Rentals | 31 | | Pro Bono Attorneys | 4 | | Online Shopping Deals | 28 | | Medical | 6 | | Plumbing Jobs | 4 | | Senior Housing | 2 | | Home Care Jobs | 5 | | Kidney Cancer | 13 | | Warehouse Jobs | 48 | | HVAC Services | 3 | | Lead Generation | 1 | | Debt Consolidation | 9 | | Call Center Jobs | 1 | | Doors | 6 | | Pro Bono Attorneys | 4 | | Depression Treatment | 3 | | Debt Consolidation | 9 | | Fitness Coach | 28 | | Personal Loans | 42 | | Online Doctor | 1 | | Bipolar Disorder | 6 | | Daycare Jobs | 8 | | Migraine Treatment | 14 | | Clinical Trials | 7 | | Forklift Jobs | 4 | | Personal Loans | 42 | | Personal Loans | 42 | | Online Shopping Deals | 28 | | Relationship Advice | 1 | | Personal Loans | 42 | | Diabetes | 4 | | Dental Implants | 41 | | Cloud Storage | 2 | | Warehouse Jobs | 48 | | Debt Consolidation | 9 | | Belly Fat Removal | 6 | | Debt Consolidation | 9 | | Apartments | 67 | | Home Furniture | 14 | | Apartment Rentals | 31 | | Welding Jobs | 6 | | Flooring Installation | 24 | | Home Furniture | 14 | | Cleaning Services | 7 | | Wigs | 4 | | Erectile Dysfunction | 7 | | Snow Removal Services | 3 | | Apartments | 67 | | Depression Test | 2 | | Email Marketing Jobs | 2 | | Flooring Installation | 24 | | Apartment Rentals | 31 | | Remote Jobs | 8 | | Kidney Cancer | 13 | | Warehouse Jobs | 48 | | Dental Care | 2 | | Anime Figurines | 1 | | Online Banking | 2 | | Auto Insurance | 13 | | Truck Driving Jobs | 5 | | Breast Cancer | 9 | | Office Cleaning Jobs | 8 | | Cruise Ship Jobs | 4 | | Bipolar Disorder | 6 | | Astrology | 24 | | Life Insurance | 5 | | Personal Loans | 42 | | Online Shopping Deals | 28 | | Cyber Security Jobs | 6 | | Caregiver Jobs | 25 | | Caregiver Jobs | 25 | | Flooring Installation | 24 | | Office Cleaning Jobs | 8 | | Cruise Ship Jobs | 4 | | Cleaning Services | 7 | | Belly Fat Removal | 6 | | Apartments | 67 | | Senior Apartments | 7 | | Personal Loans | 42 | | Laser Hair Removal | 1 | | Warehouse Jobs | 48 | | Medicare | 1 | | Caregiver Jobs | 25 | | Electric Bikes | 1 | | Prostate Cancer Treatment | 6 | | Apartments | 67 | | Online Shopping Deals | 28 | | Dental Implants | 41 | | Senior Apartments | 7 | | Digital Marketing Courses | 2 | | Car Auctions | 6 | | Migraine Treatment | 14 | | Warehouse Jobs | 48 | | Apartments | 67 | | Miscellaneous Jobs | 1 | | Apartments | 67 | | Depression Treatment | 3 | | Homecare Jobs | 2 | | Forklift Jobs | 4 | | Waterproofing Services | 1 | | Data Entry Jobs | 2 | | Prostate Cancer | 2 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Erectile Dysfunction | 7 | | Flooring Installation | 24 | | Personal Loans | 42 | | Macular Degeneration | 2 | | Online Shopping Deals | 28 | | Astrology | 24 | | Apartments | 67 | | Depression | 2 | | Home Furniture | 14 | | Warehouse Jobs | 48 | | Envelope Stuffing Jobs | 1 | | Dental Implants | 41 | | Home Care Jobs | 5 | | Luxury Cars | 1 | | Fertility Clinics | 4 | | Warehouse Jobs | 48 | | Migraine Treatment | 14 | | Kidney Cancer | 13 | | Ultherapy | 2 | | Data Analytics Tools | 2 | | Bipolar Treatment | 3 | | Energy Drinks | 2 | | Warehouse Jobs | 48 | | Snow Removal Services | 3 | | Personal Loans | 42 | | Medication Delivery Jobs | 8 | | Apartment Rentals | 31 | | Data Analytics Tools | 2 | | Alzheimer's Treatment | 8 | | Cyber Security Jobs | 6 | | Heart Failure | 1 | | Investment Services | 1 | | Auto Loans | 2 | | Pest Control | 3 | | Gutter Cleaning | 2 | | Keto Diet | 2 | | Security Jobs | 3 | | Home Loans | 2 | | Email Marketing Jobs | 2 | | Apartments | 67 | | Machinery | 3 | | Medical | 6 | | Online Courses | 2 | | Apartments | 67 | | Dental Implants | 41 | | Fitness Coach | 28 | | Services | 2 | | Apartments | 67 | | Machinery | 3 | | FHA Loans | 9 | | Massage Services | 11 | | Senior Apartments | 7 | | Online Shopping Deals | 28 | | Office Cleaning Jobs | 8 | | Breast Cancer | 9 | | Fertility Clinics | 4 | | Kidney Cancer | 13 | | Medication Delivery Jobs | 8 | | Fitness Coach | 28 | | Apartments | 67 | | Caregiver Jobs | 25 | | Fertility Clinics | 4 | | Online Shopping Deals | 28 | | Security Guard Jobs | 2 | | Home Decor | 2 | | Data Entry Jobs | 2 | | Credit Cards | 12 | | Personal Loans | 42 | | Jobs in USA | 2 | | Bipolar Disorder | 6 | | Real Estate | 1 | | Fitness Coach | 28 | | Apartments | 67 | | Outdoor Furniture | 3 | | Personal Loans | 42 | | Astrology | 24 | | Sclerotherapy | 1 | | Warehouse Jobs | 48 | | Attorneys | 4 | | Apartments | 67 | | Caregiver Jobs | 25 | | Reverse Mortgage | 2 | | Warehouse Jobs | 48 | | Nursing Degrees | 1 | | Apartment Rentals | 31 | | Fertility Clinics | 4 | | Apartments | 67 | | Roof Repair | 1 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Pro Bono Attorneys | 4 | | Warehouse Jobs | 48 | | Bed Bugs | 1 | | Warehouse Jobs | 48 | | Apartments | 67 | | Home Remodel | 6 | | Doors | 6 | | Auto Insurance | 13 | | Warehouse Jobs | 48 | | Cleaning Services | 7 | | Cleaning Services | 7 | | Home Remodel | 6 | | Electrician Jobs | 5 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Fitness Apps | 15 | | Truck Driving Jobs | 5 | | Food Packing Jobs | 11 | | Apartment Rentals | 31 | | Caregiver Services | 7 | | Personal Loans | 42 | | Apartments | 67 | | Online Shopping Deals | 28 | | Home Remodel | 6 | | Caregiver Jobs | 25 | | Security Jobs | 3 | | Attorneys | 4 | | Warehouse Jobs | 48 | | Cancer Treatment | 1 | | Fitness Coach | 28 | | Dental Implants | 41 | | Massage Services | 11 | | High School Diploma | 1 | | Cleaning Services | 7 | | Breast Lift | 1 | | Fitness Coach | 28 | | Dental Implants | 41 | | Mortgage Loans | 2 | | Astrology | 24 | | Window Replacement | 6 | | Migraine Treatment | 14 | | Modular Homes | 1 | | Cooktops | 5 | | Used Cars | 4 | | Apartment Rentals | 31 | | Auto Deals | 1 | | Apartment Rentals | 31 | | Electrician Jobs | 5 | | Flooring Installation | 24 | | Warehouse Jobs | 48 | | Car Carpets | 2 | | Dental Implants | 41 | | Cooktops | 5 | | Warehouse Jobs | 48 | | Allergy Relief | 1 | | Welding Jobs | 6 | | Cooktops | 5 | | Personal Loans | 42 | | Dental Implants | 41 | | Astrology | 24 | | Diabetes | 4 | | Massage Services | 11 | | Sleep Apnea | 5 | | Label Printers | 2 | | Caregiver Jobs | 25 | | Dental Implants | 41 | | Caregiver Services | 7 | | Personal Loans | 42 | | Bad Credit Credit Cards | 2 | | Cyber Security Jobs | 6 | | Debt Consolidation | 9 | | Astrology | 24 | | FHA Loans | 9 | | Clinical Trials | 7 | | FHA Loans | 9 | | Employee Retention Credit | 2 | | Plumbing Jobs | 4 | | Sofas | 5 | | Warehouse Jobs | 48 | | Clinical Trials | 7 | | Electric Cars | 3 | | Fitness Apps | 15 | | Debt Consolidation | 9 | | Medication Delivery Jobs | 8 | | Gaming Chairs | 1 | | Apartments | 67 | | Food Packing Jobs | 11 | | Machinery | 3 | | Migraine Treatment | 14 | | Electrician Jobs | 5 | | Flooring Installation | 24 | | Food Packing Jobs | 11 | | Senior Housing | 2 | | Alzheimer's Treatment | 8 | | Dental Implants | 41 | | Cleaning Services | 7 | | Military Jobs | 1 | | Senior Living | 2 | | Caregiver Jobs | 25 | | Circuit Board Assembler | 12 | | Astrology | 24 | | Flooring Installation | 24 | | Bathroom Vanity | 1 | | Apartments | 67 | | Apartments | 67 | | Fitness Coach | 28 | | Phone Storage | 2 | | E-Bikes | 3 | | Apartment Rentals | 31 | | Apartments | 67 | | Lawn Care | 1 | | Fitness Coach | 28 | | Warehouse Jobs | 48 | | Apartments | 67 | | House Cleaning Jobs | 1 | | Pet Adoption | 1 | | Skin Care | 3 | | Cruise Ship Jobs | 4 | | Warehouse Jobs | 48 | | Apartments | 67 | | Massage Services | 11 | | Dental Implants | 41 | | Auto Insurance | 13 | | Security Guard Jobs | 2 | | Clinical Trials | 7 | | Astrology | 24 | | Apartments | 67 | | Clinical Trials | 7 | | Attorneys | 4 | | Air Conditioning Services | 1 | | Remote Jobs | 8 | | Astrology | 24 | | Senior Apartments | 7 | | Caregiver Jobs | 25 | | Water Damage | 1 | | Plaque Psoriasis | 1 | | Belly Fat Removal | 6 | | Hepatitis C | 4 | | Window Replacement | 6 | | Caregiver Jobs | 25 | | Apartments | 67 | | Senior Care | 1 | | Waterproofing Jobs | 1 | | Astrology | 24 | | Breast Cancer | 9 | | Warehouse Jobs | 48 | | Circuit Board Assembler | 12 | | Fitness Apps | 15 | | Flooring Installation | 24 | | Personal Loans | 42 | | Fitness Coach | 28 | | Alzheimer's Treatment | 8 | | Detox Diets | 2 | | Dental Implants | 41 | | Medical Alert Systems | 4 | | Dental Implants | 41 | | Fitness Coach | 28 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Erectile Dysfunction | 7 | | Spa Services | 1 | | Electric Cars | 3 | | Fitness Apps | 15 | | Apartment Rentals | 31 | | Food Packing Jobs | 11 | | Online Shopping Deals | 28 | | Doors | 6 | | Warehouse Jobs | 48 | | Credit Cards | 12 | | Disability Grants | 5 | | Food Packing Jobs | 11 | | Home Furniture | 14 | | Remote Jobs | 8 | | Attorneys | 4 | | Cyber Security Degrees | 1 | | Fitness Coach | 28 | | Cooktops | 5 | | Massage Services | 11 | | Astrology | 24 | | Massage Services | 11 | | Caregiver Services | 7 | | Credit Cards | 12 | | Weight Loss Clinical Trials | 1 | | Lymphoma | 1 | | Flooring Installation | 24 | | Virtual Assistant Jobs | 1 | | Office Cleaning Jobs | 8 | | Remote Jobs | 8 | | Cell Phones | 4 | | Massage Services | 11 | | Flooring Installation | 24 | | Car Transport Jobs | 9 | | HVAC Services | 3 | | Apartments | 67 | | Kidney Cancer | 13 | | Car Transport Jobs | 9 | | Astrology | 24 | | Personal Loans | 42 | | Daycare Jobs | 8 | | Phone Storage | 2 | | Medication Delivery Jobs | 8 | | Apartments | 67 | | Dental Implants | 41 | | Apartments | 67 | | Label Printers | 2 | | Credit Cards | 12 | | Medication Delivery Jobs | 8 | | Apartments | 67 | | Snow Removal Services | 3 | | Apartments | 67 | | Apartment Rentals | 31 | | Caregiver Jobs | 25 | | Alzheimer's Treatment | 8 | | Medical Alert Systems | 4 | | Welding Jobs | 6 | | Online Dating | 1 | | Personal Loans | 42 | | Apartments | 67 | | Personal Loans | 42 | | Alzheimer's Treatment | 8 | | Circuit Board Assembler | 12 | | Perfumes | 6 | | Phone Service | 2 | | Warehouse Jobs | 48 | | Welding Jobs | 6 | | Apartments | 67 | | Migraine Treatment | 14 | | Veterinary Assistant School | 1 | | Flooring Installation | 24 | | Cruises | 1 | | Perfumes | 6 | | Warehouse Jobs | 48 | | Car Values | 1 | | Job Search | 1 | | Certificate of Deposit | 2 | | Home Furniture | 14 | | Cleaning Services | 7 | | Diabetes | 4 | | Fitness Apps | 15 | | Warehouse Jobs | 48 | | Auto Loans | 2 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0000 | 1 | 0.149 | - | | 0.0008 | 50 | 0.051 | - | | 0.0016 | 100 | 0.0437 | - | | 0.0024 | 150 | 0.0506 | - | | 0.0033 | 200 | 0.0392 | - | | 0.0041 | 250 | 0.0253 | - | | 0.0049 | 300 | 0.0226 | - | | 0.0057 | 350 | 0.0184 | - | | 0.0065 | 400 | 0.0255 | - | | 0.0073 | 450 | 0.0193 | - | | 0.0081 | 500 | 0.0195 | - | | 0.0090 | 550 | 0.0136 | - | | 0.0098 | 600 | 0.017 | - | | 0.0106 | 650 | 0.0116 | - | | 0.0114 | 700 | 0.0153 | - | | 0.0122 | 750 | 0.0149 | - | | 0.0130 | 800 | 0.007 | - | | 0.0139 | 850 | 0.0135 | - | | 0.0147 | 900 | 0.0112 | - | | 0.0155 | 950 | 0.0127 | - | | 0.0163 | 1000 | 0.0147 | - | | 0.0171 | 1050 | 0.0091 | - | | 0.0179 | 1100 | 0.0132 | - | | 0.0187 | 1150 | 0.0149 | - | | 0.0196 | 1200 | 0.0095 | - | | 0.0204 | 1250 | 0.0061 | - | | 0.0212 | 1300 | 0.0081 | - | | 0.0220 | 1350 | 0.0069 | - | | 0.0228 | 1400 | 0.007 | - | | 0.0236 | 1450 | 0.0079 | - | | 0.0244 | 1500 | 0.0075 | - | | 0.0253 | 1550 | 0.0095 | - | | 0.0261 | 1600 | 0.0066 | - | | 0.0269 | 1650 | 0.0077 | - | | 0.0277 | 1700 | 0.0074 | - | | 0.0285 | 1750 | 0.005 | - | | 0.0293 | 1800 | 0.0072 | - | | 0.0301 | 1850 | 0.0064 | - | | 0.0310 | 1900 | 0.0074 | - | | 0.0318 | 1950 | 0.007 | - | | 0.0326 | 2000 | 0.0074 | - | | 0.0334 | 2050 | 0.0039 | - | | 0.0342 | 2100 | 0.0029 | - | | 0.0350 | 2150 | 0.0059 | - | | 0.0359 | 2200 | 0.006 | - | | 0.0367 | 2250 | 0.0043 | - | | 0.0375 | 2300 | 0.0045 | - | | 0.0383 | 2350 | 0.0022 | - | | 0.0391 | 2400 | 0.0072 | - | | 0.0399 | 2450 | 0.0032 | - | | 0.0407 | 2500 | 0.006 | - | | 0.0416 | 2550 | 0.0066 | - | | 0.0424 | 2600 | 0.0059 | - | | 0.0432 | 2650 | 0.0069 | - | | 0.0440 | 2700 | 0.0076 | - | | 0.0448 | 2750 | 0.0042 | - | | 0.0456 | 2800 | 0.0068 | - | | 0.0464 | 2850 | 0.007 | - | | 0.0473 | 2900 | 0.0039 | - | | 0.0481 | 2950 | 0.0057 | - | | 0.0489 | 3000 | 0.0032 | - | | 0.0497 | 3050 | 0.0076 | - | | 0.0505 | 3100 | 0.0049 | - | | 0.0513 | 3150 | 0.0031 | - | | 0.0522 | 3200 | 0.0021 | - | | 0.0530 | 3250 | 0.0029 | - | | 0.0538 | 3300 | 0.0078 | - | | 0.0546 | 3350 | 0.0048 | - | | 0.0554 | 3400 | 0.0062 | - | | 0.0562 | 3450 | 0.0041 | - | | 0.0570 | 3500 | 0.0014 | - | | 0.0579 | 3550 | 0.0022 | - | | 0.0587 | 3600 | 0.0015 | - | | 0.0595 | 3650 | 0.0011 | - | | 0.0603 | 3700 | 0.001 | - | | 0.0611 | 3750 | 0.0051 | - | | 0.0619 | 3800 | 0.0022 | - | | 0.0627 | 3850 | 0.0041 | - | | 0.0636 | 3900 | 0.0037 | - | | 0.0644 | 3950 | 0.0018 | - | | 0.0652 | 4000 | 0.0031 | - | | 0.0660 | 4050 | 0.0017 | - | | 0.0668 | 4100 | 0.0028 | - | | 0.0676 | 4150 | 0.0038 | - | | 0.0684 | 4200 | 0.0023 | - | | 0.0693 | 4250 | 0.0025 | - | | 0.0701 | 4300 | 0.0033 | - | | 0.0709 | 4350 | 0.0041 | - | | 0.0717 | 4400 | 0.0022 | - | | 0.0725 | 4450 | 0.002 | - | | 0.0733 | 4500 | 0.0048 | - | | 0.0742 | 4550 | 0.0011 | - | | 0.0750 | 4600 | 0.0038 | - | | 0.0758 | 4650 | 0.0048 | - | | 0.0766 | 4700 | 0.0052 | - | | 0.0774 | 4750 | 0.0025 | - | | 0.0782 | 4800 | 0.0011 | - | | 0.0790 | 4850 | 0.0028 | - | | 0.0799 | 4900 | 0.0038 | - | | 0.0807 | 4950 | 0.0012 | - | | 0.0815 | 5000 | 0.0038 | - | | 0.0823 | 5050 | 0.0014 | - | | 0.0831 | 5100 | 0.0011 | - | | 0.0839 | 5150 | 0.0053 | - | | 0.0847 | 5200 | 0.0032 | - | | 0.0856 | 5250 | 0.0025 | - | | 0.0864 | 5300 | 0.0025 | - | | 0.0872 | 5350 | 0.0034 | - | | 0.0880 | 5400 | 0.0053 | - | | 0.0888 | 5450 | 0.0016 | - | | 0.0896 | 5500 | 0.0021 | - | | 0.0904 | 5550 | 0.002 | - | | 0.0913 | 5600 | 0.0032 | - | | 0.0921 | 5650 | 0.0014 | - | | 0.0929 | 5700 | 0.0026 | - | | 0.0937 | 5750 | 0.0042 | - | | 0.0945 | 5800 | 0.0033 | - | | 0.0953 | 5850 | 0.0031 | - | | 0.0962 | 5900 | 0.0008 | - | | 0.0970 | 5950 | 0.004 | - | | 0.0978 | 6000 | 0.001 | - | | 0.0986 | 6050 | 0.0021 | - | | 0.0994 | 6100 | 0.0035 | - | | 0.1002 | 6150 | 0.0033 | - | | 0.1010 | 6200 | 0.0011 | - | | 0.1019 | 6250 | 0.0023 | - | | 0.1027 | 6300 | 0.0017 | - | | 0.1035 | 6350 | 0.0008 | - | | 0.1043 | 6400 | 0.0018 | - | | 0.1051 | 6450 | 0.0033 | - | | 0.1059 | 6500 | 0.0066 | - | | 0.1067 | 6550 | 0.0014 | - | | 0.1076 | 6600 | 0.0009 | - | | 0.1084 | 6650 | 0.0014 | - | | 0.1092 | 6700 | 0.002 | - | | 0.1100 | 6750 | 0.0024 | - | | 0.1108 | 6800 | 0.0046 | - | | 0.1116 | 6850 | 0.0019 | - | | 0.1125 | 6900 | 0.0045 | - | | 0.1133 | 6950 | 0.0009 | - | | 0.1141 | 7000 | 0.001 | - | | 0.1149 | 7050 | 0.0018 | - | | 0.1157 | 7100 | 0.002 | - | | 0.1165 | 7150 | 0.0022 | - | | 0.1173 | 7200 | 0.0037 | - | | 0.1182 | 7250 | 0.0037 | - | | 0.1190 | 7300 | 0.0024 | - | | 0.1198 | 7350 | 0.0025 | - | | 0.1206 | 7400 | 0.0007 | - | | 0.1214 | 7450 | 0.0018 | - | | 0.1222 | 7500 | 0.0043 | - | | 0.1230 | 7550 | 0.0006 | - | | 0.1239 | 7600 | 0.0008 | - | | 0.1247 | 7650 | 0.0021 | - | | 0.1255 | 7700 | 0.002 | - | | 0.1263 | 7750 | 0.0033 | - | | 0.1271 | 7800 | 0.0017 | - | | 0.1279 | 7850 | 0.0007 | - | | 0.1287 | 7900 | 0.0027 | - | | 0.1296 | 7950 | 0.001 | - | | 0.1304 | 8000 | 0.0005 | - | | 0.1312 | 8050 | 0.0008 | - | | 0.1320 | 8100 | 0.0019 | - | | 0.1328 | 8150 | 0.0018 | - | | 0.1336 | 8200 | 0.0014 | - | | 0.1345 | 8250 | 0.0007 | - | | 0.1353 | 8300 | 0.0005 | - | | 0.1361 | 8350 | 0.0006 | - | | 0.1369 | 8400 | 0.0016 | - | | 0.1377 | 8450 | 0.0029 | - | | 0.1385 | 8500 | 0.0011 | - | | 0.1393 | 8550 | 0.0034 | - | | 0.1402 | 8600 | 0.0014 | - | | 0.1410 | 8650 | 0.0006 | - | | 0.1418 | 8700 | 0.0034 | - | | 0.1426 | 8750 | 0.0031 | - | | 0.1434 | 8800 | 0.0031 | - | | 0.1442 | 8850 | 0.0008 | - | | 0.1450 | 8900 | 0.001 | - | | 0.1459 | 8950 | 0.0031 | - | | 0.1467 | 9000 | 0.0024 | - | | 0.1475 | 9050 | 0.0033 | - | | 0.1483 | 9100 | 0.0028 | - | | 0.1491 | 9150 | 0.0013 | - | | 0.1499 | 9200 | 0.0021 | - | | 0.1507 | 9250 | 0.0022 | - | | 0.1516 | 9300 | 0.0018 | - | | 0.1524 | 9350 | 0.0005 | - | | 0.1532 | 9400 | 0.0013 | - | | 0.1540 | 9450 | 0.0005 | - | | 0.1548 | 9500 | 0.0006 | - | | 0.1556 | 9550 | 0.0023 | - | | 0.1565 | 9600 | 0.0006 | - | | 0.1573 | 9650 | 0.0011 | - | | 0.1581 | 9700 | 0.0038 | - | | 0.1589 | 9750 | 0.0018 | - | | 0.1597 | 9800 | 0.0051 | - | | 0.1605 | 9850 | 0.003 | - | | 0.1613 | 9900 | 0.003 | - | | 0.1622 | 9950 | 0.0006 | - | | 0.1630 | 10000 | 0.0012 | - | | 0.1638 | 10050 | 0.003 | - | | 0.1646 | 10100 | 0.0015 | - | | 0.1654 | 10150 | 0.0023 | - | | 0.1662 | 10200 | 0.0029 | - | | 0.1670 | 10250 | 0.002 | - | | 0.1679 | 10300 | 0.0016 | - | | 0.1687 | 10350 | 0.0018 | - | | 0.1695 | 10400 | 0.0011 | - | | 0.1703 | 10450 | 0.0022 | - | | 0.1711 | 10500 | 0.0011 | - | | 0.1719 | 10550 | 0.0006 | - | | 0.1728 | 10600 | 0.0013 | - | | 0.1736 | 10650 | 0.0021 | - | | 0.1744 | 10700 | 0.0027 | - | | 0.1752 | 10750 | 0.002 | - | | 0.1760 | 10800 | 0.0009 | - | | 0.1768 | 10850 | 0.0018 | - | | 0.1776 | 10900 | 0.0018 | - | | 0.1785 | 10950 | 0.0032 | - | | 0.1793 | 11000 | 0.0008 | - | | 0.1801 | 11050 | 0.0009 | - | | 0.1809 | 11100 | 0.0006 | - | | 0.1817 | 11150 | 0.0014 | - | | 0.1825 | 11200 | 0.0025 | - | | 0.1833 | 11250 | 0.0013 | - | | 0.1842 | 11300 | 0.0004 | - | | 0.1850 | 11350 | 0.0022 | - | | 0.1858 | 11400 | 0.0048 | - | | 0.1866 | 11450 | 0.0013 | - | | 0.1874 | 11500 | 0.0009 | - | | 0.1882 | 11550 | 0.0006 | - | | 0.1890 | 11600 | 0.0029 | - | | 0.1899 | 11650 | 0.0018 | - | | 0.1907 | 11700 | 0.0017 | - | | 0.1915 | 11750 | 0.0009 | - | | 0.1923 | 11800 | 0.0006 | - | | 0.1931 | 11850 | 0.0022 | - | | 0.1939 | 11900 | 0.0019 | - | | 0.1948 | 11950 | 0.0029 | - | | 0.1956 | 12000 | 0.0005 | - | | 0.1964 | 12050 | 0.0006 | - | | 0.1972 | 12100 | 0.0009 | - | | 0.1980 | 12150 | 0.0005 | - | | 0.1988 | 12200 | 0.0025 | - | | 0.1996 | 12250 | 0.0018 | - | | 0.2005 | 12300 | 0.0012 | - | | 0.2013 | 12350 | 0.0008 | - | | 0.2021 | 12400 | 0.0018 | - | | 0.2029 | 12450 | 0.0019 | - | | 0.2037 | 12500 | 0.0025 | - | | 0.2045 | 12550 | 0.0022 | - | | 0.2053 | 12600 | 0.0019 | - | | 0.2062 | 12650 | 0.0005 | - | | 0.2070 | 12700 | 0.0017 | - | | 0.2078 | 12750 | 0.0022 | - | | 0.2086 | 12800 | 0.0012 | - | | 0.2094 | 12850 | 0.003 | - | | 0.2102 | 12900 | 0.0039 | - | | 0.2110 | 12950 | 0.002 | - | | 0.2119 | 13000 | 0.0004 | - | | 0.2127 | 13050 | 0.0007 | - | | 0.2135 | 13100 | 0.0006 | - | | 0.2143 | 13150 | 0.0006 | - | | 0.2151 | 13200 | 0.0028 | - | | 0.2159 | 13250 | 0.0009 | - | | 0.2168 | 13300 | 0.0011 | - | | 0.2176 | 13350 | 0.0017 | - | | 0.2184 | 13400 | 0.0009 | - | | 0.2192 | 13450 | 0.0029 | - | | 0.2200 | 13500 | 0.0006 | - | | 0.2208 | 13550 | 0.001 | - | | 0.2216 | 13600 | 0.0023 | - | | 0.2225 | 13650 | 0.0006 | - | | 0.2233 | 13700 | 0.0005 | - | | 0.2241 | 13750 | 0.0026 | - | | 0.2249 | 13800 | 0.0012 | - | | 0.2257 | 13850 | 0.0021 | - | | 0.2265 | 13900 | 0.0017 | - | | 0.2273 | 13950 | 0.0008 | - | | 0.2282 | 14000 | 0.0022 | - | | 0.2290 | 14050 | 0.0031 | - | | 0.2298 | 14100 | 0.0006 | - | | 0.2306 | 14150 | 0.0008 | - | | 0.2314 | 14200 | 0.0019 | - | | 0.2322 | 14250 | 0.0007 | - | | 0.2331 | 14300 | 0.0007 | - | | 0.2339 | 14350 | 0.0006 | - | | 0.2347 | 14400 | 0.0004 | - | | 0.2355 | 14450 | 0.0017 | - | | 0.2363 | 14500 | 0.0004 | - | | 0.2371 | 14550 | 0.0016 | - | | 0.2379 | 14600 | 0.0015 | - | | 0.2388 | 14650 | 0.0018 | - | | 0.2396 | 14700 | 0.0004 | - | | 0.2404 | 14750 | 0.0004 | - | | 0.2412 | 14800 | 0.0004 | - | | 0.2420 | 14850 | 0.0028 | - | | 0.2428 | 14900 | 0.0009 | - | | 0.2436 | 14950 | 0.0028 | - | | 0.2445 | 15000 | 0.0004 | - | | 0.2453 | 15050 | 0.0017 | - | | 0.2461 | 15100 | 0.0003 | - | | 0.2469 | 15150 | 0.0003 | - | | 0.2477 | 15200 | 0.0005 | - | | 0.2485 | 15250 | 0.0015 | - | | 0.2493 | 15300 | 0.0006 | - | | 0.2502 | 15350 | 0.0004 | - | | 0.2510 | 15400 | 0.0006 | - | | 0.2518 | 15450 | 0.0005 | - | | 0.2526 | 15500 | 0.0017 | - | | 0.2534 | 15550 | 0.0026 | - | | 0.2542 | 15600 | 0.0003 | - | | 0.2551 | 15650 | 0.0018 | - | | 0.2559 | 15700 | 0.0015 | - | | 0.2567 | 15750 | 0.0013 | - | | 0.2575 | 15800 | 0.0006 | - | | 0.2583 | 15850 | 0.0014 | - | | 0.2591 | 15900 | 0.0004 | - | | 0.2599 | 15950 | 0.0027 | - | | 0.2608 | 16000 | 0.0006 | - | | 0.2616 | 16050 | 0.004 | - | | 0.2624 | 16100 | 0.0005 | - | | 0.2632 | 16150 | 0.0004 | - | | 0.2640 | 16200 | 0.0005 | - | | 0.2648 | 16250 | 0.0028 | - | | 0.2656 | 16300 | 0.0026 | - | | 0.2665 | 16350 | 0.001 | - | | 0.2673 | 16400 | 0.0029 | - | | 0.2681 | 16450 | 0.003 | - | | 0.2689 | 16500 | 0.0015 | - | | 0.2697 | 16550 | 0.0024 | - | | 0.2705 | 16600 | 0.0022 | - | | 0.2713 | 16650 | 0.0017 | - | | 0.2722 | 16700 | 0.0018 | - | | 0.2730 | 16750 | 0.0011 | - | | 0.2738 | 16800 | 0.0006 | - | | 0.2746 | 16850 | 0.0016 | - | | 0.2754 | 16900 | 0.0036 | - | | 0.2762 | 16950 | 0.0022 | - | | 0.2771 | 17000 | 0.001 | - | | 0.2779 | 17050 | 0.0008 | - | | 0.2787 | 17100 | 0.0017 | - | | 0.2795 | 17150 | 0.0006 | - | | 0.2803 | 17200 | 0.0007 | - | | 0.2811 | 17250 | 0.003 | - | | 0.2819 | 17300 | 0.0019 | - | | 0.2828 | 17350 | 0.0006 | - | | 0.2836 | 17400 | 0.0005 | - | | 0.2844 | 17450 | 0.0004 | - | | 0.2852 | 17500 | 0.0006 | - | | 0.2860 | 17550 | 0.0015 | - | | 0.2868 | 17600 | 0.0003 | - | | 0.2876 | 17650 | 0.0012 | - | | 0.2885 | 17700 | 0.0018 | - | | 0.2893 | 17750 | 0.0017 | - | | 0.2901 | 17800 | 0.0021 | - | | 0.2909 | 17850 | 0.0027 | - | | 0.2917 | 17900 | 0.0017 | - | | 0.2925 | 17950 | 0.0005 | - | | 0.2934 | 18000 | 0.0015 | - | | 0.2942 | 18050 | 0.0005 | - | | 0.2950 | 18100 | 0.0015 | - | | 0.2958 | 18150 | 0.0008 | - | | 0.2966 | 18200 | 0.0007 | - | | 0.2974 | 18250 | 0.002 | - | | 0.2982 | 18300 | 0.0007 | - | | 0.2991 | 18350 | 0.0025 | - | | 0.2999 | 18400 | 0.0008 | - | | 0.3007 | 18450 | 0.0008 | - | | 0.3015 | 18500 | 0.0015 | - | | 0.3023 | 18550 | 0.0006 | - | | 0.3031 | 18600 | 0.0005 | - | | 0.3039 | 18650 | 0.0025 | - | | 0.3048 | 18700 | 0.0005 | - | | 0.3056 | 18750 | 0.0005 | - | | 0.3064 | 18800 | 0.0004 | - | | 0.3072 | 18850 | 0.0015 | - | | 0.3080 | 18900 | 0.0003 | - | | 0.3088 | 18950 | 0.0005 | - | | 0.3096 | 19000 | 0.0019 | - | | 0.3105 | 19050 | 0.0016 | - | | 0.3113 | 19100 | 0.0009 | - | | 0.3121 | 19150 | 0.0003 | - | | 0.3129 | 19200 | 0.0019 | - | | 0.3137 | 19250 | 0.0003 | - | | 0.3145 | 19300 | 0.0004 | - | | 0.3154 | 19350 | 0.004 | - | | 0.3162 | 19400 | 0.003 | - | | 0.3170 | 19450 | 0.0015 | - | | 0.3178 | 19500 | 0.0005 | - | | 0.3186 | 19550 | 0.001 | - | | 0.3194 | 19600 | 0.0009 | - | | 0.3202 | 19650 | 0.003 | - | | 0.3211 | 19700 | 0.001 | - | | 0.3219 | 19750 | 0.0023 | - | | 0.3227 | 19800 | 0.0017 | - | | 0.3235 | 19850 | 0.0003 | - | | 0.3243 | 19900 | 0.0006 | - | | 0.3251 | 19950 | 0.0003 | - | | 0.3259 | 20000 | 0.0005 | - | | 0.3268 | 20050 | 0.0015 | - | | 0.3276 | 20100 | 0.0014 | - | | 0.3284 | 20150 | 0.0016 | - | | 0.3292 | 20200 | 0.0009 | - | | 0.3300 | 20250 | 0.0026 | - | | 0.3308 | 20300 | 0.0004 | - | | 0.3316 | 20350 | 0.0004 | - | | 0.3325 | 20400 | 0.0016 | - | | 0.3333 | 20450 | 0.0004 | - | | 0.3341 | 20500 | 0.0008 | - | | 0.3349 | 20550 | 0.0005 | - | | 0.3357 | 20600 | 0.0005 | - | | 0.3365 | 20650 | 0.0003 | - | | 0.3374 | 20700 | 0.0003 | - | | 0.3382 | 20750 | 0.0017 | - | | 0.3390 | 20800 | 0.0017 | - | | 0.3398 | 20850 | 0.0016 | - | | 0.3406 | 20900 | 0.0003 | - | | 0.3414 | 20950 | 0.0021 | - | | 0.3422 | 21000 | 0.0004 | - | | 0.3431 | 21050 | 0.0007 | - | | 0.3439 | 21100 | 0.0003 | - | | 0.3447 | 21150 | 0.0005 | - | | 0.3455 | 21200 | 0.0003 | - | | 0.3463 | 21250 | 0.0003 | - | | 0.3471 | 21300 | 0.002 | - | | 0.3479 | 21350 | 0.0015 | - | | 0.3488 | 21400 | 0.002 | - | | 0.3496 | 21450 | 0.0005 | - | | 0.3504 | 21500 | 0.0002 | - | | 0.3512 | 21550 | 0.0013 | - | | 0.3520 | 21600 | 0.0003 | - | | 0.3528 | 21650 | 0.0003 | - | | 0.3537 | 21700 | 0.0009 | - | | 0.3545 | 21750 | 0.0003 | - | | 0.3553 | 21800 | 0.0007 | - | | 0.3561 | 21850 | 0.0004 | - | | 0.3569 | 21900 | 0.001 | - | | 0.3577 | 21950 | 0.0003 | - | | 0.3585 | 22000 | 0.0004 | - | | 0.3594 | 22050 | 0.0016 | - | | 0.3602 | 22100 | 0.0008 | - | | 0.3610 | 22150 | 0.0005 | - | | 0.3618 | 22200 | 0.001 | - | | 0.3626 | 22250 | 0.0006 | - | | 0.3634 | 22300 | 0.0003 | - | | 0.3642 | 22350 | 0.0004 | - | | 0.3651 | 22400 | 0.0003 | - | | 0.3659 | 22450 | 0.0003 | - | | 0.3667 | 22500 | 0.0003 | - | | 0.3675 | 22550 | 0.0015 | - | | 0.3683 | 22600 | 0.0021 | - | | 0.3691 | 22650 | 0.0005 | - | | 0.3699 | 22700 | 0.0004 | - | | 0.3708 | 22750 | 0.0041 | - | | 0.3716 | 22800 | 0.001 | - | | 0.3724 | 22850 | 0.0019 | - | | 0.3732 | 22900 | 0.0003 | - | | 0.3740 | 22950 | 0.002 | - | | 0.3748 | 23000 | 0.001 | - | | 0.3757 | 23050 | 0.0003 | - | | 0.3765 | 23100 | 0.0013 | - | | 0.3773 | 23150 | 0.0002 | - | | 0.3781 | 23200 | 0.0009 | - | | 0.3789 | 23250 | 0.0003 | - | | 0.3797 | 23300 | 0.0003 | - | | 0.3805 | 23350 | 0.0003 | - | | 0.3814 | 23400 | 0.0018 | - | | 0.3822 | 23450 | 0.0003 | - | | 0.3830 | 23500 | 0.0008 | - | | 0.3838 | 23550 | 0.0007 | - | | 0.3846 | 23600 | 0.002 | - | | 0.3854 | 23650 | 0.0005 | - | | 0.3862 | 23700 | 0.0011 | - | | 0.3871 | 23750 | 0.0025 | - | | 0.3879 | 23800 | 0.0022 | - | | 0.3887 | 23850 | 0.0029 | - | | 0.3895 | 23900 | 0.0006 | - | | 0.3903 | 23950 | 0.0015 | - | | 0.3911 | 24000 | 0.0016 | - | | 0.3919 | 24050 | 0.0027 | - | | 0.3928 | 24100 | 0.0016 | - | | 0.3936 | 24150 | 0.0004 | - | | 0.3944 | 24200 | 0.0019 | - | | 0.3952 | 24250 | 0.0016 | - | | 0.3960 | 24300 | 0.0003 | - | | 0.3968 | 24350 | 0.0005 | - | | 0.3977 | 24400 | 0.0003 | - | | 0.3985 | 24450 | 0.0004 | - | | 0.3993 | 24500 | 0.0004 | - | | 0.4001 | 24550 | 0.0015 | - | | 0.4009 | 24600 | 0.0014 | - | | 0.4017 | 24650 | 0.0003 | - | | 0.4025 | 24700 | 0.0003 | - | | 0.4034 | 24750 | 0.0016 | - | | 0.4042 | 24800 | 0.0003 | - | | 0.4050 | 24850 | 0.0012 | - | | 0.4058 | 24900 | 0.0003 | - | | 0.4066 | 24950 | 0.0004 | - | | 0.4074 | 25000 | 0.001 | - | | 0.4082 | 25050 | 0.0005 | - | | 0.4091 | 25100 | 0.0013 | - | | 0.4099 | 25150 | 0.0003 | - | | 0.4107 | 25200 | 0.0003 | - | | 0.4115 | 25250 | 0.0012 | - | | 0.4123 | 25300 | 0.0002 | - | | 0.4131 | 25350 | 0.0005 | - | | 0.4140 | 25400 | 0.0016 | - | | 0.4148 | 25450 | 0.0004 | - | | 0.4156 | 25500 | 0.0003 | - | | 0.4164 | 25550 | 0.0015 | - | | 0.4172 | 25600 | 0.0002 | - | | 0.4180 | 25650 | 0.0012 | - | | 0.4188 | 25700 | 0.0006 | - | | 0.4197 | 25750 | 0.0004 | - | | 0.4205 | 25800 | 0.0002 | - | | 0.4213 | 25850 | 0.0016 | - | | 0.4221 | 25900 | 0.0014 | - | | 0.4229 | 25950 | 0.0008 | - | | 0.4237 | 26000 | 0.0003 | - | | 0.4245 | 26050 | 0.0005 | - | | 0.4254 | 26100 | 0.0002 | - | | 0.4262 | 26150 | 0.0018 | - | | 0.4270 | 26200 | 0.0002 | - | | 0.4278 | 26250 | 0.0003 | - | | 0.4286 | 26300 | 0.0003 | - | | 0.4294 | 26350 | 0.0003 | - | | 0.4302 | 26400 | 0.0006 | - | | 0.4311 | 26450 | 0.0027 | - | | 0.4319 | 26500 | 0.0004 | - | | 0.4327 | 26550 | 0.0003 | - | | 0.4335 | 26600 | 0.0003 | - | | 0.4343 | 26650 | 0.0003 | - | | 0.4351 | 26700 | 0.0016 | - | | 0.4360 | 26750 | 0.001 | - | | 0.4368 | 26800 | 0.0016 | - | | 0.4376 | 26850 | 0.0003 | - | | 0.4384 | 26900 | 0.0016 | - | | 0.4392 | 26950 | 0.0014 | - | | 0.4400 | 27000 | 0.0002 | - | | 0.4408 | 27050 | 0.0004 | - | | 0.4417 | 27100 | 0.0002 | - | | 0.4425 | 27150 | 0.0016 | - | | 0.4433 | 27200 | 0.0015 | - | | 0.4441 | 27250 | 0.0003 | - | | 0.4449 | 27300 | 0.0003 | - | | 0.4457 | 27350 | 0.0003 | - | | 0.4465 | 27400 | 0.0007 | - | | 0.4474 | 27450 | 0.0004 | - | | 0.4482 | 27500 | 0.0015 | - | | 0.4490 | 27550 | 0.0002 | - | | 0.4498 | 27600 | 0.0019 | - | | 0.4506 | 27650 | 0.0012 | - | | 0.4514 | 27700 | 0.0002 | - | | 0.4522 | 27750 | 0.0004 | - | | 0.4531 | 27800 | 0.0016 | - | | 0.4539 | 27850 | 0.0003 | - | | 0.4547 | 27900 | 0.0028 | - | | 0.4555 | 27950 | 0.0006 | - | | 0.4563 | 28000 | 0.0004 | - | | 0.4571 | 28050 | 0.0014 | - | | 0.4580 | 28100 | 0.0002 | - | | 0.4588 | 28150 | 0.0004 | - | | 0.4596 | 28200 | 0.0004 | - | | 0.4604 | 28250 | 0.0003 | - | | 0.4612 | 28300 | 0.0003 | - | | 0.4620 | 28350 | 0.0023 | - | | 0.4628 | 28400 | 0.0002 | - | | 0.4637 | 28450 | 0.0003 | - | | 0.4645 | 28500 | 0.0004 | - | | 0.4653 | 28550 | 0.0002 | - | | 0.4661 | 28600 | 0.0003 | - | | 0.4669 | 28650 | 0.0014 | - | | 0.4677 | 28700 | 0.0003 | - | | 0.4685 | 28750 | 0.004 | - | | 0.4694 | 28800 | 0.0007 | - | | 0.4702 | 28850 | 0.0002 | - | | 0.4710 | 28900 | 0.0041 | - | | 0.4718 | 28950 | 0.0016 | - | | 0.4726 | 29000 | 0.0003 | - | | 0.4734 | 29050 | 0.0016 | - | | 0.4743 | 29100 | 0.0006 | - | | 0.4751 | 29150 | 0.0015 | - | | 0.4759 | 29200 | 0.0003 | - | | 0.4767 | 29250 | 0.0004 | - | | 0.4775 | 29300 | 0.0017 | - | | 0.4783 | 29350 | 0.0003 | - | | 0.4791 | 29400 | 0.0012 | - | | 0.4800 | 29450 | 0.0013 | - | | 0.4808 | 29500 | 0.0015 | - | | 0.4816 | 29550 | 0.0004 | - | | 0.4824 | 29600 | 0.0006 | - | | 0.4832 | 29650 | 0.0003 | - | | 0.4840 | 29700 | 0.0003 | - | | 0.4848 | 29750 | 0.0013 | - | | 0.4857 | 29800 | 0.0011 | - | | 0.4865 | 29850 | 0.0029 | - | | 0.4873 | 29900 | 0.0004 | - | | 0.4881 | 29950 | 0.0004 | - | | 0.4889 | 30000 | 0.0003 | - | | 0.4897 | 30050 | 0.0002 | - | | 0.4905 | 30100 | 0.0003 | - | | 0.4914 | 30150 | 0.0025 | - | | 0.4922 | 30200 | 0.0002 | - | | 0.4930 | 30250 | 0.0003 | - | | 0.4938 | 30300 | 0.0015 | - | | 0.4946 | 30350 | 0.0002 | - | | 0.4954 | 30400 | 0.0011 | - | | 0.4963 | 30450 | 0.0003 | - | | 0.4971 | 30500 | 0.0002 | - | | 0.4979 | 30550 | 0.0004 | - | | 0.4987 | 30600 | 0.0002 | - | | 0.4995 | 30650 | 0.0003 | - | | 0.5003 | 30700 | 0.0015 | - | | 0.5011 | 30750 | 0.0003 | - | | 0.5020 | 30800 | 0.0005 | - | | 0.5028 | 30850 | 0.0002 | - | | 0.5036 | 30900 | 0.0015 | - | | 0.5044 | 30950 | 0.0003 | - | | 0.5052 | 31000 | 0.0015 | - | | 0.5060 | 31050 | 0.0015 | - | | 0.5068 | 31100 | 0.0003 | - | | 0.5077 | 31150 | 0.0002 | - | | 0.5085 | 31200 | 0.0027 | - | | 0.5093 | 31250 | 0.0013 | - | | 0.5101 | 31300 | 0.0004 | - | | 0.5109 | 31350 | 0.0012 | - | | 0.5117 | 31400 | 0.0002 | - | | 0.5125 | 31450 | 0.0015 | - | | 0.5134 | 31500 | 0.0009 | - | | 0.5142 | 31550 | 0.0008 | - | | 0.5150 | 31600 | 0.0014 | - | | 0.5158 | 31650 | 0.0002 | - | | 0.5166 | 31700 | 0.0004 | - | | 0.5174 | 31750 | 0.0024 | - | | 0.5183 | 31800 | 0.0015 | - | | 0.5191 | 31850 | 0.0015 | - | | 0.5199 | 31900 | 0.0004 | - | | 0.5207 | 31950 | 0.0003 | - | | 0.5215 | 32000 | 0.0014 | - | | 0.5223 | 32050 | 0.0004 | - | | 0.5231 | 32100 | 0.0003 | - | | 0.5240 | 32150 | 0.001 | - | | 0.5248 | 32200 | 0.0002 | - | | 0.5256 | 32250 | 0.0004 | - | | 0.5264 | 32300 | 0.0014 | - | | 0.5272 | 32350 | 0.0014 | - | | 0.5280 | 32400 | 0.0002 | - | | 0.5288 | 32450 | 0.0003 | - | | 0.5297 | 32500 | 0.0003 | - | | 0.5305 | 32550 | 0.0015 | - | | 0.5313 | 32600 | 0.0026 | - | | 0.5321 | 32650 | 0.0002 | - | | 0.5329 | 32700 | 0.0013 | - | | 0.5337 | 32750 | 0.0004 | - | | 0.5346 | 32800 | 0.0013 | - | | 0.5354 | 32850 | 0.0003 | - | | 0.5362 | 32900 | 0.0005 | - | | 0.5370 | 32950 | 0.0014 | - | | 0.5378 | 33000 | 0.0004 | - | | 0.5386 | 33050 | 0.0014 | - | | 0.5394 | 33100 | 0.0014 | - | | 0.5403 | 33150 | 0.0002 | - | | 0.5411 | 33200 | 0.0003 | - | | 0.5419 | 33250 | 0.001 | - | | 0.5427 | 33300 | 0.0007 | - | | 0.5435 | 33350 | 0.0002 | - | | 0.5443 | 33400 | 0.0003 | - | | 0.5451 | 33450 | 0.0006 | - | | 0.5460 | 33500 | 0.0002 | - | | 0.5468 | 33550 | 0.0014 | - | | 0.5476 | 33600 | 0.0002 | - | | 0.5484 | 33650 | 0.0002 | - | | 0.5492 | 33700 | 0.0042 | - | | 0.5500 | 33750 | 0.0026 | - | | 0.5508 | 33800 | 0.0007 | - | | 0.5517 | 33850 | 0.0004 | - | | 0.5525 | 33900 | 0.0003 | - | | 0.5533 | 33950 | 0.0006 | - | | 0.5541 | 34000 | 0.0003 | - | | 0.5549 | 34050 | 0.0002 | - | | 0.5557 | 34100 | 0.0002 | - | | 0.5566 | 34150 | 0.0025 | - | | 0.5574 | 34200 | 0.0003 | - | | 0.5582 | 34250 | 0.0002 | - | | 0.5590 | 34300 | 0.0002 | - | | 0.5598 | 34350 | 0.0003 | - | | 0.5606 | 34400 | 0.0005 | - | | 0.5614 | 34450 | 0.0008 | - | | 0.5623 | 34500 | 0.0013 | - | | 0.5631 | 34550 | 0.0002 | - | | 0.5639 | 34600 | 0.0026 | - | | 0.5647 | 34650 | 0.0018 | - | | 0.5655 | 34700 | 0.0015 | - | | 0.5663 | 34750 | 0.0017 | - | | 0.5671 | 34800 | 0.0002 | - | | 0.5680 | 34850 | 0.0002 | - | | 0.5688 | 34900 | 0.0003 | - | | 0.5696 | 34950 | 0.0012 | - | | 0.5704 | 35000 | 0.0016 | - | | 0.5712 | 35050 | 0.0003 | - | | 0.5720 | 35100 | 0.0003 | - | | 0.5728 | 35150 | 0.0003 | - | | 0.5737 | 35200 | 0.0002 | - | | 0.5745 | 35250 | 0.0002 | - | | 0.5753 | 35300 | 0.0026 | - | | 0.5761 | 35350 | 0.0002 | - | | 0.5769 | 35400 | 0.0001 | - | | 0.5777 | 35450 | 0.0003 | - | | 0.5786 | 35500 | 0.0014 | - | | 0.5794 | 35550 | 0.0005 | - | | 0.5802 | 35600 | 0.0003 | - | | 0.5810 | 35650 | 0.0001 | - | | 0.5818 | 35700 | 0.0002 | - | | 0.5826 | 35750 | 0.0002 | - | | 0.5834 | 35800 | 0.0002 | - | | 0.5843 | 35850 | 0.0002 | - | | 0.5851 | 35900 | 0.0001 | - | | 0.5859 | 35950 | 0.0001 | - | | 0.5867 | 36000 | 0.0013 | - | | 0.5875 | 36050 | 0.0002 | - | | 0.5883 | 36100 | 0.0003 | - | | 0.5891 | 36150 | 0.0026 | - | | 0.5900 | 36200 | 0.0015 | - | | 0.5908 | 36250 | 0.0012 | - | | 0.5916 | 36300 | 0.0002 | - | | 0.5924 | 36350 | 0.0001 | - | | 0.5932 | 36400 | 0.0015 | - | | 0.5940 | 36450 | 0.0015 | - | | 0.5949 | 36500 | 0.0007 | - | | 0.5957 | 36550 | 0.0002 | - | | 0.5965 | 36600 | 0.0012 | - | | 0.5973 | 36650 | 0.0033 | - | | 0.5981 | 36700 | 0.0002 | - | | 0.5989 | 36750 | 0.0006 | - | | 0.5997 | 36800 | 0.0025 | - | | 0.6006 | 36850 | 0.0017 | - | | 0.6014 | 36900 | 0.0002 | - | | 0.6022 | 36950 | 0.0001 | - | | 0.6030 | 37000 | 0.0014 | - | | 0.6038 | 37050 | 0.0003 | - | | 0.6046 | 37100 | 0.0002 | - | | 0.6054 | 37150 | 0.0013 | - | | 0.6063 | 37200 | 0.0014 | - | | 0.6071 | 37250 | 0.0001 | - | | 0.6079 | 37300 | 0.0002 | - | | 0.6087 | 37350 | 0.0015 | - | | 0.6095 | 37400 | 0.0015 | - | | 0.6103 | 37450 | 0.0012 | - | | 0.6111 | 37500 | 0.0014 | - | | 0.6120 | 37550 | 0.0003 | - | | 0.6128 | 37600 | 0.0004 | - | | 0.6136 | 37650 | 0.0009 | - | | 0.6144 | 37700 | 0.0005 | - | | 0.6152 | 37750 | 0.0018 | - | | 0.6160 | 37800 | 0.0001 | - | | 0.6169 | 37850 | 0.0018 | - | | 0.6177 | 37900 | 0.0006 | - | | 0.6185 | 37950 | 0.0002 | - | | 0.6193 | 38000 | 0.0006 | - | | 0.6201 | 38050 | 0.0014 | - | | 0.6209 | 38100 | 0.0003 | - | | 0.6217 | 38150 | 0.0013 | - | | 0.6226 | 38200 | 0.0001 | - | | 0.6234 | 38250 | 0.0003 | - | | 0.6242 | 38300 | 0.0002 | - | | 0.625 | 38350 | 0.0002 | - | | 0.6258 | 38400 | 0.0012 | - | | 0.6266 | 38450 | 0.0002 | - | | 0.6274 | 38500 | 0.0002 | - | | 0.6283 | 38550 | 0.0003 | - | | 0.6291 | 38600 | 0.0002 | - | | 0.6299 | 38650 | 0.0003 | - | | 0.6307 | 38700 | 0.0002 | - | | 0.6315 | 38750 | 0.0002 | - | | 0.6323 | 38800 | 0.0001 | - | | 0.6331 | 38850 | 0.0003 | - | | 0.6340 | 38900 | 0.0003 | - | | 0.6348 | 38950 | 0.0002 | - | | 0.6356 | 39000 | 0.0002 | - | | 0.6364 | 39050 | 0.0004 | - | | 0.6372 | 39100 | 0.0015 | - | | 0.6380 | 39150 | 0.0002 | - | | 0.6389 | 39200 | 0.0015 | - | | 0.6397 | 39250 | 0.0015 | - | | 0.6405 | 39300 | 0.0015 | - | | 0.6413 | 39350 | 0.002 | - | | 0.6421 | 39400 | 0.0019 | - | | 0.6429 | 39450 | 0.0004 | - | | 0.6437 | 39500 | 0.0012 | - | | 0.6446 | 39550 | 0.0013 | - | | 0.6454 | 39600 | 0.0002 | - | | 0.6462 | 39650 | 0.0027 | - | | 0.6470 | 39700 | 0.003 | - | | 0.6478 | 39750 | 0.0006 | - | | 0.6486 | 39800 | 0.0002 | - | | 0.6494 | 39850 | 0.0002 | - | | 0.6503 | 39900 | 0.0001 | - | | 0.6511 | 39950 | 0.0009 | - | | 0.6519 | 40000 | 0.0007 | - | | 0.6527 | 40050 | 0.0014 | - | | 0.6535 | 40100 | 0.0002 | - | | 0.6543 | 40150 | 0.0003 | - | | 0.6551 | 40200 | 0.0002 | - | | 0.6560 | 40250 | 0.0011 | - | | 0.6568 | 40300 | 0.0005 | - | | 0.6576 | 40350 | 0.0009 | - | | 0.6584 | 40400 | 0.0002 | - | | 0.6592 | 40450 | 0.0013 | - | | 0.6600 | 40500 | 0.0003 | - | | 0.6609 | 40550 | 0.0002 | - | | 0.6617 | 40600 | 0.0006 | - | | 0.6625 | 40650 | 0.0001 | - | | 0.6633 | 40700 | 0.0003 | - | | 0.6641 | 40750 | 0.0014 | - | | 0.6649 | 40800 | 0.0001 | - | | 0.6657 | 40850 | 0.0013 | - | | 0.6666 | 40900 | 0.0002 | - | | 0.6674 | 40950 | 0.0013 | - | | 0.6682 | 41000 | 0.0013 | - | | 0.6690 | 41050 | 0.0017 | - | | 0.6698 | 41100 | 0.0002 | - | | 0.6706 | 41150 | 0.0001 | - | | 0.6714 | 41200 | 0.0002 | - | | 0.6723 | 41250 | 0.0002 | - | | 0.6731 | 41300 | 0.0017 | - | | 0.6739 | 41350 | 0.0014 | - | | 0.6747 | 41400 | 0.0014 | - | | 0.6755 | 41450 | 0.0014 | - | | 0.6763 | 41500 | 0.001 | - | | 0.6772 | 41550 | 0.0002 | - | | 0.6780 | 41600 | 0.0038 | - | | 0.6788 | 41650 | 0.0005 | - | | 0.6796 | 41700 | 0.0014 | - | | 0.6804 | 41750 | 0.0014 | - | | 0.6812 | 41800 | 0.0008 | - | | 0.6820 | 41850 | 0.003 | - | | 0.6829 | 41900 | 0.0005 | - | | 0.6837 | 41950 | 0.0009 | - | | 0.6845 | 42000 | 0.0014 | - | | 0.6853 | 42050 | 0.0001 | - | | 0.6861 | 42100 | 0.0013 | - | | 0.6869 | 42150 | 0.0002 | - | | 0.6877 | 42200 | 0.0013 | - | | 0.6886 | 42250 | 0.0005 | - | | 0.6894 | 42300 | 0.0003 | - | | 0.6902 | 42350 | 0.0003 | - | | 0.6910 | 42400 | 0.0004 | - | | 0.6918 | 42450 | 0.0009 | - | | 0.6926 | 42500 | 0.0002 | - | | 0.6934 | 42550 | 0.0002 | - | | 0.6943 | 42600 | 0.0002 | - | | 0.6951 | 42650 | 0.0001 | - | | 0.6959 | 42700 | 0.0015 | - | | 0.6967 | 42750 | 0.0001 | - | | 0.6975 | 42800 | 0.0013 | - | | 0.6983 | 42850 | 0.0001 | - | | 0.6992 | 42900 | 0.0013 | - | | 0.7000 | 42950 | 0.0002 | - | | 0.7008 | 43000 | 0.0013 | - | | 0.7016 | 43050 | 0.0001 | - | | 0.7024 | 43100 | 0.0002 | - | | 0.7032 | 43150 | 0.0002 | - | | 0.7040 | 43200 | 0.0012 | - | | 0.7049 | 43250 | 0.0007 | - | | 0.7057 | 43300 | 0.0013 | - | | 0.7065 | 43350 | 0.0003 | - | | 0.7073 | 43400 | 0.0002 | - | | 0.7081 | 43450 | 0.0011 | - | | 0.7089 | 43500 | 0.0003 | - | | 0.7097 | 43550 | 0.0002 | - | | 0.7106 | 43600 | 0.0002 | - | | 0.7114 | 43650 | 0.0007 | - | | 0.7122 | 43700 | 0.0014 | - | | 0.7130 | 43750 | 0.0022 | - | | 0.7138 | 43800 | 0.0002 | - | | 0.7146 | 43850 | 0.0005 | - | | 0.7154 | 43900 | 0.0002 | - | | 0.7163 | 43950 | 0.0001 | - | | 0.7171 | 44000 | 0.0002 | - | | 0.7179 | 44050 | 0.0002 | - | | 0.7187 | 44100 | 0.0001 | - | | 0.7195 | 44150 | 0.0003 | - | | 0.7203 | 44200 | 0.0001 | - | | 0.7212 | 44250 | 0.0003 | - | | 0.7220 | 44300 | 0.0015 | - | | 0.7228 | 44350 | 0.0004 | - | | 0.7236 | 44400 | 0.0013 | - | | 0.7244 | 44450 | 0.0003 | - | | 0.7252 | 44500 | 0.0001 | - | | 0.7260 | 44550 | 0.0016 | - | | 0.7269 | 44600 | 0.0014 | - | | 0.7277 | 44650 | 0.0001 | - | | 0.7285 | 44700 | 0.0001 | - | | 0.7293 | 44750 | 0.0001 | - | | 0.7301 | 44800 | 0.0002 | - | | 0.7309 | 44850 | 0.0001 | - | | 0.7317 | 44900 | 0.0002 | - | | 0.7326 | 44950 | 0.0001 | - | | 0.7334 | 45000 | 0.0002 | - | | 0.7342 | 45050 | 0.0002 | - | | 0.7350 | 45100 | 0.0013 | - | | 0.7358 | 45150 | 0.0002 | - | | 0.7366 | 45200 | 0.0006 | - | | 0.7375 | 45250 | 0.0002 | - | | 0.7383 | 45300 | 0.002 | - | | 0.7391 | 45350 | 0.0008 | - | | 0.7399 | 45400 | 0.0001 | - | | 0.7407 | 45450 | 0.0001 | - | | 0.7415 | 45500 | 0.0001 | - | | 0.7423 | 45550 | 0.0003 | - | | 0.7432 | 45600 | 0.0001 | - | | 0.7440 | 45650 | 0.0001 | - | | 0.7448 | 45700 | 0.0026 | - | | 0.7456 | 45750 | 0.0026 | - | | 0.7464 | 45800 | 0.0001 | - | | 0.7472 | 45850 | 0.0002 | - | | 0.7480 | 45900 | 0.0013 | - | | 0.7489 | 45950 | 0.0002 | - | | 0.7497 | 46000 | 0.0025 | - | | 0.7505 | 46050 | 0.0001 | - | | 0.7513 | 46100 | 0.0015 | - | | 0.7521 | 46150 | 0.0001 | - | | 0.7529 | 46200 | 0.0002 | - | | 0.7537 | 46250 | 0.0001 | - | | 0.7546 | 46300 | 0.0001 | - | | 0.7554 | 46350 | 0.0014 | - | | 0.7562 | 46400 | 0.0007 | - | | 0.7570 | 46450 | 0.0001 | - | | 0.7578 | 46500 | 0.0002 | - | | 0.7586 | 46550 | 0.0014 | - | | 0.7595 | 46600 | 0.0013 | - | | 0.7603 | 46650 | 0.0002 | - | | 0.7611 | 46700 | 0.0012 | - | | 0.7619 | 46750 | 0.0001 | - | | 0.7627 | 46800 | 0.0001 | - | | 0.7635 | 46850 | 0.0013 | - | | 0.7643 | 46900 | 0.0002 | - | | 0.7652 | 46950 | 0.0001 | - | | 0.7660 | 47000 | 0.0001 | - | | 0.7668 | 47050 | 0.0014 | - | | 0.7676 | 47100 | 0.0002 | - | | 0.7684 | 47150 | 0.001 | - | | 0.7692 | 47200 | 0.0012 | - | | 0.7700 | 47250 | 0.0003 | - | | 0.7709 | 47300 | 0.0002 | - | | 0.7717 | 47350 | 0.0001 | - | | 0.7725 | 47400 | 0.0001 | - | | 0.7733 | 47450 | 0.0001 | - | | 0.7741 | 47500 | 0.0013 | - | | 0.7749 | 47550 | 0.0001 | - | | 0.7757 | 47600 | 0.0002 | - | | 0.7766 | 47650 | 0.0011 | - | | 0.7774 | 47700 | 0.0013 | - | | 0.7782 | 47750 | 0.0014 | - | | 0.7790 | 47800 | 0.0013 | - | | 0.7798 | 47850 | 0.0008 | - | | 0.7806 | 47900 | 0.0003 | - | | 0.7815 | 47950 | 0.0006 | - | | 0.7823 | 48000 | 0.0015 | - | | 0.7831 | 48050 | 0.0001 | - | | 0.7839 | 48100 | 0.0013 | - | | 0.7847 | 48150 | 0.0002 | - | | 0.7855 | 48200 | 0.0001 | - | | 0.7863 | 48250 | 0.0001 | - | | 0.7872 | 48300 | 0.0013 | - | | 0.7880 | 48350 | 0.0016 | - | | 0.7888 | 48400 | 0.0002 | - | | 0.7896 | 48450 | 0.0002 | - | | 0.7904 | 48500 | 0.001 | - | | 0.7912 | 48550 | 0.0005 | - | | 0.7920 | 48600 | 0.0013 | - | | 0.7929 | 48650 | 0.0013 | - | | 0.7937 | 48700 | 0.0012 | - | | 0.7945 | 48750 | 0.0013 | - | | 0.7953 | 48800 | 0.0002 | - | | 0.7961 | 48850 | 0.0002 | - | | 0.7969 | 48900 | 0.0003 | - | | 0.7978 | 48950 | 0.0007 | - | | 0.7986 | 49000 | 0.0023 | - | | 0.7994 | 49050 | 0.0001 | - | | 0.8002 | 49100 | 0.0001 | - | | 0.8010 | 49150 | 0.0001 | - | | 0.8018 | 49200 | 0.0013 | - | | 0.8026 | 49250 | 0.0001 | - | | 0.8035 | 49300 | 0.0013 | - | | 0.8043 | 49350 | 0.0013 | - | | 0.8051 | 49400 | 0.0003 | - | | 0.8059 | 49450 | 0.0001 | - | | 0.8067 | 49500 | 0.0001 | - | | 0.8075 | 49550 | 0.0001 | - | | 0.8083 | 49600 | 0.0001 | - | | 0.8092 | 49650 | 0.0002 | - | | 0.8100 | 49700 | 0.0002 | - | | 0.8108 | 49750 | 0.001 | - | | 0.8116 | 49800 | 0.0013 | - | | 0.8124 | 49850 | 0.0001 | - | | 0.8132 | 49900 | 0.0006 | - | | 0.8140 | 49950 | 0.0014 | - | | 0.8149 | 50000 | 0.0011 | - | | 0.8157 | 50050 | 0.0003 | - | | 0.8165 | 50100 | 0.0002 | - | | 0.8173 | 50150 | 0.0001 | - | | 0.8181 | 50200 | 0.0001 | - | | 0.8189 | 50250 | 0.0001 | - | | 0.8198 | 50300 | 0.0002 | - | | 0.8206 | 50350 | 0.0001 | - | | 0.8214 | 50400 | 0.0001 | - | | 0.8222 | 50450 | 0.0005 | - | | 0.8230 | 50500 | 0.0001 | - | | 0.8238 | 50550 | 0.0001 | - | | 0.8246 | 50600 | 0.0002 | - | | 0.8255 | 50650 | 0.0001 | - | | 0.8263 | 50700 | 0.0014 | - | | 0.8271 | 50750 | 0.0027 | - | | 0.8279 | 50800 | 0.0001 | - | | 0.8287 | 50850 | 0.0001 | - | | 0.8295 | 50900 | 0.0001 | - | | 0.8303 | 50950 | 0.0003 | - | | 0.8312 | 51000 | 0.0003 | - | | 0.8320 | 51050 | 0.0013 | - | | 0.8328 | 51100 | 0.0013 | - | | 0.8336 | 51150 | 0.0001 | - | | 0.8344 | 51200 | 0.0001 | - | | 0.8352 | 51250 | 0.0001 | - | | 0.8360 | 51300 | 0.0014 | - | | 0.8369 | 51350 | 0.0015 | - | | 0.8377 | 51400 | 0.0013 | - | | 0.8385 | 51450 | 0.0013 | - | | 0.8393 | 51500 | 0.0001 | - | | 0.8401 | 51550 | 0.0002 | - | | 0.8409 | 51600 | 0.0013 | - | | 0.8418 | 51650 | 0.0003 | - | | 0.8426 | 51700 | 0.0013 | - | | 0.8434 | 51750 | 0.0002 | - | | 0.8442 | 51800 | 0.0001 | - | | 0.8450 | 51850 | 0.0003 | - | | 0.8458 | 51900 | 0.0001 | - | | 0.8466 | 51950 | 0.0001 | - | | 0.8475 | 52000 | 0.0001 | - | | 0.8483 | 52050 | 0.0001 | - | | 0.8491 | 52100 | 0.0001 | - | | 0.8499 | 52150 | 0.0001 | - | | 0.8507 | 52200 | 0.001 | - | | 0.8515 | 52250 | 0.0004 | - | | 0.8523 | 52300 | 0.0001 | - | | 0.8532 | 52350 | 0.0001 | - | | 0.8540 | 52400 | 0.0001 | - | | 0.8548 | 52450 | 0.0013 | - | | 0.8556 | 52500 | 0.0007 | - | | 0.8564 | 52550 | 0.0001 | - | | 0.8572 | 52600 | 0.0013 | - | | 0.8581 | 52650 | 0.0004 | - | | 0.8589 | 52700 | 0.0001 | - | | 0.8597 | 52750 | 0.0001 | - | | 0.8605 | 52800 | 0.0001 | - | | 0.8613 | 52850 | 0.0002 | - | | 0.8621 | 52900 | 0.0001 | - | | 0.8629 | 52950 | 0.0012 | - | | 0.8638 | 53000 | 0.0009 | - | | 0.8646 | 53050 | 0.0013 | - | | 0.8654 | 53100 | 0.0013 | - | | 0.8662 | 53150 | 0.0001 | - | | 0.8670 | 53200 | 0.0002 | - | | 0.8678 | 53250 | 0.0001 | - | | 0.8686 | 53300 | 0.0002 | - | | 0.8695 | 53350 | 0.0001 | - | | 0.8703 | 53400 | 0.0013 | - | | 0.8711 | 53450 | 0.0001 | - | | 0.8719 | 53500 | 0.0001 | - | | 0.8727 | 53550 | 0.0001 | - | | 0.8735 | 53600 | 0.0008 | - | | 0.8743 | 53650 | 0.0023 | - | | 0.8752 | 53700 | 0.0002 | - | | 0.8760 | 53750 | 0.0016 | - | | 0.8768 | 53800 | 0.0001 | - | | 0.8776 | 53850 | 0.0001 | - | | 0.8784 | 53900 | 0.0001 | - | | 0.8792 | 53950 | 0.0016 | - | | 0.8801 | 54000 | 0.0001 | - | | 0.8809 | 54050 | 0.0012 | - | | 0.8817 | 54100 | 0.0001 | - | | 0.8825 | 54150 | 0.0001 | - | | 0.8833 | 54200 | 0.0001 | - | | 0.8841 | 54250 | 0.0001 | - | | 0.8849 | 54300 | 0.0014 | - | | 0.8858 | 54350 | 0.0001 | - | | 0.8866 | 54400 | 0.0001 | - | | 0.8874 | 54450 | 0.0001 | - | | 0.8882 | 54500 | 0.0013 | - | | 0.8890 | 54550 | 0.0004 | - | | 0.8898 | 54600 | 0.0013 | - | | 0.8906 | 54650 | 0.0002 | - | | 0.8915 | 54700 | 0.0008 | - | | 0.8923 | 54750 | 0.0001 | - | | 0.8931 | 54800 | 0.0022 | - | | 0.8939 | 54850 | 0.0002 | - | | 0.8947 | 54900 | 0.0001 | - | | 0.8955 | 54950 | 0.0001 | - | | 0.8963 | 55000 | 0.0001 | - | | 0.8972 | 55050 | 0.0001 | - | | 0.8980 | 55100 | 0.0001 | - | | 0.8988 | 55150 | 0.0012 | - | | 0.8996 | 55200 | 0.0001 | - | | 0.9004 | 55250 | 0.0002 | - | | 0.9012 | 55300 | 0.0001 | - | | 0.9021 | 55350 | 0.0001 | - | | 0.9029 | 55400 | 0.0001 | - | | 0.9037 | 55450 | 0.0001 | - | | 0.9045 | 55500 | 0.0013 | - | | 0.9053 | 55550 | 0.0002 | - | | 0.9061 | 55600 | 0.0001 | - | | 0.9069 | 55650 | 0.0001 | - | | 0.9078 | 55700 | 0.0001 | - | | 0.9086 | 55750 | 0.0001 | - | | 0.9094 | 55800 | 0.0001 | - | | 0.9102 | 55850 | 0.0001 | - | | 0.9110 | 55900 | 0.0001 | - | | 0.9118 | 55950 | 0.0001 | - | | 0.9126 | 56000 | 0.0003 | - | | 0.9135 | 56050 | 0.0001 | - | | 0.9143 | 56100 | 0.0002 | - | | 0.9151 | 56150 | 0.0001 | - | | 0.9159 | 56200 | 0.0013 | - | | 0.9167 | 56250 | 0.0001 | - | | 0.9175 | 56300 | 0.0013 | - | | 0.9184 | 56350 | 0.0001 | - | | 0.9192 | 56400 | 0.0001 | - | | 0.9200 | 56450 | 0.0001 | - | | 0.9208 | 56500 | 0.001 | - | | 0.9216 | 56550 | 0.0018 | - | | 0.9224 | 56600 | 0.0001 | - | | 0.9232 | 56650 | 0.0001 | - | | 0.9241 | 56700 | 0.0001 | - | | 0.9249 | 56750 | 0.0001 | - | | 0.9257 | 56800 | 0.0013 | - | | 0.9265 | 56850 | 0.0001 | - | | 0.9273 | 56900 | 0.0013 | - | | 0.9281 | 56950 | 0.0006 | - | | 0.9289 | 57000 | 0.0001 | - | | 0.9298 | 57050 | 0.0001 | - | | 0.9306 | 57100 | 0.0001 | - | | 0.9314 | 57150 | 0.0003 | - | | 0.9322 | 57200 | 0.0017 | - | | 0.9330 | 57250 | 0.0001 | - | | 0.9338 | 57300 | 0.0013 | - | | 0.9346 | 57350 | 0.0025 | - | | 0.9355 | 57400 | 0.0002 | - | | 0.9363 | 57450 | 0.0014 | - | | 0.9371 | 57500 | 0.0004 | - | | 0.9379 | 57550 | 0.0002 | - | | 0.9387 | 57600 | 0.0014 | - | | 0.9395 | 57650 | 0.0001 | - | | 0.9404 | 57700 | 0.0002 | - | | 0.9412 | 57750 | 0.0016 | - | | 0.9420 | 57800 | 0.0001 | - | | 0.9428 | 57850 | 0.0001 | - | | 0.9436 | 57900 | 0.0013 | - | | 0.9444 | 57950 | 0.0002 | - | | 0.9452 | 58000 | 0.0003 | - | | 0.9461 | 58050 | 0.0013 | - | | 0.9469 | 58100 | 0.0001 | - | | 0.9477 | 58150 | 0.0002 | - | | 0.9485 | 58200 | 0.0001 | - | | 0.9493 | 58250 | 0.0013 | - | | 0.9501 | 58300 | 0.0015 | - | | 0.9509 | 58350 | 0.0006 | - | | 0.9518 | 58400 | 0.0001 | - | | 0.9526 | 58450 | 0.0001 | - | | 0.9534 | 58500 | 0.0001 | - | | 0.9542 | 58550 | 0.0001 | - | | 0.9550 | 58600 | 0.0004 | - | | 0.9558 | 58650 | 0.0001 | - | | 0.9566 | 58700 | 0.0002 | - | | 0.9575 | 58750 | 0.0012 | - | | 0.9583 | 58800 | 0.0001 | - | | 0.9591 | 58850 | 0.0001 | - | | 0.9599 | 58900 | 0.0001 | - | | 0.9607 | 58950 | 0.0012 | - | | 0.9615 | 59000 | 0.0002 | - | | 0.9624 | 59050 | 0.0001 | - | | 0.9632 | 59100 | 0.0013 | - | | 0.9640 | 59150 | 0.0002 | - | | 0.9648 | 59200 | 0.0003 | - | | 0.9656 | 59250 | 0.0014 | - | | 0.9664 | 59300 | 0.0001 | - | | 0.9672 | 59350 | 0.0004 | - | | 0.9681 | 59400 | 0.0006 | - | | 0.9689 | 59450 | 0.0002 | - | | 0.9697 | 59500 | 0.0001 | - | | 0.9705 | 59550 | 0.0001 | - | | 0.9713 | 59600 | 0.0001 | - | | 0.9721 | 59650 | 0.0013 | - | | 0.9729 | 59700 | 0.001 | - | | 0.9738 | 59750 | 0.0001 | - | | 0.9746 | 59800 | 0.0013 | - | | 0.9754 | 59850 | 0.0001 | - | | 0.9762 | 59900 | 0.0025 | - | | 0.9770 | 59950 | 0.0001 | - | | 0.9778 | 60000 | 0.0035 | - | | 0.9787 | 60050 | 0.0003 | - | | 0.9795 | 60100 | 0.0001 | - | | 0.9803 | 60150 | 0.0013 | - | | 0.9811 | 60200 | 0.0012 | - | | 0.9819 | 60250 | 0.0001 | - | | 0.9827 | 60300 | 0.0001 | - | | 0.9835 | 60350 | 0.0001 | - | | 0.9844 | 60400 | 0.0008 | - | | 0.9852 | 60450 | 0.0001 | - | | 0.9860 | 60500 | 0.0001 | - | | 0.9868 | 60550 | 0.0002 | - | | 0.9876 | 60600 | 0.0001 | - | | 0.9884 | 60650 | 0.0001 | - | | 0.9892 | 60700 | 0.0001 | - | | 0.9901 | 60750 | 0.0001 | - | | 0.9909 | 60800 | 0.0014 | - | | 0.9917 | 60850 | 0.0001 | - | | 0.9925 | 60900 | 0.0016 | - | | 0.9933 | 60950 | 0.0013 | - | | 0.9941 | 61000 | 0.0024 | - | | 0.9949 | 61050 | 0.0001 | - | | 0.9958 | 61100 | 0.0014 | - | | 0.9966 | 61150 | 0.0014 | - | | 0.9974 | 61200 | 0.0013 | - | | 0.9982 | 61250 | 0.0013 | - | | 0.9990 | 61300 | 0.0025 | - | | 0.9998 | 61350 | 0.0002 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.1.1 - Sentence Transformers: 3.4.1 - Transformers: 4.49.0 - PyTorch: 2.2.2 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 237 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-----------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Caregiver Jobs | <ul><li>'caregiver workers australia - overnight care for elderly needed immediately,home care work australia - caregiver jobs near me pt,now hiring night and day private caregivers usa - private home care for seniors in setúbal (see caregiver jobs in almada),car'</li><li>'private home care for seniors in my area,caregiver needed immediately,childcare nearby,overnight care for elderly near me,disability support worker,caregiver services nearby'</li><li>'live in caregiver jobs,home caregivers,caregiver services near me nsw,night caregiver hiring springvale,daycare caregivers wanted near springvale,caregiver'</li></ul> | | Janitorial Jobs | <ul><li>'reinigungsfirmen in der nähe,hausmeisterreinigung krefeld,hausmeister-reinigungsdienste,medizinische reinigung krefeld,gebäudereinigungsdienste krefeld,reinigungsarbeiten in der nähe krefeld'</li></ul> | | Remote Jobs | <ul><li>'part time job online,part time job positions,online working from home,remote work applications,work from home no experience no fee,job part time online'</li><li>'part time job online,jobs home working,no experience work from home jobs,job part time online,remote jobs philippines,work from home no experience no fee'</li><li>'ganar dinero trabajando desde casa,ganar dinero trabajando desde casa santo domingo,ganar dinero trabajando desde casa 2024'</li></ul> | | Personal Loans | <ul><li>'solicitar préstamo personal,quiero un prestamo para pagar mis deudas apatzingán - préstamo personal michoacán de ocampo,quiero un prestamo para pagar mis deudas apatzingán michoacán de ocampo,préstamos personales con mal crédito - préstamo person'</li><li>'no down payment first time home buyer,online home equity loan application in powder springs,homebuyer programs in powder springs,zero down payment homebuyer programs powder springs,homebuyer loan programs in powder springs,powder springshomebuye'</li><li>'préstamo personal en línea,prestamos en linea,prestamo personal,prestamos rapidos,prestamos al instante,préstamo personal'</li></ul> | | Stress Relief | <ul><li>'stress relievers in cheboygan,treatment for stress in cheboygan,how to manage major depressive disorder in cheboygan,abogados para accidentes de carros in cheboygan,abogados para accidentes in cheboygan,stress toys in cheboygan,massage in cheb'</li></ul> | | Fitness Coach | <ul><li>"cours d'entraînement près de moi vauvert,cours d'entraînement près de moi,cours d'entraînement près de moi vauvert fr,cours d'entraînement près de moi - cours d'entraînement près de moi france,personal trainer in vauvert,cours d'entraînement pr"</li><li>'at home fitness training,fitness app,lose weight fast,women fitness,free workout plan,fitness training'</li><li>'workout classes near me,workout classes near me leominster - workout classes near me,training classes near me,training classes near me leominster - training classes near me,workout classes near me united states,free gym workout plan'</li></ul> | | Debt Consolidation | <ul><li>'debt help,grants for debt relief,debt relief loans,debt consolidation programs,unsecured personal loans,can you get cash from a heloc'</li><li>'debt consolidation loans bad credit,debt consolidation programs,the best debt relief companies,personal loan,need a loan to pay all my debts,the best debt relief companies'</li><li>'free money to start a small business in united states,small business loan approved regardless of credit,emergency loan personal,same day business loan no credit check,small business immediate funding no credit check,grants loans for small business'</li></ul> | | Apartment Rentals | <ul><li>'apartmens in now 400$/month now! - 2 bedroom apartments in west orange new jersey,apartmens in new jersey now 400$/month now! - 2 bedroom apartments in west orange,apartmens in west orange now 400$/month now! - 2 bedroom apartments in new jersey,1 bedroo'</li><li>'2 bedroom apartments in ludhiana,apartments for seniors near me - 1 bedroom apartments near me punjab ludhiana,1-3 bedroom apartments ludhiana - 2 bedroom apartments in ludhiana,1 bedroom apartments near me 2024,1 bedroom apartments near me punjab ludhian'</li><li>'low-cost 2 bedroom apartments amritsar punjab,2 bedroom apartment rental amritsar,1 bed rooms for rent,pet-friendly apartments in amritsar,apartmens in amritsar now 400$/month now! - 1 bedroom apartments for rent near me 2024,apartmens in amritsar now 400'</li></ul> | | Apartments | <ul><li>'rent apartment near me,houses to rent nearby,two bedroom apartment in bonita springs,cheap apartments nearby,houses for rent bonita springs,{key6}'</li><li>'senior apts nearby,current apartment offers near me (must see),income based luxury apartments near me,house rentals,new senior apt for rent near me,affordable modern apartments near me'</li><li>'two bedroom apartment cherry hill,55 and older communities in cherry hill,2 bedroom for rent utilities included,one bed apt for rent,apartments to rent,{key6}'</li></ul> | | Auto Accessories | <ul><li>'best car accessories,car accessories store,car accessories store northampton'</li></ul> | | Outdoor Furniture | <ul><li>'unsold outdoor furniture deals in lafayette,discount outdoor patio furniture,lafayette outdoor furniture clearance,outdoor patio furniture sale,cheap outdoor patio furniture,tienda de muebles baratos'</li><li>'offerta salotti da giardino economici a castiglione del lago,set tavolo e sedie da giardino,offerta salotti da giardino economici a milazzo,mobili da esterno,arredo giardino scontatissimo,mobili da giardino outlet'</li><li>'muebles de terraza baratos,liquidación de muebles de exterior,ofertas de muebles de exterior sin vender en barcelona,muebles para jardin y terraza,muebles de exterior baratos,sofás de jardín'</li></ul> | | Circuit Board Assembler | <ul><li>'smt pcb assembly,prototype pcb assembly,circuit board manufacturing,wafer fabrication companies,pcb assembly services,semiconductor fabrication services'</li><li>'quick turn pcb manufacturer,prototype pcb assembly,prototype pcb assembly mooresville,heavy copper pcb manufacturers,prototype circuit board assembly mooresville,quick turn pcb fabrication'</li><li>'pcb fab and assembly jackson,circuit board manufacturing jackson,circuit board assembly companies near me jackson,circuit board manufacturing company jackson,circuit board manufacturing companies jackson,circuit board assembly companies jackson'</li></ul> | | Dental Implants | <ul><li>'join $2000 dental implant study near you - how to qualify for an dental implants clinical trial near me,join $2000 dental implant study near you,join $2000 dental implant study near you magnolia,best paid dental implant clinical trials near me 2024,$2000'</li><li>'dental implants cost near me,full mouth dental implants cost,dental implant seniors,affordable dental implants near you,full mouth dental implants san antonio,dental implants san antonio'</li><li>'implantes dentales puente alto maiquetía,implantes dentales puente alto,implantes dentales económicos en maiquetía,implantes dentales venezuela,implantes dentales asequibles cerca de mí,implantes dentales para personas mayores'</li></ul> | | Prostate Cancer Treatment | <ul><li>'first symptoms of prostate cancer and treatment bw,first symptoms of prostate cancer and treatment 2024,symptoms of prostate cancer,what are first hints of prostate cancer and best treatment - prostate cancer early detection,prostate cancer first signs,wh'</li><li>'first symptoms of prostate cancer and treatment ls,first symptoms of prostate cancer and treatment 2024,first symptoms of prostate cancer and treatment(orgovyx nubega),early signs of prostate cancer,prostate cancer symptoms,latest treatment for prostate c'</li><li>'first symptoms of prostate cancer and treatment,first symptoms of prostate cancer and treatment ,checking prostate cancer yourself,prostate cancer signs in males over 50,how to check for prostate cancer at home,prostate cancer treatment age 70,prostate ca'</li></ul> | | Erectile Dysfunction | <ul><li>'erectile dysfunction cure video,instant hard on pills over the counter - paid erectile dysfunction treatment clinical trials,erectile dysfunction cure video delevan,erectile dysfunction cure video new york,erectile dysfunction treatment at home'</li><li>'instant hard on pills over the counter - paid erectile dysfunction treatment clinical trials,erectile dysfunction cure video - erectile dysfunction treatment near me in atlanta georgia,instant hard on pills over the counter - paid erectile dysfunction tre'</li><li>'sertleşmeme sorunu nasıl tedavi edilir,paid erectile dysfunction clinical trials istanbul türkiye,treatment for erectile dysfunction - best treatment for erectile dysfunction,instant hard on pills over the counter - paid erectile dysfunction treatment'</li></ul> | | Astrology | <ul><li>'see my future online,predict my future free,free astrology,my future,get my future,बलेनो 2024'</li><li>'get my future,predict my future husband,my future,free astrology,see my future online,sign a pdf for free'</li><li>'see my future online,download the free kindle app,custom design,my future,get my future,बलेनो 2024'</li></ul> | | Sofas | <ul><li>'sofás baratos 1000,liquidación sofás,vista hermosa de negrete ofertas de sofás 2022 sin vender,mejores sofa cama mexico,san pedro de los naranjos ofertas de sofás 2022 sin vender,sofás en liquidación'</li><li>'sofa chile santiago de chile,sofa metropolitana,sofa santiago de chile - sofa santiago de chile 2024,sofa santiago de chile metropolitana,sofa santiago de chile 2024,sofa santiago de chile chile,sofa santiago de chile cl,sofa cl,sofa - sectional sofas'</li><li>'sofás cama baratos,liquidación sofás,tlalpan ofertas de sofás 2022 sin vender,sofás baratos 1000,mexico,sofás en liquidación'</li></ul> | | Car Auctions | <ul><li>'impound lot cars for sale near can monthly instalment,impound lot cars for sale near monthly instalment,impound lot cars for sale near gambo monthly instalment,impound lot cars for sale near nl monthly instalment,impound lot cars for sale near nl monthly'</li><li>'impound car sales near me buy now pay later,impound car sales near philadelphia buy now pay later,impound car sales near {state } buy now pay later,impound car sales near usa buy now pay late,{key5},{key6}'</li><li>'impound car sales near me buy now pay later,impounded vehicle auction,impound lot cars for sale near st catharines monthly instalment,impound lot cars for sale near me monthly instalment,impound car sales near st catharines buy now pay later,{'</li></ul> | | Fitness Apps | <ul><li>'lose weight and fat in 30 days,how to lose belly fat fast women,apps to get a flat stomach,lose weight fast,how to lose belly fat in 3 days,how to lose belly fat in 10 days'</li><li>'palestre milano,free fitness training course,gym workout,lose weight fast,at home fitness training,gym workout routine'</li><li>'lose weight fast,online personal trainer,lose belly fat in a week app,aerobic exercise at home for belly fat'</li></ul> | | Home Furniture | <ul><li>'canapé convertible soldé,magasins meubles,debarrassage de meubles,achat meubles pas cher,liquidation canapé,offres de meubles invendus'</li><li>'offerte di mobili invenduti,sconto divani,svuota tutto arredamento,mobili in liquidazione,cucine in svendita,offerte di mobili invenduti in civitanova marche'</li><li>'onverkochte meubelaanbiedingen,banken bankstellen,hoekbank goedkoop,relax stoelen,tuin meubel,uitverkoop meubelen'</li></ul> | | Disability Grants | <ul><li>'winter heating grants for disabled uk england,winter heating grants for disabled uk england barking,check heating grant eligibility 2024,heating grants disabled,grants for beds for disabled adults england,heating grants disabled 2024 england,winter heatin'</li><li>'apply now: disabled heating grants uk,claim your disabled heating grant uk,winter heating grants for disabled uk northern ireland londonderry'</li><li>'grants for beds for disabled adults england,grants for beds for disabled adults england bilston,grants for beds for disabled adults,government grants for beds for disabled,grants for beds for disabled adults england - grants for beds for disabled adults,g'</li></ul> | | Online Courses | <ul><li>'free only data analyst courses online with certification,online computer certificate courses free,free data analyst courses online with certification,education college online,.!.....data analyst government course with 100% job placement,data analyst gover'</li><li>'data analyst government course with 100% job placement,free data analyst courses online with certification,data analyst certification course,data analytics courses london,.!.....data analyst government course with 100% job placement,analytics courses'</li></ul> | | Babysitting Jobs | <ul><li>'babysitter jobs near me mississauga - babysitting jobs in mississauga,babysitting jobs in mississauga,babysitter jobs near me - babysitting jobs in mississauga canada 2024,babysitter jobs mississauga,babysitter jobs near me mississauga,babysitter jobs nea'</li></ul> | | Wigs | <ul><li>'hair wig shop near me,natural looking human hair wigs,wigs for women over 50,wig sale,gorgeous hair wigs,glueless wigs on sale'</li><li>'wigs for older women,natural looking human hair wigs,glueless wigs on sale,gorgeous hair wigs,wigs for sale,natural hair wigs sale'</li><li>'wigs for women over 50,ali pearl wigs,human hair wigs for older ladies,short wigs for older women,hair wig shop near me,wigs for older women'</li></ul> | | Car Transport Jobs | <ul><li>'recrutement,chauffeur livreur permis b - chauffeur livreur permis b france,chauffeur livreur permis b - chauffeur livreur permis b hauts-de-france'</li><li>'vacatures voor chauffeurs - single car transport driver jobs in the netherlands heiloo,autovervoer - single car transport driver jobs in the netherlands,vacatures voor chauffeurs - single car transport driver jobs in heiloo 2024,vacatures voor chauffeurs'</li><li>'pkw fahrer stellen - 69 € / stunde | transport aufträge fronreute baden-württemberg deutschland,fahre gesucht - autotransport,autotransport,single car transport driver jobs deutschland'</li></ul> | | Flooring Installation | <ul><li>'piso de epóxi uruguaiana,piso de epóxi rio grande do sul uruguaiana,piso de epóxi para garagem perto de mim uruguaiana,piso de epóxi uruguaiana - piso de epóxi,piso de epóxi uruguaiana 2024,piso de epóxi rio grande do sul'</li><li>'piso de epóxi indiara,piso de epóxi indiara 2024,piso colocado preço indiara,piso de epóxi indiara br,piso emborrachado antiderrapante,piso de epóxi,fornecedores de piso vinílico de luxo perto de mim,piso de epóxi para garagem perto de mim'</li><li>'pisos epoxi - pisos epoxi lomas del mirador buenos aires argentina,pisos epoxi - pisos epoxi lomas del mirador,pisos epoxi - pisos epoxi lomas del mirador buenos aires'</li></ul> | | Car Tires | <ul><li>'tire sales in kissimmee,tires packages in kissimmee,tire clearance sale in kissimmee,discount tires in kissimmee,{key5},{key6}'</li><li>'tire sales near me,tire sales rochester,best tire deals,best tire deals in rochester,tire prices,cheapest place to buy tires'</li></ul> | | Perfumes | <ul><li>'liquidación perfumes originales en madrid,perfumes en oferta,perfumes originales en oferta,liquidación perfumes,precio de perfumes para mujer,perfumes de mujer originales y baratos'</li><li>'perfumes de mujer originales y baratos,perfumes mujer,perfumes originales en oferta,liquidación perfumes originales en astorga,liquidación perfumes,precio de perfumes para mujer'</li><li>'parfums pas chers a prix d usine femme,parfums pas chers a prix d usine homme,parfum femme,eau parfum femme,liquidation de parfums,parfum homme'</li></ul> | | Packing Jobs | <ul><li>'trabalhos de empacotador para trabalhar de casa,empresas de embalagem em campinas,vagas disponíveis para empacotadores iniciantes,empregos de empacotador em armazém com benefícios,empregos de empacotador com contratação imediata,empresas de embalagem'</li><li>'empresas de embalagem em bauru,vagas disponíveis para empacotadores iniciantes,trabalhos de empacotador para trabalhar de casa'</li></ul> | | Waste Management Jobs | <ul><li>'klinische abfallsammelunternehmen berlin,müllentsorgungsdienst,entsorgungsdienste für medizinische abfälle berlin,abfallentsorgungsdienste für unternehmen,industrieabfallentsorgungsunternehmen berlin,müllentsorgungsdiens'</li><li>'müllunternehmen in der nähe,müllentsorgungsdienst,müllentsorgungsdienste,industrieabfallentsorgungsunternehmen stralsund,abfallentsorgung für labore stralsund,klinische abfallsammelunternehmen stralsund'</li></ul> | | Online Shopping Deals | <ul><li>'comprar en pricesmart,caribbean cruise,alaska cruise,apartments,online earning websites,retail'</li><li>'товар из китая,sell products on tiktok,compra en linea,compra aqui,vacation packages,vacations packages'</li><li>'online shopping deals,deals on amazon today,amazon best online deals shopping today,shop discount deals amazon,become an amazon seller,online shopping deals'</li></ul> | | Hepatitis C | <ul><li>'hepatitis c,hepatitis c early symptoms,software expediente clínico electrónico - hepatitis c mx,software expediente clínico electrónico - hepatitis c,hepatitis c méxico,software expediente clínico electrónico méxico'</li><li>'first warnings of hepatitis c in women and treatment india,first warnings of hepatitis c in women and treatment - what are the first symptoms of hep c,what does a hepatitis c rash look like india,what are the first symptoms of hep c india,what are the fir'</li><li>'software expediente clínico electrónico - hepatitis c,software expediente clínico electrónico - hepatitis c navaconcejo,picores por todo el cuerpo,gestion historial clinico 2024,hepatitis c early symptoms,gestion historial clinico navaconcejo,software'</li></ul> | | E-Bikes | <ul><li>'find the best migraine treatment,best migraine treatment near me,migraine massage near me,best neurologist for migraines norristown,natural remedies for headaches,best treatment for chronic migraines near me'</li><li>'günstig ebike kaufen,lichtgewicht e-bikes,elektrische fietsen shop,voorjaarspromotie e-bikes,e-bike kopen,mei aanbiedingen elektrische fietsen'</li><li>'rebike fahrrad,e bike langerringen,e bike auf rechnung bestellen,e bike auf raten kaufen ohne klarna,e bike,e bikes langerringen'</li></ul> | | Buy Now Pay Later Smartphone | <ul><li>'buy now pay later iphone 14 pro max,get now pay later phones,iphone 14 pro max buy now pay later,buy phone pay later,get now pay later phones,buy now pay later phones no deposit'</li></ul> | | Warehouse Jobs | <ul><li>'warehouse packing,warehousing,warehousing logistics services,warehouse packing usa,employment,warehouses nearby'</li><li>'lagerpositionen,nachtschicht-lagerjobs in meiner nähe,lagerkommissionierer und packer,aufträge als kommissionierer und packer,hochbezahlte lagerjobs,lageragentur in der nähe von chur'</li><li>'warehouses nearby,food packaging boxes,warehouse picking and packing,warehouses near me hiring,packing company nearby johnson city,packing company nearby'</li></ul> | | Blood Pressure Treatment | <ul><li>'control bp food,how can i get my blood pressure down right now,healthy foods,bp control food items,treat high blood pressure naturally,garlic control high blood pressure'</li></ul> | | Sleep Apnea | <ul><li>'sleep apnea treatment near minot,treatments for sleep apnea besides cpap,sleep apnea treatment center near me,sleep apnea treatment centers near me,treatments for obstructive sleep apnea,non surgical sleep apnea treatments'</li><li>'sleep apnea treatment cpap alternative,best non prescription sleep aid,treatments for sleep apnea,sleep apnea treatment without cpap,non surgical sleep apnea treatments,sleep apnea treatment'</li><li>'transcend 365 cpap,resmed apnealink,philips dreamwear full face mask with headgear,lofta cpap,resmed airmini travel auto cpap machine,cpap airsense 10 elite'</li></ul> | | Online Jobs | <ul><li>'online jobs in usa work from home,usa online jobs from home,online jobs in usa in lusaka,online jobs in usa near me,online tutors in the usa - web developers in the usa,online tutors in the usa'</li><li>'online jobs remote work,find job canada,apply for online jobs in canada,online marketing jobs,{key5},{key6}'</li></ul> | | Plumbing Jobs | <ul><li>'electricians toronto,plumber work training toronto,electricians on,handyman plumber toronto,electrician training on,plumbing work nearby toronto'</li><li>'plumbing jobs,plumber work,plumbers near me,plumbing work near me,plumbers nearby,plumbing jobs near me'</li><li>'handyman near me plumbing,plumber needed rockland,plumbing work near me rockland,plumbing work nearby,{key5},{key6}'</li></ul> | | Tuxedo Rental | <ul><li>'tuxedo rental price'</li></ul> | | Daycare Jobs | <ul><li>'escola infantil proximo a mim - creche berçario itaquaquecetuba são paulo,creches - escola infantil proximo a mim itaquaquecetuba 2025,creches itaquaquecetuba - escola infantil proximo a mim,creches - escola infantil proximo a mim itaquaquecetuba,creche'</li><li>'je recherche emploi mauritanie nouakchott,je recherche emploi mauritanie 2024,je recherche emploi mauritanie ,je recherche emploi mauritanie mr,je recherche emploi mauritanie - garde personnes agées,garde enfant autour de moi nouakchott,garde personnes a'</li><li>'part time daycare near me,part time daycare near me - day care,online childcare courses with certificate california - part time daycare near me,arabic infant daycare riverside - daycare jobs in riverside california,day care riverside california,arabic inf'</li></ul> | | Truck Driving Jobs | <ul><li>'work abroad poland driver,truck company inventory management systems and real-time warehouse tracking in canada,trucking companies in harare,delivery truck - delivery truck harare zimbabwe,delivery truck - delivery truck harare,truck jobs in harare,delive'</li><li>'local trucking jobs near me,local trucking jobs,truck driving jobs for seniors,truck job for seniors,best trucking companies to work for with no experience,instant truck driving jobs'</li><li>'vagas de motorista de caminhão,vagas de motorista de caminhão perto de mim,vagas de motorista categoria b,pequenas empresas de transporte de carga,pequenas empresas de transporte,truck driver job br,truck driver job espírito santo,truck driver job bras'</li></ul> | | Home Remodel | <ul><li>'home remodeling contractor woodbridge,bathroom home remodeling woodbridge,home remodeling near me woodbridge'</li><li>'home remodeling contractor cherry hill,remodeling home remodeling contractor cherry hill,home remodeling cherry hill,home restoration company cherry hillhome remodeling contractor cherry hill,remodeling home remodeling contractor cherry hill,hom'</li><li>'home remodeling baie-comeau,home remodeling contractor baie-comeau,home remodeling near me baie-comeau'</li></ul> | | Pet Insurance | <ul><li>'pet insurance for dogs,pet insurance comparison,pet insurance companies,best pet insurance for cats,best pet care insurance,dog medical insurance'</li></ul> | | Home Improvement Loans | <ul><li>'home improvement loan quotes,equity home improvement loan,best place for home improvement loan,apply for home improvement loan online,home improvement loan near me,home improvement financing no credit check'</li><li>'loans for home improvement,home improvement loan,personal loan for home improvement,best home improvement loans,loan for home improvement,best loan for home improvements'</li><li>'loan on my home,apply for home improvement loan,home improvement loan application,loan on my house,quick home improvement loans,quick home improvement loan'</li></ul> | | Medical Delivery | <ul><li>'medication delivery,medicine delivery at home,medication delivery san leandro,medicine home delivery,best medication delivery services for seniors in 2023,medicine delivery service near me'</li><li>"medicine delivery companies - find 2023's top search results,medicine delivery,medicine delivery denham springs,medicine delivery service near me,medication delivery denham springs,medicine home delivery"</li><li>'medicine delivery,prescription help for seniors - browse thousands of results,medicine delivery service near me,medicine delivery shelbyville,medication delivery,medicine delivery at home'</li></ul> | | Cell Phones | <ul><li>'最高の電話のお得情報,安い携帯電話,無料電話付きの携帯電話プラン,携帯電話を購入する,iphone pro max 16のお得情報,安い電話のお得情報'</li><li>'best phone deals,buy phones,cheap phone deals,cell phone plans with free phone,cheap phone,best cheap phone'</li><li>'best phone deals,cell phone plans with free phone,best cheap phone,mobile phones,cheap phone,best cheap phones'</li></ul> | | Office Cleaning Jobs | <ul><li>'local commercial cleaning companies,commercial and industrial cleaning,deep office cleaning services,corporate cleaning company,office cleaning sutton,local office cleaning companies'</li><li>'office cleaners boscombe,office cleaning services near me,commercial office cleaners,cleaning company uk,office cleaning service,{key6}'</li><li>'office cleaners,office cleaning services near me,cleaning company uk,office cleaning,office cleaning service,office cleaners birmingham'</li></ul> | | Cosmetics | <ul><li>'cheap luxury makeup,luxury makeup,cheap luxury makeup online,makeup clearance sale online'</li></ul> | | Weight Loss | <ul><li>'what to eat on a keto diet,information,how to lose weight,how to lose stomach fat quickly,food for diet,water diet'</li><li>'water intake per day,lose weight drinking water,what to eat on a keto diet,lose weight fast,water diet,water daily intake'</li><li>'food to lose weight,what to eat on a keto diet,food for diet,drink water,how to lose belly fat in a week,to lose belly fat'</li></ul> | | Credit Cards | <ul><li>'best buy credit card no interest,amazon gift card cashback,credit cards for 500 credit score no deposit,best welcome offer credit card,best 0 apr business credit cards,business card offers'</li><li>'今お金借りる,信用調査なしのクレジットカード,即時承認ローン,ブラックでも作れるカード,簡単承認クレジットカード,収入証明書なしお金借りる'</li><li>'apply credit card online singapore,guaranteed instant credit card approval in 02,guaranteed instant credit card approval in singapore,instant credit card approval,credit card application online instant approval,{key6}'</li></ul> | | Tires | <ul><li>'buy new tires in my area,tire discounters near me 2024 florida,tire discounters near me north port,buy new tires in my area in florida,buy new tires in my area in north port'</li><li>'4 tire deals online,4 tire deals online mchenry'</li><li>'buy new tires in my area,tire discounters near me 2024 new jersey,tire discounters near me east brunswick,buy new tires in my area in new jersey,buy new tires in my area in east brunswick'</li></ul> | | Plumbing Services | <ul><li>'apply now: plumbing jobs,klempner stellenangebote in essen,sanitär jobs deutschland in essen,freie installateurstellen,shk stellenangebote,rohrleitungsbauer stellenangebote'</li></ul> | | Alzheimer's Treatment | <ul><li>"alzheimer primeros sintomas,prueba para alzheimer,alzheimer's test for early detection metropolitana santiago de chile,alzheimer primeros sintomas - prueba para alzheimer chile,alzheimer's test for early detection metropolitana 2024,alzheimer primeros sin"</li><li>'test voor alzheimer - beginnende alzheimer herkennen,alzheimer test online,test voor alzheimer,test voor alzheimer - alzheimer test,test voor alzheimer - beginnende alzheimer herkennen 2024,test voor alzheimer - beginnende alzheimer herkennen papendrecht'</li><li>"alzheimer test 12 questions online south australia,alzheimer test 12 questions online - 10 early signs of alzheimer's au australia,treat migraine symptoms,test for alzheimers disease,alzheimer early signs - what are the 7 signs of alzheimer's,tests for de"</li></ul> | | Used Cars | <ul><li>'αναστασακησ hair clinic greece - αγορα αυτοκινητου greece,αναστασακησ hair clinic greece - αγορα αυτοκινητου,αναστασακησ hair clinic greece,αναστασακησ hair clinic greece larissa'</li><li>'acheter voiture occasion,acheter voiture occasion vevey,acheter voiture occasion {country},acheter voiture occasion pres de chez moi vevey'</li><li>'lista de precios autos - autos usados en oferta con hasta 50% de descuento en méxico.,lista de precios autos - ofertas coches seminuevos baratos méxico san nicolás de los garza nuevo león,ofertas coches seminuevos baratos - autos sin enganche y mensua'</li></ul> | | Kidney Cancer | <ul><li>'treatment of diabetic kidney disease,kidney disease treatment guidelines,chronic kidney disease stage 3 treatment,chronic kidney disease treatment medication,stage 5 kidney disease treatment,chronic kidney disease treatment'</li><li>'stage 2 kidney disease treatment,treatment for kidney disease,stage 5 kidney disease treatment,kidney disease treatment guidelines,kidney disease symptoms in females treatment,kidney disease treatment'</li><li>'stage 3b chronic kidney disease in illinois,stage 3b chronic kidney disease in naperville,stage 3b chronic kidney disease naperville,stage 3b chronic kidney disease illinois,new drug for kidney disease,{key6}'</li></ul> | | Chronic Kidney Disease | <ul><li>'chronic kidney disease solution en espanol,chronic kidney disease treatment en espanol,memphis chronic kidney disease solution en espanol,ulcerative colitis treatment video'</li></ul> | | Oven | <ul><li>'complete powder coating systems nassau,induction furnace nassau comilla,powder coat in comilla,powder coat near me,induction furnace nassau bangladesh,induction furnace nassau bd'</li></ul> | | Employee Retention Credit | <ul><li>'how to apply for employee retention credit,employee retention credit deadline 2023,tax incentives,retention policies,employee benefit,employee retention credit'</li><li>'employment retention credit,worker comp coverage,1 employee payroll,employee retention credit apply,leadership skills training,employee retention credits'</li></ul> | | FHA Loans | <ul><li>'fha loans with no credit scores and bad credit tauranga,no down payment first time home buyer coromandel forest park,no down payment first time home buyer swanson,fha loans with no credit scores and bad credit wanaka,fha loans with no credit score'</li><li>'no down payment first time home buyer,fha loan for a credit score under 580,first-time home loans with zero down payment,instant home loan approval online,zero down payment home loan,first home buyers'</li><li>'first home grant,loan to buy house,fha loan application - check your 2023 eligibility,cancel timeshare legally,very bad credit home loans,equirements for fha loan'</li></ul> | | Bedding | <ul><li>'pillow shams,king size bedding,king comforter sets,bedroom comforter sets,bedding sets,bedding sale'</li></ul> | | Mental Health | <ul><li>'check your mental health 2025,check your mental health 2025 br,mental wellness check'</li></ul> | | Senior Apartments | <ul><li>'rent senior apartment nearby sugar land ,low rent senior apartments ,55 up senior living communities ,senior housing for rent (1 and 2 bedrooms) - see prices,independent senior apartments with in home care services nearby ,c'</li><li>'1 bedroom apartments near me - apartments for rent for seniors 55 and older walsall united kingdom,1 bedroom apartments near me - apartments for rent for seniors 55 and older,1 bedroom apartments near me - apartments for rent for seniors 55 and older wals'</li><li>'senior apartments canada,senior housing options in grafton,age-restricted apartments,senior living amenities,retirement apartments in grafton,55 apartments in grafton'</li></ul> | | Backyard Fence | <ul><li>'same day fence installation,fence contractors,fence repair,fence companies spruce grove,backyard fence cost,fence professionals near me'</li></ul> | | Construction Jobs | <ul><li>'construction companies in kumasi - construction company in kumasi,construction companies in kumasi,no experience construction jobs kumasi'</li><li>'construction companies in san juan - construction company in san juan philippines,construction companies in san juan - construction company in san juan,construction company in san juan - construction companies near me san juan'</li><li>'construction jobs for freshers,construction jobs in india,construction jobs salary,private construction jobs,construction jobs in chandigarh,construction jobs in chandigarh-mohali-panchkula'</li></ul> | | Medication Delivery Jobs | <ul><li>'medical courier jobs gerlingen,arzneimittelzustellung 24/7 gerlingen,travel medical assistant jobs gerlingen,suche mitarbeiter heizung sanitär,vollzeitstelle in der medizinischen versorgung gerlingen,free job posting'</li><li>'lieferdienst medizintechnik,travel medical assistant jobs in monchengladbach,arzneimittelzustellung 24/7 in monchengladbach,medical assistant jobs near me,krankenhauszustellung notfall,medical courier jobs in monchengladbach'</li><li>'medical courier jobs gross wittensee,linkedin hiring gross wittensee,medical courier jobs gross wittensee,arzneimittelzustellung 24/7 gross wittensee,free job posting,vollzeitstelle in der medizinischen versorgung gross wittensee'</li></ul> | | Auto Insurance | <ul><li>'55 and older communities in,cheapest car insurance quote,insurance quotes edmonton,cheapest insurance,cheap internet plans for seniors,senior car insurance'</li><li>'best senior car insurance - senior auto insurance,car insurance quote,cheap car insurance quotes,drivers: cheap auto insurance quotes,cheap car insurance skipsea,best cheap car insurance'</li><li>'günstigste kfz-versicherungsangebote donaueschingen,günstige kfz versicherungen,auto versicherung preis,autoversicherungen vergleichen,{key5},{key6}'</li></ul> | | Life Insurance | <ul><li>'cheapest insurance kl,seguro de vida hipoteca,life insurance rates by age,$9.95 life insurance,whole life insurance quotes,simulador seguro coche'</li><li>'life insurance for over 65,seniors: $13/month life insurance quotes,seniors: $13/month life insurance quotes,seniors $13 month life insurance quotes,life insurance rates by age,life insurance quotes seniors'</li><li>'whole life insurance quotes cagayan de oro,compare life insurance quotes msr,$9.95 life insurance msr,select quote life insurance msr,life insurance quote term msr,whole life insurance quote msr'</li></ul> | | Forklift Jobs | <ul><li>'gabelstapler in meiner nähe, lagermitarbeiter staplerfahrer meppen - 127 jobs in meppen für staplerfahrer {country}, gabelstapler in meiner nähe {country}, gabelstapler in meiner nähe meppen, gabelstapler-jobs in vollzeit {country} {state}, jobs als g'</li><li>'praca na wózkach widłowych w pobliżu poznan,praca na wózkach widłowych w magazynie w poland - praca na wózkach widłowych w pobliżu,praca na wózkach widłowych w pobliżu - praca na wózkach widłowych w pobliżu poznan,praca na wózkach widłowyc'</li><li>'vorkheftruck bij mij in de buurt,werk zoeken magazijnmedewerker - heftruckbanen fulltime in belgium be,werk zoeken magazijnmedewerker - heftruckbanen fulltime in belgium mechelen be'</li></ul> | | Home Care Jobs | <ul><li>'hiring for home care [city},at home care for elderly,private home care for elderly near me,family home care services,home care services,homecare uyo'</li><li>'home elderly care services,elderly home care,hiring for home care benson,home care services,home instead care agency,private home care for elderly near me'</li><li>'finding in home care for elderly,home instead care agency,at home care for elderly,hiring for home care whyalla,{key5},{key6}'</li></ul> | | Clinical Trials | <ul><li>'participate in new studies for dental implants in east bank,participate in new studies for dental implants in east bank - how to qualify for an dental implants clinical trial near me,how to qualify for an dental implants clinical trial near me,participate'</li><li>'high paying depression clinical trials online,highest paid depression clinical trials near me cherry hill new jersey,get paid for depression clinical trials near me,best paid depression clinical trials near me 2024,lucrative depression and anxiety clinica'</li><li>'high paying hearing loss clinical trials near me,high paying hearing loss clinical trials in sacramento,best paid hearing loss clinical trials near me 2024,high paying hearing loss clinical trials near me us'</li></ul> | | Doors | <ul><li>'industrial roller doors,exterior front entry doors - exterior doors with glass 2024,36 x 80 fiberglass exterior door,front door replacement,doors in montevideo - exterior front entry doors (see doors in montevideo) montevideo department,exterior front ent'</li><li>'industrial roller doors,roller shutter doors,front door quotes near me - quote front door,front door quotes near me east kazakhstan,front door quotes near me (see doors in ust-kamenogorsk),front door replacement,doors in ust-kamenogorsk - exterior front e'</li><li>'exterior front entry doors - exterior front entry doors mapo-gu,doors in mapo-gu,exterior front entry doors,exterior front entry doors - exterior doors with glass seoul,doors in mapo-gu - exterior front entry doors 2024,doors in mapo-gu - exterior front e'</li></ul> | | Cars For Sale | <ul><li>'neuwagen zum halben preis thailand 2024,unverkaufte 2023 suvs zu verkaufen chonburi,bang lamung: unverkaufte beschlagnahmte autos werden in 2025 fast verschenkt - gebrauchtwagen preise,bang lamung: unverkaufte beschlagnahmte autos werden in 2025 fast vers'</li><li>'buy used cars,best cheapest used vehicles,used cars for sale,bank owned cars,used vehicles,buy a car'</li></ul> | | Paid Sperm Donation | <ul><li>'sperm clinic nearby,donate sperm in {country},sperm donor clinic in {country},sperm donor clinic in ingleside,sperm donor companies in {country},donate sperm in ingleside,sperm donor clinic,donate sperm,where to donate sperm near me'</li><li>'sperm clinic nearby,donate sperm in {country},sperm donor clinic in {country},sperm donor clinic in laramie,sperm donor companies in {country},donate sperm in laramie,sperm donor clinic,donate sperm,where to donate sperm near me'</li></ul> | | Senior Living | <ul><li>'senior housing near me,senior homes nearby coventry,senior homes nearby'</li><li>'sorrento: new two bed senior apartments are stunning! (click & see)'</li></ul> | | Pro Bono Attorneys | <ul><li>'criminal lawyer nj - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa 2024,criminal lawyer nj - pro bono abogados inmigracion en español usa país vasco,pro bono abogados inmigracion en espa'</li><li>'pro bono abogados inmigracion en español usa piera - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa españa,pro bono abogados inmigracion en español usa - pro bono lawyers for immigrants'</li><li>'criminal lawyer nj - pro bono abogados inmigracion en español usa 2024,criminal lawyer nj - pro bono abogados inmigracion en español usa,criminal lawyer nj - pro bono abogados inmigracion en español usa valencia,pro bono abogados inmigracion en españo'</li></ul> | | Cyber Security Jobs | <ul><li>'paid cybersecurity training - cyber security paid training remote,cyber security paid training online quebec canada,cyber security paid training online 2025'</li><li>'paid cyber security training,cybersecurity paid training,cyber security paid training online abbotsford'</li><li>'cyber security paid training online - cyber security paid training online harare,cyber security paid training online zimbabwe,paid cybersecurity training - cyber security paid training online,paid cyber security training - paid cyber security training 202'</li></ul> | | Dental Fillings | <ul><li>'dental crowns clinic near me in st. petersburgflorida,dental crowns - dfenatal crowns near you in st. petersburgflorida,walk in dentist near you in st. petersburgflorida,emr systems for small practice'</li></ul> | | Dog Food | <ul><li>'best dog food,best food brands for dogs,best dry dog food brands,dog food brands,avoid these dog food,best dog food brands'</li></ul> | | Retirement Planning | <ul><li>'retirement options'</li></ul> | | Urgent Care | <ul><li>'urgent care with ultrasound'</li></ul> | | Prostate Cancer | <ul><li>'first signs of prostate cancer,enlarged prostate,symptoms of prostate cancer in seniors,psa symptoms of prostate cancer,metastatic prostate cancer,prostate cancer symptom checker'</li><li>"what's new in prostate cancer treatment,prostate cancer treatment guidelines 2023,prostate cancer treatment latest news,prostate cancer,how long can you live with gleason 6 prostate cancer,stage 5 prostate cancer treatment"</li></ul> | | Medical | <ul><li>'donation center nearby uttar pradesh,donation center nearby,sperm clinic nearby india - haartransplantatie in nederland uttar pradesh in amroha,cash jobs india,cash jobs,donation center nearby - cash jobs'</li><li>'donation center nearby india,sperm clinic nearby india - haartransplantatie in nederland national capital territory of delhi delhi,donation center nearby,cash jobs 2024,cash jobs national capital territory of delhi,donation center nearby - cash jobs'</li><li>'donation center nearby india,donation center nearby punjab,cash jobs india,donation center nearby,cash jobs,sperm clinic nearby india - invisalign cerca de mí india ludhiana (see medical services donor in ludhiana) 2024'</li></ul> | | Macular Degeneration | <ul><li>'treatments for macular degeneration,cause macular degeneration,eye medication for macular degeneration,early signs of macular degeneration and treatment avacincaptad,macular degeneration care,treatments for dry eye disease'</li><li>'anti vegf for macular degeneration,dry eyes artificial tears,eylea wet amd,dry eyes artificial tears,cure for macular degeneration,treatments for macular degeneration'</li></ul> | | Certificate of Deposit | <ul><li>'rate cds'</li><li>'discover bank cd rates'</li></ul> | | Senior Cruises | <ul><li>'cruise holidays for singles over 50,cruises for single seniors over 60,cruises for single adults only,cruises for single adults only united kingdom,cruises for single adults only (see senior singles cruises in york),cruise holidays for singles over 50 - c'</li><li>'cruises for single adults only,singles over 50 cruises 2024,cheap singles cruises,cruise holidays for singles over 50,cruises for single adults only england,cruises for single seniors over 60 luton'</li></ul> | | Caregiver Services | <ul><li>'caregiving agencies nearby,private sitters for elderly near me,overnight caregiver near me,night caregiver jobs near milpitas,home care jobs near me,caregiver job'</li><li>'caregiving agencies nearby,caregiver service near london,caregivers needed nearby,apply for caregiver jobs,{key5},{key6}'</li><li>'caregiving agencies nearby,private sitters for elderly near me,caregivers needed nearby,night caregivers wanted,caregiver job,private overnight caregiver jobs near me'</li></ul> | | Skin Care | <ul><li>'skin tightening facial,skin tightening treatment,skin tightening procedures face,face wrinkles treatment,top rated wrinkle creams anti aging'</li><li>'botox for face wrinkles,botox treatment for face near me,face slimming botox,botox treatment for face,botox facial near me'</li><li>'skin tightening treatment,skin tightening facial,face wrinkles treatment,skin tightening,face skin tightening'</li></ul> | | Breast Cancer | <ul><li>'signs of breast cancer - early breast cancer signs and symptoms usa,easy signs,early breast cancer signs and symptoms usa - early breast cancer signs and symptoms usa 2024,breast cancer symtoms in sarnia,breast cancer symtoms near me,early breast cancer s'</li><li>'breast cancer first symptoms,early breast cancer signs and symptoms usa - signs of breast cancer us,early breast cancer signs and symptoms usa - breast cancer signs in females over 50,early breast cancer signs and symptoms usa ,early breast cancer signs a'</li><li>'breast cancer first symptoms - early breast cancer symptoms,early breast cancer symptoms,breast cancer beginning'</li></ul> | | Electrician Jobs | <ul><li>'jobs als elektrikerhelfer,elektrikerfirmen,elektriker-assistent-jobs,jobmöglichkeiten als elektriker,elektriker jobs in oppenweiler,elektriker-jobs in meiner nähe'</li><li>'elektriker-jobs in meiner nähe,jobmöglichkeiten als elektriker,elektriker jobs in paderborn,elektrikerfirmen,jobs als elektrikerhelfer,elektriker-assistent-jobs'</li><li>'jobs als elektrikerhelfer,jobmöglichkeiten als elektriker,elektriker-assistent-jobs,elektriker jobs in unterwellenborn,elektriker-jobs in meiner nähe,elektrikerfirmen'</li></ul> | | Migraine Treatment | <ul><li>'best treatment for chronic migraines near me sandy,migrane relief sandy,best migraine treatment near me sandy,headache and migraine relief sandy'</li><li>'migraine treatment study trial in my area,best migraine treatment nearby,headache specialist in my area,migraine treatment study trial mill valley,migraine relief nearby mill valley,headache and migraine relief mill valley,buy migraine relief'</li><li>'buy migraine relief largo,best treatment for chronic migraines near me largo,headache specialist in my area largo,best migraine treatment near me largo,migraine relief online largo'</li></ul> | | Digital Marketing Jobs | <ul><li>'marketing digital trabalhar online,marketing digital trabalhar online inhumas,marketing digital trabalhar online brasil,pós marketing digital - clique aqui - inhumas goiás,pós marketing digital - clique aqui - inhumas,analise de dados marketing - marke'</li></ul> | | Services | <ul><li>'rent a car on monthly basis,promo croisiere,los mejores suv de lujo no vendidos de 2022-2024 ya casi a la venta,renta de carros cerca de mi, precios de los suv no vendidos en 2024,topcar'</li><li>'precios de los suv no vendidos en 2024,lloguer vehicles,las alquiler de coches, los suv usados \u200b\u200bmás baratos,los mejores suv de lujo no vendidos de 2022-2024 ya casi a la venta,rav4 deals'</li></ul> | | Belly Fat Removal | <ul><li>'home remedies to reduce belly fat without exercise,best stomach reduction exercises 2023,belly fat reduction,lose stomach fat,foods that burn belly fat,lose belly fat women'</li><li>'lose belly fat exercise,belly fat reduction,lose belly fat fast,home remedies to reduce belly fat without exercise,exercises to lose belly fat at home,foods that burn belly fat'</li><li>'lose stomach fat,lose belly fat fast,exercises to lose belly fat at home,best stomach reduction exercises 2023,belly fat reduction,home remedies to reduce belly fat without exercise'</li></ul> | | Security Jobs | <ul><li>'सुरक्षा गार्ड की नौकरियां indore,सुरक्षा गार्ड की नौकरियां indore - सुरक्षा गार्ड की नौकरियां indore madhya pradesh,नि'</li><li>'praca w ochronie w pobliżu mnie 2025 poland - praca w ochronie w pobliżu mnie 2025,firma ochroniarska lodz poland - praca w ochronie w pobliżu mnie 2025 poland,praca w ochronie w pobliżu mnie 2025 poland,firma ochroniarska lodz poland,praca w ochronie'</li><li>'security baustellenbewachung lohne - security baustellenbewachung deutschland niedersachsen,lohne it security dienstleistung - security baustellenbewachung lohne,security baustellenbewachung deutschland - security mitarbeiter gesucht niedersachsen'</li></ul> | | Malpractice Attorneys | <ul><li>'medical malpractice lawyer in'</li></ul> | | Ultherapy | <ul><li>'ultherapy best price,cost of thread face lift,ultherapy near me,ultherapy specials near me,ultherapy for 70 year old,ultherapy treatment areas'</li><li>'lifting sin cirugia ultherapy,ultherapy precio,ultherapy cerca de mí,quitar arrugas con laser precio,ultherapy para personas de 70 años,mejor precio para ultherapy'</li></ul> | | Wrinkle Treatment | <ul><li>'botox injections,best wrinkle treatment,botox injections for forehead wrinkles in houston,dermal fillers cost,wrinkle treatment near me,anti-aging treatments'</li></ul> | | Lawn Mowers | <ul><li>'top 10 zero turn mowers'</li></ul> | | Depression Treatment | <ul><li>'drugs to treat severe depression,depression treatment center in sacramento,drugs to treat anxiety and depression,depression and anxiety medication,depression anxiety stress test,take a depression test right now'</li><li>'anxiety and depression treatment memphis,treatment centers memphis,depression and therapy memphis,stress anxiety depression,online depression and anxiety medication,betterhelp as a therapist memphis'</li><li>'take depression test,depression test online,mental health depression test,take depression test india mumbai,test to see if you have depression,test to know if you have depression'</li></ul> | | Food Packing Jobs | <ul><li>"emballage sur voie d'emballage tremblay-en-france,emballage pour livraison de produits surgelés,entreprise d'emballage professionnelle tremblay-en-france,emballage alimentaire de livraison,emballage alimentaire refermable,entreprises d'emballage alimenta"</li><li>"emballage alimentaire de livraison,emballage alimentaire refermable,entreprise d'emballage professionnelle pradines,entreprises d'emballage alimentaire sous contrat pradines,entreprise d'emballage à proximité,emballage sur voie d'emballage pradines"</li><li>'packing company near me,resealable food packaging,delivery food packaging,packaging for frozen food delivery,contract food packaging companies singapore,packlane packaging singapore'</li></ul> | | Addiction Counseling | <ul><li>'counseling degree programs ,counseling degree,counseling degree programs,recovery treatment center,addiction treatment,therapy counseling,recovery treatment center,rehab treatment centers nearby,counseling courses'</li></ul> | | Home Decor | <ul><li>'home decor interior design,elegant luxury home decor - home decor interior design 2024,elegant luxury home decor (see home decor in rawalpindi)'</li><li>'home decor interior design noida,home decor in noida,home decor interior design uttar pradesh'</li></ul> | | Leadership | <ul><li>'leadership training programs'</li><li>'leadership programs in silver spring,leadership training near me silver spring,leadership courses in silver spring,leadership and management courses maryland,leadership certification in maryland,leadership training courses maryland'</li></ul> | | HVAC Services | <ul><li>'best hvac company near me,local heating and air contractors,hvac companies near me'</li><li>'best hvac service near me borger,i need hvac service near me borger,heating and cooling near me borger'</li><li>'hvac companies near me ,hvac near me ,hvac service near me ,best heat and air service near me ,best heating and ac companies near me'</li></ul> | | Bipolar Treatment | <ul><li>'participate in studies for new bipolar medication in edmond,participate in studies for new bipolar medication in edmond - how to qualify for a new bipolar medication clinical trial near me,how to qualify for a new bipolar medication clinical trial near me'</li><li>'participate in studies for new bipolar medication in livermore,participate in studies for new bipolar medication in livermore - how to qualify for a new bipolar medication clinical trial near me,how to qualify for a new bipolar medication clinical trial n'</li><li>'bipolar disorder treatment centers,best bipolar medication,bipolar disorder specialist,bipolar depression treatment,inpatient bipolar treatment,vonore bipolar treatment centers'</li></ul> | | Mental Health Test | <ul><li>'mental health test free,mental health check,mental illness test for teenagers india,mental illness test for teenagers in,mental illness test for teenagers lucknow,discover your mental wellness in - discover your mental wellness 2025,mental health test fre'</li></ul> | | Dental Implant Grants | <ul><li>'apply for dental implant grant in north carolina united states - full set of teeth implants cost 2025,apply for free dental implants (see dental implant grants in decatur),apply for dental implant grant illinois - dental implant grants near me decatur,hos'</li><li>'ayuda para implantes dentales para personas mayores 2024,clínicas dentales gratis cerca de mi,solicite implantes dentales gratuitos puebla de zaragoza méxico'</li></ul> | | Phone Service | <ul><li>'cellphone deals,cellphone plans,fold 5 samsung phone,cheap cellphone plans,cellphone or cell phone,prepaid cellphone'</li><li>'free iphone phone deals,get a new phone for free,best phone deals,iphone 13 pro for cheap,free iphone government phone,best new phone deals'</li></ul> | | Pest Control | <ul><li>'termite and pest control near me,pleasanton pest control companies,local pest control near me,termite and pest control near me,pest control near me,pest control near me'</li><li>'termite treatment brooklyn,termite inspection and pest control,local pest control near me,residential pest control services near me,local pest control service,rodent removal companies near me'</li><li>'nearby exterminator company valley stream,pest control companies near me,pest control products,pest control home,pest control near me valley stream,pest control prevention remedies'</li></ul> | | Meal Delivery Services | <ul><li>'home meal delivery service healthy,cheapest meal delivery service,weight loss diet meal kit delivery,fresh meals delivered to your door,weight loss meal delivery,best weight loss meal plan delivery'</li></ul> | | Cleaning Jobs | <ul><li>'locuri de muncă în românia suceava - angajari suceava,angajari suceava,locuri de muncă în românia suceava - angajari suceava ro,locuri de muncă în românia suceava - angajari suceava 2024,angajari suceava - locuri de muncă în românia suceava,lo'</li><li>'servicios de limpieza de empresas cercanos,servicios de limpieza de oficinas cerca de mí,servicios de limpieza de oficinas cerca de mí madrid,servicios de limpieza de oficinas cerca de mí madrid madrid 2024 españa - servicios de limpieza de oficinas c'</li></ul> | | Medical Alert Systems | <ul><li>'medical alert systems for seniors with fall detection,personal monitor,medicare benefits for seniors,senior alert systems,medical alert systems for seniors with fall detection in united states,health insurance covered medical alert systems for s'</li><li>'best sleep and stress tracker,mattress sleep tracker,sleep tracker samsung watch,sleep tracking mattress,most accurate wearable sleep tracker,sleep monitoring system'</li><li>'medical alert system for seniors,alert devices,medical alert system for senior with fall detection near me,medical alert monitoring for seniors,medicare benefits for seniors,senior alert systems'</li></ul> | | Coats | <ul><li>"heywood: women's coat sale: prices that will surprise you!"</li></ul> | | Eyelid Surgery | <ul><li>'en español eyelid surgery in cuautitlán izcalli,en español eyelid lift in cuautitlán izcalli,méxico eye bag surgery en español in cuautitlán izcalli,eyebag removal near me en español,eyelid tightening near m'</li></ul> | | Hotels | <ul><li>'near hotel,near hotel siddhapur'</li></ul> | | Paid Clinical Trial | <ul><li>'participate in new studies for rheumatoid arthritis in duluth,participate in new studies for rheumatoid arthritis in duluth - how to qualify for a rheumatoid arthritis clinical trial near me,how to qualify for a rheumatoid arthritis clinical trial near me'</li></ul> | | Massage Services | <ul><li>'sentinal one,full body massage services,relaxation massage spa,أقرب مركز مساج,مركز مساج,massage spa nearby'</li><li>'massage training - working as a massage therapist,massages nearby,registered massage therapist sarnia,home service massage near me,special asian girl massage vista,massage for migraine near me'</li><li>'massage complet du corps à proximité,massothérapie à proximité,spa de massage à proximité,services de massages,body massage near me,massage du dos à proximité'</li></ul> | | Veterans Assistance | <ul><li>'va disability claim,disabled veteran benefits,benefits for the employees,disability benefits,va disability benefits,abogado inmigracion consulta'</li></ul> | | Cloud Storage | <ul><li>'best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox'</li><li>'programa facturación online,cloud storage managed services in united states uruguay 2024 - programa facturación online,cloud storage in ,storage data protection uruguay,cloud storage near me,cloud based storage services'</li></ul> | | Window Replacement | <ul><li>'indio window replacement,window replacement companies in the indio area,window and door replacement indio'</li><li>'window replacement,window replacement companies in my area,residential window replacement near me,window replacement in bell gardens'</li><li>'city} window replacement,window replacement companies in the bemidji area,window and door replacement bemidji,window replacement companies in my area'</li></ul> | | Electric Cars | <ul><li>'small electric car for seniors in india,small electric car for seniors in dehradun,small electric car for seniors in uttarakhand,electric cars for seniors in dehradun,small electric car for seniors,small electric car for seniors india'</li><li>'2012 honda civic sdn lx,2012 honda civic sdn lx usa,2012 honda civic sdn lx in st james,2012 honda civic sdn lx st james,2012 honda civic sdn lx in new york,2012 honda civic sdn lx new york'</li><li>'small electric car for seniors - electric cars for seniors in dehradun,electric cars for seniors in dehradun,small electric car,electric vehicle cost,electric cars for seniors in 2024 (see electric cars for seniors in dehradun),electric cars for seniors i'</li></ul> | | Wart Treatment | <ul><li>'immunotherapy for warts,common wart treatment,genital wart removal ga,plantar wart removal,wart freezing treatment,natural remedies for warts'</li></ul> | | Energy Drinks | <ul><li>'boost energy drink,natural energy drinks for intercourse,best homemade energy booster drink,electrolyte drink,natural healthy energy drink for instant'</li><li>'natural energy drinks for intercourse,natural energy drink for stamina increase,natural energy drinks for bed,boost energy drink,electrolyte drink'</li></ul> | | DIY Christmas Gifts | <ul><li>'overactive bladder leakage,how do you get overactive bladder,what is the best medicine for overactive bladder,overactive bladder treatment,overactive bladder medication,overactive bladder symptoms'</li></ul> | | Welding Jobs | <ul><li>'welding companies durban - welding companies kwazulu-natal,welding companies durban za,welding companies durban - welding jobs nearby,job apply,welding companies kwazulu-natal (see welding jobs in durban),welding companies durban kwazulu-natal'</li><li>'welding jobs nearby,welding companies adelaide 2024,welding jobs hiring nearby'</li><li>'job apply,welding and fabrication company 2024,welding and fabrication company - welding companies abuja,welding and fabrication company - welding companies abuja nigeria,welding and fabrication company - welding companies abuja ng,welding companies abuja'</li></ul> | | Eye Bag Surgery | <ul><li>'under eye bags home treatments,how to remove eyebags easily,how to remove eyebags easily - under eye bags treatment 2024,how to remove eyebags easily - under eye bags treatment nashville,under eye bags home treatments united states,under eye bags home tre'</li></ul> | | UTI Treatment | <ul><li>'uti treatment over the counter,uti treatments,best antibiotic for uti in females,natural cure treatment,uti infection treatment,treatment for uti'</li></ul> | | Teeth Straightening | <ul><li>'invisalign moins cher,invisalign prix a ville,alignement dentaire prix,appareil dentaire adulte invisible,appareil dentaire adulte prix,alignement dentaire invisible'</li></ul> | | Bipolar Disorder | <ul><li>'take depression test mumbai,take bipolar test mumbai,take bipolar test 2024 in india,take depression test maharashtra,bipolar symptoms quiz maharashtra,take anxiety and bipolar test maharashtra,7 signs of bipolar disorder 2024 in - bipolar self test mahar'</li><li>'test de ansiedad y depresion santo domingo,prueba de depresión y ansiedad,test de ansiedad y depresion'</li><li>'masennustesti pirkanmaa,kaksisuuntainen masennustesti - masennustesti,masennustesti finland,masennustesti pirkkala,masennustesti - kaksisuuntainen masennustesti,masennustesti fi'</li></ul> | | Hair Transplant | <ul><li>'ideal laser hair removal prices,hair transplant,hair clinic nearby,hair clinic kot ghulam muhammad,hair transplant cost,hair transplant cost kot ghulam muhammad'</li><li>'prezzo trapianto capelli in italia,trapianto capelli prezzo a naples,trapianto di capelli in italia prezzo,trapianto capelli italia dove farlo,trapianto capelli italia costi,trapianti capelli turchia'</li><li>'best hair transplant in turkey,hair transplant turkey cost,turkey hair transplant center,best hair transplantpennsauken'</li></ul> | | Painting Jobs | <ul><li>'painting contractors - interior painter in my area 2024 us pune in,painting contractors - interior painter in my area 2024 us 2024,trabajo de pintura,interior painter in my area 2024 in,i am a painter looking for work,interior painter in my area 2024 - pa'</li></ul> | | Car Carpets | <ul><li>'car carpets in astana,all weather car floor mats astana,all weather car floor mats kazakhstan,all weather car floor mats astana,car carpets near me,all weather car floor mats 2024'</li><li>'rubber truck floor mats 2024 - rubber truck floor mats (see car carpets in aktobe) kz,rubber truck floor mats 2024 - rubber truck floor mats (see car carpets in aktobe),best car floor mats,all weather car floor mats kazakhstan,all weather car floor mats,r'</li></ul> | | Botox | <ul><li>'botox near me milton,find best injection prices milton,prices for botox near me milton,best botox doctor for seniors near me milton'</li><li>'prices for botox near me,botulinum botox near me,juvederm voluma specials near me in horse cave'</li><li>'topical botox in stony point,botox for fine lines forehead near me,botox in neck lines,botox for feet sweating'</li></ul> | | Skin Rejuvenation | <ul><li>'laser skin rejuvenation center near me india,skin rejuvenation near me delhi,laser skin rejuvenation center near me delhi,skin rejuvenation in delhi 2024 - skin rejuvenation in delhi national capital territory of delhi,skin rejuvenation in delhi - best la'</li></ul> | | Debt Relief | <ul><li>'pay off debt,i want to pay off my debt,apply loan,loans to pay off debt,pay off credit card debt,help paying off debt'</li><li>'free government scheme to clear debt in my city,free government scheme to clear debt,free government scheme to clear debt new jersey,free government scheme to clear debt in martinsville,free government scheme to clear debt in new jersey,free government sc'</li></ul> | | Psoriasis Treatment | <ul><li>'symptoms of atopic dermatitis in toddlers,top-rated eczema creams for atopic dermatitis,pediatric dermatologist specializing in atopic dermatitis,advanced atopic dermatitis treatments 2024,latest biologic treatments for atopic dermatitis,natural'</li></ul> | | Cooktops | <ul><li>'gas cooktops kareli,black friday cooktop clearance sale,induction cooktops,smeg induction cooktop,appliances refrigerators,induction cooktop'</li><li>'30 inch gas range,gas cooktop,small stoves,professional induction cooktop,portable cooktops,gas cooktops'</li><li>'30 inch electric cooktop,gas cooktops faisalabad,electric cooktop,electric stoves,gas cooktops,{key6}'</li></ul> | | Mobility Scooters | <ul><li>'plataforma salvaescaleras precio,light weight folding mobility scooter,electric mobility scooters,disability,scooter electrico adulto,grants for people with disabilities raleigh'</li></ul> | | Metal Roofs | <ul><li>'roofing companies near me,local roofing companies in my waldorf,best roofing company nearby'</li></ul> | | Contractors | <ul><li>'local roofing companies in my fort lee,paving companies near me,couvreur rénovation,best roofing companies nearby,roofing companies near me,junk removal near me,junk removal near me'</li></ul> | | Home Loans | <ul><li>'chase bank usda loan,usda home loan lenders,quicken loans usda,usda loan realtor near me elizabethtown,apply for usda loan online,usda home loan pre approval'</li><li>'fastest online home loan approval,guaranteed online home loan approval,home improvement loans for poor credit,personal loan for home improvement,no credit check home improvement loans'</li></ul> | | Hearing Test | <ul><li>'hearing test,take hearing test,free hearing test,free hearing test on phone,free hearing loss phone,how to do hearing test at home'</li></ul> | | Delivery Jobs | <ul><li>'best food delivery service for my restaurant usa,car delivery service in the usa,usa best food delivery service for my restaurant,amazon delivery jobs,own car delivery jobs,delivery jobs'</li></ul> | | Window Blinds | <ul><li>'window shades and blinds,window shades and blinds молдова,window shades and blinds - blackout curtains,window shades and blinds md,window shades and blinds donduşeni,window shades and blinds дондюшанский район'</li></ul> | | Dentists | <ul><li>'landenberg: best dentist for seniors near you. see the list!'</li></ul> | | Depression Test | <ul><li>'take depression test - free depression test 2025,take depression test,take depression test - test to know if you have depression,start a depression test now 2025,start a depression test now,take depression test - depression test through pictures'</li><li>'addiction degrees online,take depression test,take depression quiz'</li></ul> | | Bank Owned Cars | <ul><li>'coches sin entrada y sin cuota final andalucía - compra de coche segunda mano españa,coches en oferta - coches sin entrada y sin cuota final ,coches en oferta - coches sin entrada y sin cuota final andalucía españa,vehiculo ocasion ,coches sin ent'</li></ul> | | Rehab Treatment Center | <ul><li>'therapy center nearby,therapy centers nearby,therapy centers supply,residental treatment centers near me in supply,residental treatment centers near me'</li></ul> | | Depression | <ul><li>'depression test for teens,depression test,take depression test,mental health self-assessment test,start a depression test now 2025'</li><li>'help with depression princeton,anxiety and depression treatment princeton,depression treatment near me,major depression treatment,treatment resistant depression'</li></ul> | | Mortgage Loans | <ul><li>'no down payment government home loans in navajo,no down payment government home loans navajo,no down payment government home loans usa,no down payment government home loans new mexico,no down payment government home loans in new mexico,no down payment gov'</li><li>'no down payment government home loans orlando,no down payment government home loans,no down payment government home loans in florida,no down payment government home loans in orlando,no down payment government home loans usa,no down payment government home'</li></ul> | | Military School | <ul><li>'online schools for military,military friendly online college pennsylvania,military friendly online college bryn mawr,pennsylvania online schools for military'</li></ul> | | Seizure Medication | <ul><li>'seizure surgery,seizure control methods,seizure treatment centers,seizure management,seizure prevention techniques,vagus nerve stimulation for seizures'</li></ul> | | Buy Now Pay Later | <ul><li>'fabric sofas,reclining sofas,affordable sofas,pay later,mid-century modern sofas,bohemian sofas'</li></ul> | | Planning Tools | <ul><li>'豊橋市 家族葬,豊橋市 家族葬式場,家族葬 豊橋市,家族葬サービス 豊橋市,豊橋市での家族葬'</li></ul> | | Reverse Mortgage | <ul><li>'viviendas a credito,prestamos sin checar buro de credito,best reverse mortgage lenders near me,best jumbo reverse mortgage lenders,best proprietary reverse mortgage lenders,home line of credit bad credit'</li><li>'best reverse mortgage lenders in rockville centre,best jumbo reverse mortgage lenders,fence companies in usa,best proprietary reverse mortgage lenders,best reverse mortgage lenders near me,como financiar una casa'</li></ul> | | Home Care | <ul><li>'caregiving jobs hiring nearby,home care near me,care homes near me - home care in abuja,home care in abuja fct,care homes near me fct,care homes near me (see home care in abuja)'</li></ul> | | Detox Diets | <ul><li>'good diet foods,lose 20 pounds in 1 week detox diet,detox diet meal plan,keto acv gummy,to lose belly fat,juice diet'</li><li>'juice cleanse detox,3 day fast weight loss,detox diet recipes,3 day cleanse,detox local san jose,3 day cleanse to lose belly fat'</li></ul> | | Diabetes | <ul><li>'medicaid diabetes prevention program,medicare diabetes program,blood sugar treatments on health insurance,diabetes diagnosis and treatment,continuous glucose monitoring devices,best medicine for diabetes'</li><li>'best blood sugar treatments near me,medicare for seniors,healthy food meal plans for diabetes,health insurance quotes with 100% diabetes treatment,diabetes diet for seniors with full medicare coverage,medicare competitive bidding diabetic supp'</li><li>'golo for life - best diet for diabetes 2,medicare changes for insulin cap,diabetes medications medicare,best blood sugar treatments near me,continuous glucose monitoring devices,diabetic supplies'</li></ul> | | Dating Apps | <ul><li>'en español matchmaking near me,páginas de citas totalmente gratuitas en español in campohermoso,páginas de citas totalmente gratuitas en español in campohermoso,español páginas de citas totalmente gratuitas near me in campohermoso'</li></ul> | | Rhinoplasty | <ul><li>'rhinoplasty near me,rhinoplasty in esquipulas,rhinoplasty cost en español,nose rhinoplasty nearby,orthopedic knee specialist near me'</li></ul> | | Car Cleaning Kit | <ul><li>"spray foam - spray foam in my city,spray foam lake charles,what's the best car cleaning products - spray foam,car cleaning near me,what's the best car cleaning products lake charles,what's the best car cleaning products - spray foam united states"</li></ul> | | Online Banking | <ul><li>'banque en ligne 150 euros offert,ouvrir un compte bancaire en ligne sans justificatif,ouvrir un compte bancaire en ligne en 5 minutes,banque en ligne 150 euros offert à beauvais,ouverture de compte avec argent offert,ouvrir compte bancaire'</li><li>'open bank account online free no deposit,open bank account online instantly with no deposit,open a bank account online no minimum balance,banks that don t require minimum deposit,open bank account online no deposit,open bank account online zero'</li></ul> | | European Cruises | <ul><li>'all inclusive iceland and greenland,iceland and greenland cruises for seniors - all inclusive iceland and greenland texas,iceland and greenland cruises for seniors texas'</li></ul> | | Digital Marketing Courses | <ul><li>'digital marketing degree course,digital marketing,digital marketing degree,digital marketing business,marketing digital y redes sociales,best way advertise a small business'</li><li>'curso de marketing digital torremolinos,aprender marketing digital,rock content,estudiar marketing digital torremolinos,platzi curso de marketing digital para negocios'</li></ul> | | Gutter Cleaning | <ul><li>'gutter cleaning near me pointe-a-la-croix,gutter and drain cleaning,local gutter cleaning companies,gutter cleaners nearby,gutter contractors in my area'</li><li>'local gutter cleaning companies,gutter cleaning near me pottstown,gutter and drain cleaning'</li></ul> | | Bad Credit Credit Cards | <ul><li>'fast cash direct deposit with bad credit in minutes,need a loan to pay all my debts,instant approval credit cards,same day loan approval regardless of credit,instant approval credit card,easy approval credit card'</li><li>'7k limit instant credit card online,credit card for bad credit,instant credit card approval,get cashback credit card,guaranteed credit card approval,bad credit credit card guaranteed approval'</li></ul> | | Jobs in USA | <ul><li>'empresas para reclutar personal - working visas in the usa ,empresas para reclutar personal - working visas in the usa,working visas in the usa - buscando un trabajo,plataformas de reclutamiento de personal,working visas in the usa - empresas para recluta'</li><li>'empresas para reclutar personal - contratacion de personal tuxtla gtz,empresas para reclutar personal - working visas in the usa,empresas para reclutar personal - working visas in the usa tuxtla gtz,trabajar en - work in usa 2024 california estados unidos'</li></ul> | | Roofing Services | <ul><li>'español roofer insulation company chamba cornelius,español roofer insulation company cornelius hiring,companies that hire insulation workers in cornelius,cornelius insulation local recruitment roofers,headache specialist in my area'</li></ul> | | Cruise Ship Jobs | <ul><li>'cruise ship jobs,jobs vacancies,apply for cruise ship jobs online for free - all cruise jobs,cruise ship jobs in bengaluru 2024,cruise ship jobs in bengaluru,apply for cruise ship jobs online for free karnataka'</li><li>'je recherche emploi - croisière en tout inclus ca,je recherche emploi sherbrooke,je recherche emploi québec,je recherche emploi - croisière en tout inclus,cruise ship jobs canada,je recherche emploi 2024'</li><li>'cruise ship jobs,jobs vacancies,work abroad on cruise ships new delhi,work abroad on cruise ships national capital territory of delhi,work abroad on cruise ships 2024,work abroad on cruise ships - all cruise jobs'</li></ul> | | Houses For Sale | <ul><li>'townhouses for sale,condos and townhomes for sale near me,new construction homes new jersey,townhomes for sale,small town homes for sale,townhouses to buy near me'</li></ul> | | Solar Battery | <ul><li>'solar power system with battery (see solar batteries in multan),solar power system with battery pk,solar battery cost multan,battery technology,solar battery cost pakistan - solar inverter and battery,solar power system with battery - solar power system w'</li></ul> | | Keto Diet | <ul><li>'best keto meal delivery,keto diet,keto diet for beginner,best keto diet for crazy weight loss,perfect keto diet for crazy weight loss,keto diet plan'</li><li>'7-day keto meal plan for beginners,keto diet meal plans,diet meal plans,keto meal plan,free keto diet plans,keto diet meal plan'</li></ul> | | Land For Sale | <ul><li>'land with abandoned house for sale near bathurst,land with abandoned house for sale near au'</li></ul> | | Dental Care | <ul><li>'low cost all on 4 dental implants,dentists nearby,find dentist near me,dentists for seniors near me scarborough,low cost dentist,denture teeth'</li><li>'dental care for seniors in toronto ontario - dental care for seniors in toronto,dental care for seniors in toronto,free dental care for seniors near me - free dental implants for seniors,free dental care for seniors near me - free dental implants for seni'</li></ul> | | Deck Installation | <ul><li>'deck renovation near me west palm beach (see deck installations in west palm beach),deck renovation near me west palm beach,deck renovation near me west palm beach florida'</li></ul> | | GMC Sierra | <ul><li>'2020 gmc sierra 1500 denali 4x4,gmc sierra denali 2500 price,gmc sierra 2500 for sale,gmc sierra 1500 for sale,2020 gmc sierra 1500 denali price,gmc sierra clearance sale'</li></ul> | | Homecare Jobs | <ul><li>'caregiver services near me nsw,home caretaker for elderly wolverhampton,home care agencies hiring,𝐃𝐀𝐘𝐂𝐀𝐑𝐄 𝐂𝐀𝐑𝐄𝐆�'</li><li>'home care agencies hiring,caretaker homes near me,caregivers needed nearby in phl,caregiver services near me wa,senior care home,in home care near me hiring'</li></ul> | | Senior Housing | <ul><li>'retirement homes in bewdley,senior housing costs bewdley,senior housing in bewdley,senior living apartments in bewdley,residential care homes in bewdley,{key6}'</li><li>'residential care homes in grampound,senior living apartments in grampound,senior housing costs grampound,senior housing in grampound,retirement homes in grampound,{key6}'</li></ul> | | Lead Generation | <ul><li>'outbound lead generation services'</li></ul> | | Call Center Jobs | <ul><li>'call center jobs without experience - work from home call center hiring 2024 bedford,call center jobs from home united kingdom - work from home call center hiring 2024,work from home call center jobs united kingdom - call center jobs from home united king'</li></ul> | | Online Doctor | <ul><li>'management services,pipa brasil,consultório para ginecologista,medico especialista em escoliose,medical practice ehr,especialista auditivo'</li></ul> | | Relationship Advice | <ul><li>'masaj salonu türkiye istanbul,aile ve ilişki terapisti türkiye,aile ve ilişki terapisti istanbul türkiye,masaj salonu türkiye - aile ve ilişki terapisti türkiye 2024,masaj salonu türkiye - aile ve ilişki terapisti türkiye tr,aile ve ilişki ter'</li></ul> | | Cleaning Services | <ul><li>'commercial and industrial cleaning,local office cleaning companies,office cleaning swindon,local commercial cleaning companies,deep office cleaning services,corporate cleaning company'</li><li>'office cleaning camden,commercial and industrial cleaning,local office cleaning companies,corporate cleaning company,local commercial cleaning companies,deep office cleaning services'</li><li>'deep office cleaning services,corporate cleaning company,local office cleaning companies,local commercial cleaning companies,commercial and industrial cleaning,office cleaning birmingham'</li></ul> | | Snow Removal Services | <ul><li>'snow removal companies in whiting,snow removal contractor in whiting,commercial snow removal contractors in whiting,snow removal quotes whiting,residential snow removal contractors in whiting,whiting snow removal services,snow removal in whiting'</li><li>'snow removal kansas city, snow removal near me, snow removal services, snow removal equipment'</li><li>'snow removal companies in new orleans,snow removal contractor in new orleans,commercial snow removal contractors in new orleans,snow removal quotes new orleans,residential snow removal contractors in new orleans,new orleans snow removal services,snow remo'</li></ul> | | Email Marketing Jobs | <ul><li>'email marketing jobs from home - email marketing jobs from home colombo,email marketing jobs from home sri lanka - email marketing jobs online,email marketing jobs from home sri lanka - email marketing jobs from home sri lanka 2024,email marketing jobs fr'</li><li>'email marketing jobs from home lebanon - email marketing jobs online 2024,email marketing jobs from home lebanon,email marketing jobs from home lebanon lb,email marketing jobs from home lebanon 2024,email marketing jobs from home lebanon - email marketing'</li></ul> | | Anime Figurines | <ul><li>'manila philippines,top tourist attractions in manila,things to do in manila,explore manila’s culture and history,travel guide for manila philippines'</li></ul> | | Laser Hair Removal | <ul><li>'laser hair removal for private parts in indiana - laser hair removal near me indianapolis us united states,laser hair removal near me indianapolis,laser hair removal for private parts in indiana - laser hair removal near me indianapolis us (see laser hair'</li></ul> | | Medicare | <ul><li>'medicare tablet,free tablet from government 2023,free tablet for medicare recipients,free tablet through medicare,medicare gov tablet,medicare tablet for seniors'</li></ul> | | Electric Bikes | <ul><li>'best ebike for senior citizens,best electric bike for seniors uk,best electric bike for seniors uk (see e-bikes in karachi),e-bikes in san jose,e-bikes in karachi,ebike for sale near me,kona electric near me 2024'</li></ul> | | Miscellaneous Jobs | <ul><li>'español slab pipe leak plumbing service in seattle,español slab leak plumbing service in seattle,español slab pipe leak plumber in seattle,24 7 slab leak plumbing near me,español slab leak broken pipe repair service in seattle'</li></ul> | | Waterproofing Services | <ul><li>'屋根修理業者 見積もり kaminagaya,外壁塗装 耐久性 kanagawa,屋根の修理 費用 kaminagaya,外壁塗装 おすすめ kanagawa,屋根リフォーム 安い kaminagaya,外壁塗料 選び方 kanagawa,屋根の修繕 業者 ka'</li></ul> | | Data Entry Jobs | <ul><li>'data entry work from home egypt,data entry work from home - data entry work from home egypt,data entry remote jobs no experience (see remote data entry jobs in ) - data entry work from home ,data entry work from home - work from home,data entry remote'</li><li>'data entry work from home romford,data entry work from home,data entry remote jobs no experience (see remote data entry jobs in romford) - data entry work from home romford united kingdom,data entry remote jobs no experience (see remote data entry jobs in'</li></ul> | | Envelope Stuffing Jobs | <ul><li>'dafür bezahlt werden, umschläge zu stopfen,amazon-umschlagfüller-job,arbeiten, um von zu hause aus geld zu verdienen,online-nebenjob,dafür bezahlt werden, umschläge von zu hause aus zu stopfen,stellenvermittlung'</li></ul> | | Luxury Cars | <ul><li>'jeep grand cherokee l availability 2025 (see new outlander in columbus),jeep grand cherokee l availability,2024 hyundai santa fe hybrid buy'</li></ul> | | Fertility Clinics | <ul><li>'fertility clinic near me tamil nadu,fertility center near me,fertility clinic near me namakkal'</li><li>'fertility clinic near me shillong,fertility center near me,fertility clinic near me shillong in'</li><li>'donors sperm - fertility center near me tamil nadu,donors sperm - fertility center near me coimbatore 2024,selling business near me india'</li></ul> | | Data Analytics Tools | <ul><li>'programa gestion mantenimiento,data analysis,digital marketing en español,data analytics,programa de gestion de flotas,base de datos en la nube gratis,sistema de gestion empresarial,software de gestion humana,analytics data'</li><li>'programa gestion mantenimiento,sistema de gestion empresarial,data analytics solutions'</li></ul> | | Heart Failure | <ul><li>'prescriptions that cause congestive heart failure and treatment 2024 us south carolina,prescriptions that cause congestive heart failure and treatment 2024,prescriptions that cause congestive heart failure and treatment us,prescriptions that cause congest'</li></ul> | | Investment Services | <ul><li>'tarek buys houses,sell house to flipper,fix and flip lender,local house flippers,fix and flip buyers,one day house flip'</li></ul> | | Auto Loans | <ul><li>'royal enfield bikes zero finance,royal enfield bikes in meerut city,royal enfield bikes near me,royal enfield bikes in meerut city,royal enfield bikes in up,{key6}'</li><li>'royal enfield bikes in up,royal enfield bikes in bareilly,royal enfield bikes zero finance,royal enfield bikes in bareilly,royal enfield bikes near me,{key6}'</li></ul> | | Machinery | <ul><li>'automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria bd,pharmaceutical packaging morocco - automatic bottle filling capping labeling and sealing machine mexico 2024 bd,automatic bottle filling capping'</li><li>'automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria,automatic bottle filling capping labeling and sealing machine mexico - capping machine for bottles algeria kg kyrgyzstan,automatic bottle filling ca'</li><li>'micro rugged connector turkey,micro rugged connector turkey - micro rugged connector turkey dz,taiwan semiconductor - micro rugged connector turkey,semiconductor processing equipment near me belgium algiers,semiconductor processing equipment near me belgi'</li></ul> | | Security Guard Jobs | <ul><li>'security guard romford,security guard companies in romford,find the security guard company in romford,security officer romford,{key5},{key6}'</li><li>'security guard companies in big hill (greater bendigo),job apply,security guard employment,event security guard positions,security guard job openings,security guard companies in munich'</li></ul> | | Real Estate | <ul><li>'pousada em,imoveis caixa,imobiliarias em,software para condomínio,curso de corretor de imoveis,apartamento em ubatuba'</li></ul> | | Sclerotherapy | <ul><li>'español sclerotherapy cost near me in irving,free español sclerotherapy consultation near me in texas,español sclerotherapy cost near me in texas,spanish sclerotherapy cost near me in texas,$1 sclerotherapy consultation near me in texas,$1 sclerotherap'</li></ul> | | Attorneys | <ul><li>'family and law attorney,child custody lawyer,top criminal defense attorneys,family lawyers nearby,family lawyer,felony criminal lawyers nearby,themostsuccessful car accident lawyer near me,child custody attorney'</li><li>'top criminal defense attorneys,best car accident lawyer near me,legal help top legal attorney,felony criminal lawyers nearby,family and law attorney,divorce lawyers nearby,family law attorney,child custody attorney'</li><li>'family and law attorney,family court lawyer,child custody attorney,divorce lawyers nearby,themostsuccessful car accident lawyer near me,family lawyers nearby,felony criminal lawyers nearby,legal help top legal attorney'</li></ul> | | Nursing Degrees | <ul><li>'online sjuksköterskekurser,snabba sjuksköterskeprogram online,gratis utbildningar distans,få betalt för att studera sjuksköterskeutbildning,sjuksköterskeutbildningar online,kurser distans'</li></ul> | | Roof Repair | <ul><li>'aislamiento de renovación de techo - aislamiento de renovación de techo fresnillo zacatecas,aislamiento de renovación de techo - aislamiento de renovación de techo fresnillo zacatecas méxico 2024,reparacion de tejados y cubiertas méxico,reparar hume'</li></ul> | | Bed Bugs | <ul><li>'bug treatment,to kill bed bugs with heat,kill bedbugs with heat,best bed bug exterminator near me,killing bedbugs with heat,bed bug fumigation near me'</li></ul> | | Cancer Treatment | <ul><li>'immunotherapy for primary peritoneal cancer,immunotherapy for gist cancer,immunotherapy in elderly cancer patients,types of immunotherapy for leukemia,immunotherapy for primary liver cancer,the immunotherapy revolution'</li></ul> | | High School Diploma | <ul><li>'no cost online high school diploma programs,get a high school diploma online,get a high school diploma online arlington heights,high school diploma online for adults'</li></ul> | | Breast Lift | <ul><li>'breast lift doctors in north lauderdale,breast lift doctors cost in florida,natural breast lift in north lauderdale,breast uplift near me'</li></ul> | | Modular Homes | <ul><li>'top modular home builders davenport,new modular homes davenport,prefabricated home davenport'</li></ul> | | Auto Deals | <ul><li>'buy $800 car now pay later,buy $800 car now pay later la puente,buy $800 car now pay later in la puente,buy $800 car now pay later in us,buy $800 car now pay later la puente city,buy $800 car now pay later in la puente city'</li></ul> | | Allergy Relief | <ul><li>'allergy induced asthma'</li></ul> | | Label Printers | <ul><li>'portable label printer,brother label printer,shipping label printer,sticker printers,sticker printer,label maker'</li><li>'wireless label printer,printable labels,label printers,label printer,portable label printers,dymo label printer'</li></ul> | | Gaming Chairs | <ul><li>'best gaming chair,insane gaming chair (see gaming chairs in sacheon-si) - ergonomic gaming chair,insane gaming chair (see gaming chairs in sacheon-si) 2024,ergonomic gaming chair (see gaming chairs in sacheon-si),insane gaming chair (see gaming chairs in'</li></ul> | | Military Jobs | <ul><li>'jobs available in,veteran employment opportunities,empleo cerca de mi,army job in the united states without experience,military job in the united states for civilians,military job hiring'</li></ul> | | Bathroom Vanity | <ul><li>'cheapest bathroom vanities,bathroom shower tile,tienda de muebles baratos,vanity countertops,bathroom vanities and sinks,small bathroom design'</li></ul> | | Phone Storage | <ul><li>'backup data with cloud storage in usa,backup data with cloud storage,backup data with cloud storage massachusetts,backup data with cloud storage in chilmark,backup data with cloud storage in massachusetts,backup data with cloud storage chilmark'</li><li>'backup data with cloud storage ellon,backup data with cloud storage in ellon,backup data with cloud storage,backup data with cloud storage in gbr,backup data with cloud storage abd,backup data with cloud storage in abd'</li></ul> | | Lawn Care | <ul><li>'cleaning companies nearby,gardening services nearby,yard maintenance services,lawn mowing service,green lawn,starting small business'</li></ul> | | House Cleaning Jobs | <ul><li>'water cleanup rockwall,water cleanup company,water cleanup service,water damage cleanup,water cleanup near me,water damage cleanup near me'</li></ul> | | Pet Adoption | <ul><li>'pet adoption near me,small puppy shelter,canto pricing,buy a dog,adoption dogs,adoption nearby'</li></ul> | | Air Conditioning Services | <ul><li>'mitsubishi ductless air conditioner,air conditioner near me utica,affordable mini split ac san jose,venta de aire acondicionados,ductless ac,affordable mini split ac bergenfield'</li></ul> | | Water Damage | <ul><li>'electric test equipment south korea,water damage near me,basement leak repair 2024 - basement leak repair gwangju kr,basement leak repair 2024 - basement leak repair gwangju nam-gu,basement leak repair 2024 nam-gu gwangju,basement leak repair 2024 - basem'</li></ul> | | Plaque Psoriasis | <ul><li>'best cream for psoriasis over the counter,best body wash for psoriasis cape town,psoriasis removal western cape - how to get rid of plaque psoriasis 2024,psoriasis pictures western cape - psoriasis pictures 2024 cape town,psoriasis pictures - psoriasis pi'</li></ul> | | Senior Care | <ul><li>'home care services for seniors - assisted senior care near me in chennai and tamil nadu,care homes nearby in india,assisted senior care near me in chennai india,senior care homes nearby in chennai,home care workers - home care services for seniors india,h'</li></ul> | | Waterproofing Jobs | <ul><li>'waterproofing companies near me in washington,basement leak repair near me in washington,waterproof basement today in washington,basement water damage in washington,district of columbia water damage repair in washington'</li></ul> | | Spa Services | <ul><li>'sabana de algodon,king bed set,bedroom comforter sets,waterford,queen comforter sets clearance,sabana cama'</li></ul> | | Cyber Security Degrees | <ul><li>'cybersecurity courses,cybersecurity,psychologie ohne abitur,fernuni,cyber security degree near me,studium berufstätige'</li></ul> | | Weight Loss Clinical Trials | <ul><li>'participate in new studies for weight loss in orangevale,participate in new studies for weight loss in orangevale - how to qualify for an weight loss clinical trial near me,how to qualify for a weight loss clinical trial near me,participate in new studies'</li></ul> | | Lymphoma | <ul><li>'new lymphoma treatments montclair - new lymphoma treatments,new lymphoma treatments montclair - new lymphoma treatments united states,new lymphoma treatments montclair - new lymphoma treatments new jersey'</li></ul> | | Virtual Assistant Jobs | <ul><li>'freelance virtual assistant,virtual assistant jobs online,virtual assistant jobs remote,virtual assistant job,online assistant jobs,virtual assistant jobs near me'</li></ul> | | Online Dating | <ul><li>'meet single women india,video conferencing solutions,free dating site - meet single women india,free dating site telangana,free dating site (see online dating in hyderabad),free dating site 2024,meet single women india hyderabad,headache specialist in my'</li></ul> | | Veterinary Assistant School | <ul><li>'curso online auxiliar de veterinaria - curso online auxiliar de veterinaria maia porto,curso online auxiliar de veterinaria - curso online auxiliar de veterinaria porto 2024 maia,curso online auxiliar de veterinaria - curso online auxiliar de veterinaria'</li></ul> | | Cruises | <ul><li>'msc cruises senior discount glen allen,cruise deals 2024 glen allen,last minute cruise deals glen allen,viking cruises for single seniors glen allen,cruise travel insurance for seniors glen allen,caribbean cruise packages glen allen,all inclus'</li></ul> | | Car Values | <ul><li>'kelley blue book values used trucks 2018'</li></ul> | | Job Search | <ul><li>'home work job,food packaging near me,food packaging,packaging,warehousing,box packaging fillmore'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.765 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("economy/st_cr_keywords") # Run inference preds = model("best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 1 | 23.424 | 47 | | Label | Training Sample Count | |:-----------------------------|:----------------------| | Caregiver Jobs | 25 | | Janitorial Jobs | 1 | | Remote Jobs | 8 | | Personal Loans | 42 | | Stress Relief | 1 | | Fitness Coach | 28 | | Debt Consolidation | 9 | | Apartment Rentals | 31 | | Apartments | 67 | | Auto Accessories | 1 | | Apartments | 67 | | Outdoor Furniture | 3 | | Apartments | 67 | | Apartments | 67 | | Apartments | 67 | | Circuit Board Assembler | 12 | | Apartments | 67 | | Dental Implants | 41 | | Prostate Cancer Treatment | 6 | | Erectile Dysfunction | 7 | | Astrology | 24 | | Sofas | 5 | | Car Auctions | 6 | | Fitness Apps | 15 | | Home Furniture | 14 | | Disability Grants | 5 | | Apartments | 67 | | Online Courses | 2 | | Dental Implants | 41 | | Babysitting Jobs | 1 | | Wigs | 4 | | Car Transport Jobs | 9 | | Flooring Installation | 24 | | Car Tires | 2 | | Perfumes | 6 | | Packing Jobs | 2 | | Car Transport Jobs | 9 | | Waste Management Jobs | 2 | | Online Shopping Deals | 28 | | Hepatitis C | 4 | | E-Bikes | 3 | | Buy Now Pay Later Smartphone | 1 | | Remote Jobs | 8 | | Warehouse Jobs | 48 | | Blood Pressure Treatment | 1 | | Flooring Installation | 24 | | Disability Grants | 5 | | Sleep Apnea | 5 | | Personal Loans | 42 | | Flooring Installation | 24 | | Personal Loans | 42 | | Car Transport Jobs | 9 | | Online Jobs | 2 | | Plumbing Jobs | 4 | | Remote Jobs | 8 | | Fitness Coach | 28 | | Tuxedo Rental | 1 | | Daycare Jobs | 8 | | Personal Loans | 42 | | Fitness Coach | 28 | | Truck Driving Jobs | 5 | | Home Remodel | 6 | | Apartments | 67 | | Personal Loans | 42 | | Home Furniture | 14 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Pet Insurance | 1 | | Sleep Apnea | 5 | | Dental Implants | 41 | | Apartments | 67 | | Personal Loans | 42 | | Personal Loans | 42 | | Fitness Coach | 28 | | Online Shopping Deals | 28 | | Warehouse Jobs | 48 | | Home Improvement Loans | 3 | | Car Transport Jobs | 9 | | Medical Delivery | 3 | | Fitness Coach | 28 | | Astrology | 24 | | Cell Phones | 4 | | Office Cleaning Jobs | 8 | | Cosmetics | 1 | | Weight Loss | 9 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Tires | 3 | | Warehouse Jobs | 48 | | Apartment Rentals | 31 | | Plumbing Services | 1 | | Alzheimer's Treatment | 8 | | Outdoor Furniture | 3 | | Online Shopping Deals | 28 | | Used Cars | 4 | | Apartments | 67 | | Kidney Cancer | 13 | | Chronic Kidney Disease | 1 | | Circuit Board Assembler | 12 | | Oven | 1 | | Office Cleaning Jobs | 8 | | Apartment Rentals | 31 | | Apartments | 67 | | Employee Retention Credit | 2 | | Personal Loans | 42 | | FHA Loans | 9 | | Bedding | 1 | | Mental Health | 1 | | Senior Apartments | 7 | | Backyard Fence | 1 | | Construction Jobs | 4 | | Caregiver Jobs | 25 | | Prostate Cancer Treatment | 6 | | Fitness Coach | 28 | | Apartment Rentals | 31 | | Medication Delivery Jobs | 8 | | Fitness Coach | 28 | | Auto Insurance | 13 | | Sofas | 5 | | Apartment Rentals | 31 | | Life Insurance | 5 | | Astrology | 24 | | Forklift Jobs | 4 | | Home Care Jobs | 5 | | Disability Grants | 5 | | Flooring Installation | 24 | | Prostate Cancer Treatment | 6 | | Clinical Trials | 7 | | Astrology | 24 | | Credit Cards | 12 | | Astrology | 24 | | Apartment Rentals | 31 | | Used Cars | 4 | | Erectile Dysfunction | 7 | | Doors | 6 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Warehouse Jobs | 48 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Flooring Installation | 24 | | Cars For Sale | 2 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Dental Implants | 41 | | Kidney Cancer | 13 | | Cell Phones | 4 | | Truck Driving Jobs | 5 | | Fitness Coach | 28 | | Paid Sperm Donation | 2 | | Senior Living | 2 | | Doors | 6 | | Pro Bono Attorneys | 4 | | Weight Loss | 9 | | Cyber Security Jobs | 6 | | Dental Fillings | 1 | | Caregiver Jobs | 25 | | Waste Management Jobs | 2 | | Dog Food | 1 | | Car Auctions | 6 | | Fitness Coach | 28 | | Retirement Planning | 1 | | Urgent Care | 1 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Fitness Apps | 15 | | Prostate Cancer | 2 | | Wigs | 4 | | Apartment Rentals | 31 | | Medical | 6 | | Life Insurance | 5 | | Personal Loans | 42 | | Macular Degeneration | 2 | | Apartment Rentals | 31 | | Certificate of Deposit | 2 | | Perfumes | 6 | | Debt Consolidation | 9 | | Senior Cruises | 2 | | Warehouse Jobs | 48 | | Caregiver Services | 7 | | Skin Care | 3 | | Dental Implants | 41 | | Breast Cancer | 9 | | Online Shopping Deals | 28 | | Electrician Jobs | 5 | | Apartments | 67 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Digital Marketing Jobs | 1 | | Weight Loss | 9 | | Dental Implants | 41 | | Auto Insurance | 13 | | Services | 2 | | Online Shopping Deals | 28 | | Fitness Coach | 28 | | Belly Fat Removal | 6 | | Medical | 6 | | Wigs | 4 | | Circuit Board Assembler | 12 | | Sleep Apnea | 5 | | Caregiver Jobs | 25 | | Security Jobs | 3 | | Sofas | 5 | | Flooring Installation | 24 | | Malpractice Attorneys | 1 | | Apartments | 67 | | Ultherapy | 2 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Plumbing Jobs | 4 | | Online Shopping Deals | 28 | | Dental Implants | 41 | | Caregiver Jobs | 25 | | Apartments | 67 | | Personal Loans | 42 | | FHA Loans | 9 | | E-Bikes | 3 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Wrinkle Treatment | 1 | | Apartment Rentals | 31 | | Used Cars | 4 | | Caregiver Jobs | 25 | | Apartments | 67 | | Dental Implants | 41 | | Apartments | 67 | | Lawn Mowers | 1 | | Depression Treatment | 3 | | Electrician Jobs | 5 | | Warehouse Jobs | 48 | | Food Packing Jobs | 11 | | Home Improvement Loans | 3 | | Addiction Counseling | 1 | | Weight Loss | 9 | | Alzheimer's Treatment | 8 | | Auto Insurance | 13 | | Home Decor | 2 | | Daycare Jobs | 8 | | Fitness Apps | 15 | | Leadership | 2 | | HVAC Services | 3 | | Bipolar Treatment | 3 | | Mental Health Test | 1 | | Belly Fat Removal | 6 | | Dental Implant Grants | 2 | | Phone Service | 2 | | Remote Jobs | 8 | | Pest Control | 3 | | Caregiver Services | 7 | | Apartments | 67 | | Meal Delivery Services | 1 | | Fitness Apps | 15 | | Online Shopping Deals | 28 | | Cyber Security Jobs | 6 | | Cleaning Jobs | 2 | | Warehouse Jobs | 48 | | Online Shopping Deals | 28 | | Apartments | 67 | | Online Shopping Deals | 28 | | Online Shopping Deals | 28 | | Dental Implant Grants | 2 | | Bipolar Treatment | 3 | | Medical Alert Systems | 4 | | Coats | 1 | | Eyelid Surgery | 1 | | Hotels | 1 | | Online Shopping Deals | 28 | | Weight Loss | 9 | | Warehouse Jobs | 48 | | Warehouse Jobs | 48 | | Truck Driving Jobs | 5 | | Paid Clinical Trial | 1 | | Caregiver Services | 7 | | Kidney Cancer | 13 | | Massage Services | 11 | | Credit Cards | 12 | | Veterans Assistance | 1 | | Kidney Cancer | 13 | | Apartments | 67 | | Apartments | 67 | | Dental Implants | 41 | | Cloud Storage | 2 | | Medical | 6 | | Window Replacement | 6 | | Apartments | 67 | | Auto Insurance | 13 | | Electric Cars | 3 | | Life Insurance | 5 | | Apartment Rentals | 31 | | Astrology | 24 | | Dental Implants | 41 | | Dental Implants | 41 | | Belly Fat Removal | 6 | | Apartments | 67 | | Personal Loans | 42 | | Massage Services | 11 | | Leadership | 2 | | Personal Loans | 42 | | Wart Treatment | 1 | | Astrology | 24 | | Food Packing Jobs | 11 | | Fitness Coach | 28 | | Food Packing Jobs | 11 | | Packing Jobs | 2 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Home Remodel | 6 | | Energy Drinks | 2 | | Fitness Coach | 28 | | Flooring Installation | 24 | | Astrology | 24 | | Life Insurance | 5 | | Car Auctions | 6 | | Dental Implants | 41 | | Massage Services | 11 | | Apartments | 67 | | Personal Loans | 42 | | Dental Implants | 41 | | DIY Christmas Gifts | 1 | | Apartment Rentals | 31 | | Flooring Installation | 24 | | Massage Services | 11 | | Car Auctions | 6 | | Dental Implants | 41 | | Warehouse Jobs | 48 | | Apartments | 67 | | Auto Insurance | 13 | | Welding Jobs | 6 | | Caregiver Services | 7 | | Welding Jobs | 6 | | Online Jobs | 2 | | Home Furniture | 14 | | Eye Bag Surgery | 1 | | Kidney Cancer | 13 | | Kidney Cancer | 13 | | UTI Treatment | 1 | | Flooring Installation | 24 | | Fitness Apps | 15 | | Caregiver Jobs | 25 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Teeth Straightening | 1 | | Circuit Board Assembler | 12 | | Bipolar Disorder | 6 | | Prostate Cancer Treatment | 6 | | Erectile Dysfunction | 7 | | Astrology | 24 | | Apartments | 67 | | Apartments | 67 | | Hair Transplant | 3 | | Migraine Treatment | 14 | | Food Packing Jobs | 11 | | Senior Apartments | 7 | | Apartment Rentals | 31 | | Daycare Jobs | 8 | | Auto Insurance | 13 | | Hair Transplant | 3 | | Painting Jobs | 1 | | Erectile Dysfunction | 7 | | Paid Sperm Donation | 2 | | Fitness Coach | 28 | | Fitness Apps | 15 | | Weight Loss | 9 | | Warehouse Jobs | 48 | | Prostate Cancer Treatment | 6 | | Car Carpets | 2 | | Car Transport Jobs | 9 | | Botox | 3 | | Online Shopping Deals | 28 | | Medical Delivery | 3 | | Car Transport Jobs | 9 | | Medical | 6 | | Weight Loss | 9 | | Window Replacement | 6 | | Apartments | 67 | | Dental Implants | 41 | | Home Remodel | 6 | | Perfumes | 6 | | Astrology | 24 | | Senior Cruises | 2 | | Home Furniture | 14 | | Apartments | 67 | | Skin Rejuvenation | 1 | | Apartment Rentals | 31 | | Debt Relief | 2 | | Personal Loans | 42 | | Fitness Apps | 15 | | Breast Cancer | 9 | | Auto Insurance | 13 | | Psoriasis Treatment | 1 | | Cooktops | 5 | | Warehouse Jobs | 48 | | Circuit Board Assembler | 12 | | Fitness Apps | 15 | | Mobility Scooters | 1 | | Metal Roofs | 1 | | Warehouse Jobs | 48 | | Home Furniture | 14 | | Office Cleaning Jobs | 8 | | Botox | 3 | | Contractors | 1 | | Forklift Jobs | 4 | | Home Loans | 2 | | Construction Jobs | 4 | | Tires | 3 | | Cleaning Jobs | 2 | | Apartments | 67 | | Migraine Treatment | 14 | | Auto Insurance | 13 | | Apartments | 67 | | Hearing Test | 1 | | FHA Loans | 9 | | Delivery Jobs | 1 | | Daycare Jobs | 8 | | Online Shopping Deals | 28 | | Window Blinds | 1 | | Auto Insurance | 13 | | Dental Implants | 41 | | Apartments | 67 | | Home Furniture | 14 | | Dentists | 1 | | Depression Test | 2 | | Medication Delivery Jobs | 8 | | Flooring Installation | 24 | | Construction Jobs | 4 | | Warehouse Jobs | 48 | | Medical Delivery | 3 | | Apartment Rentals | 31 | | Warehouse Jobs | 48 | | Sleep Apnea | 5 | | Bank Owned Cars | 1 | | Medical Alert Systems | 4 | | Online Shopping Deals | 28 | | Apartments | 67 | | Debt Consolidation | 9 | | Bipolar Disorder | 6 | | Kidney Cancer | 13 | | Botox | 3 | | Rehab Treatment Center | 1 | | Depression | 2 | | FHA Loans | 9 | | Car Auctions | 6 | | Mortgage Loans | 2 | | Military School | 1 | | Window Replacement | 6 | | Dental Implants | 41 | | Personal Loans | 42 | | Circuit Board Assembler | 12 | | Auto Insurance | 13 | | Online Shopping Deals | 28 | | Fitness Apps | 15 | | Seizure Medication | 1 | | Disability Grants | 5 | | Online Shopping Deals | 28 | | FHA Loans | 9 | | Debt Relief | 2 | | Buy Now Pay Later | 1 | | Planning Tools | 1 | | Reverse Mortgage | 2 | | Home Care | 1 | | Apartment Rentals | 31 | | Migraine Treatment | 14 | | Cell Phones | 4 | | Detox Diets | 2 | | Diabetes | 4 | | Fitness Apps | 15 | | Bipolar Disorder | 6 | | Astrology | 24 | | Home Care Jobs | 5 | | Dating Apps | 1 | | Apartments | 67 | | Warehouse Jobs | 48 | | Car Tires | 2 | | Personal Loans | 42 | | Rhinoplasty | 1 | | Dental Implants | 41 | | Skin Care | 3 | | Car Cleaning Kit | 1 | | Apartment Rentals | 31 | | Apartment Rentals | 31 | | Daycare Jobs | 8 | | Weight Loss | 9 | | Migraine Treatment | 14 | | Online Banking | 2 | | Breast Cancer | 9 | | Apartments | 67 | | Construction Jobs | 4 | | European Cruises | 1 | | Fitness Coach | 28 | | Home Furniture | 14 | | Hair Transplant | 3 | | Breast Cancer | 9 | | Daycare Jobs | 8 | | Fitness Coach | 28 | | Warehouse Jobs | 48 | | Online Shopping Deals | 28 | | Digital Marketing Courses | 2 | | Sofas | 5 | | Weight Loss | 9 | | Car Transport Jobs | 9 | | Online Shopping Deals | 28 | | Perfumes | 6 | | Dental Implants | 41 | | Credit Cards | 12 | | Window Replacement | 6 | | Food Packing Jobs | 11 | | Cyber Security Jobs | 6 | | Flooring Installation | 24 | | Breast Cancer | 9 | | Home Improvement Loans | 3 | | Warehouse Jobs | 48 | | Caregiver Jobs | 25 | | Dental Implants | 41 | | Home Furniture | 14 | | Online Shopping Deals | 28 | | Credit Cards | 12 | | Kidney Cancer | 13 | | Fitness Coach | 28 | | Dental Implants | 41 | | Hepatitis C | 4 | | Medication Delivery Jobs | 8 | | Flooring Installation | 24 | | Circuit Board Assembler | 12 | | Credit Cards | 12 | | Gutter Cleaning | 2 | | Dental Implants | 41 | | Personal Loans | 42 | | Bad Credit Credit Cards | 2 | | Jobs in USA | 2 | | Roofing Services | 1 | | Astrology | 24 | | Warehouse Jobs | 48 | | Dental Implants | 41 | | Dental Implants | 41 | | Senior Apartments | 7 | | Apartments | 67 | | Home Care Jobs | 5 | | Cruise Ship Jobs | 4 | | Houses For Sale | 1 | | Doors | 6 | | Breast Cancer | 9 | | Solar Battery | 1 | | Keto Diet | 2 | | Warehouse Jobs | 48 | | Flooring Installation | 24 | | Personal Loans | 42 | | Clinical Trials | 7 | | Warehouse Jobs | 48 | | Land For Sale | 1 | | Fitness Coach | 28 | | Hepatitis C | 4 | | Food Packing Jobs | 11 | | Cars For Sale | 2 | | FHA Loans | 9 | | Office Cleaning Jobs | 8 | | Apartments | 67 | | Dental Care | 2 | | Pest Control | 3 | | Home Furniture | 14 | | Tires | 3 | | Personal Loans | 42 | | Deck Installation | 1 | | Alzheimer's Treatment | 8 | | GMC Sierra | 1 | | Homecare Jobs | 2 | | Apartment Rentals | 31 | | Pro Bono Attorneys | 4 | | Online Shopping Deals | 28 | | Medical | 6 | | Plumbing Jobs | 4 | | Senior Housing | 2 | | Home Care Jobs | 5 | | Kidney Cancer | 13 | | Warehouse Jobs | 48 | | HVAC Services | 3 | | Lead Generation | 1 | | Debt Consolidation | 9 | | Call Center Jobs | 1 | | Doors | 6 | | Pro Bono Attorneys | 4 | | Depression Treatment | 3 | | Debt Consolidation | 9 | | Fitness Coach | 28 | | Personal Loans | 42 | | Online Doctor | 1 | | Bipolar Disorder | 6 | | Daycare Jobs | 8 | | Migraine Treatment | 14 | | Clinical Trials | 7 | | Forklift Jobs | 4 | | Personal Loans | 42 | | Personal Loans | 42 | | Online Shopping Deals | 28 | | Relationship Advice | 1 | | Personal Loans | 42 | | Diabetes | 4 | | Dental Implants | 41 | | Cloud Storage | 2 | | Warehouse Jobs | 48 | | Debt Consolidation | 9 | | Belly Fat Removal | 6 | | Debt Consolidation | 9 | | Apartments | 67 | | Home Furniture | 14 | | Apartment Rentals | 31 | | Welding Jobs | 6 | | Flooring Installation | 24 | | Home Furniture | 14 | | Cleaning Services | 7 | | Wigs | 4 | | Erectile Dysfunction | 7 | | Snow Removal Services | 3 | | Apartments | 67 | | Depression Test | 2 | | Email Marketing Jobs | 2 | | Flooring Installation | 24 | | Apartment Rentals | 31 | | Remote Jobs | 8 | | Kidney Cancer | 13 | | Warehouse Jobs | 48 | | Dental Care | 2 | | Anime Figurines | 1 | | Online Banking | 2 | | Auto Insurance | 13 | | Truck Driving Jobs | 5 | | Breast Cancer | 9 | | Office Cleaning Jobs | 8 | | Cruise Ship Jobs | 4 | | Bipolar Disorder | 6 | | Astrology | 24 | | Life Insurance | 5 | | Personal Loans | 42 | | Online Shopping Deals | 28 | | Cyber Security Jobs | 6 | | Caregiver Jobs | 25 | | Caregiver Jobs | 25 | | Flooring Installation | 24 | | Office Cleaning Jobs | 8 | | Cruise Ship Jobs | 4 | | Cleaning Services | 7 | | Belly Fat Removal | 6 | | Apartments | 67 | | Senior Apartments | 7 | | Personal Loans | 42 | | Laser Hair Removal | 1 | | Warehouse Jobs | 48 | | Medicare | 1 | | Caregiver Jobs | 25 | | Electric Bikes | 1 | | Prostate Cancer Treatment | 6 | | Apartments | 67 | | Online Shopping Deals | 28 | | Dental Implants | 41 | | Senior Apartments | 7 | | Digital Marketing Courses | 2 | | Car Auctions | 6 | | Migraine Treatment | 14 | | Warehouse Jobs | 48 | | Apartments | 67 | | Miscellaneous Jobs | 1 | | Apartments | 67 | | Depression Treatment | 3 | | Homecare Jobs | 2 | | Forklift Jobs | 4 | | Waterproofing Services | 1 | | Data Entry Jobs | 2 | | Prostate Cancer | 2 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Erectile Dysfunction | 7 | | Flooring Installation | 24 | | Personal Loans | 42 | | Macular Degeneration | 2 | | Online Shopping Deals | 28 | | Astrology | 24 | | Apartments | 67 | | Depression | 2 | | Home Furniture | 14 | | Warehouse Jobs | 48 | | Envelope Stuffing Jobs | 1 | | Dental Implants | 41 | | Home Care Jobs | 5 | | Luxury Cars | 1 | | Fertility Clinics | 4 | | Warehouse Jobs | 48 | | Migraine Treatment | 14 | | Kidney Cancer | 13 | | Ultherapy | 2 | | Data Analytics Tools | 2 | | Bipolar Treatment | 3 | | Energy Drinks | 2 | | Warehouse Jobs | 48 | | Snow Removal Services | 3 | | Personal Loans | 42 | | Medication Delivery Jobs | 8 | | Apartment Rentals | 31 | | Data Analytics Tools | 2 | | Alzheimer's Treatment | 8 | | Cyber Security Jobs | 6 | | Heart Failure | 1 | | Investment Services | 1 | | Auto Loans | 2 | | Pest Control | 3 | | Gutter Cleaning | 2 | | Keto Diet | 2 | | Security Jobs | 3 | | Home Loans | 2 | | Email Marketing Jobs | 2 | | Apartments | 67 | | Machinery | 3 | | Medical | 6 | | Online Courses | 2 | | Apartments | 67 | | Dental Implants | 41 | | Fitness Coach | 28 | | Services | 2 | | Apartments | 67 | | Machinery | 3 | | FHA Loans | 9 | | Massage Services | 11 | | Senior Apartments | 7 | | Online Shopping Deals | 28 | | Office Cleaning Jobs | 8 | | Breast Cancer | 9 | | Fertility Clinics | 4 | | Kidney Cancer | 13 | | Medication Delivery Jobs | 8 | | Fitness Coach | 28 | | Apartments | 67 | | Caregiver Jobs | 25 | | Fertility Clinics | 4 | | Online Shopping Deals | 28 | | Security Guard Jobs | 2 | | Home Decor | 2 | | Data Entry Jobs | 2 | | Credit Cards | 12 | | Personal Loans | 42 | | Jobs in USA | 2 | | Bipolar Disorder | 6 | | Real Estate | 1 | | Fitness Coach | 28 | | Apartments | 67 | | Outdoor Furniture | 3 | | Personal Loans | 42 | | Astrology | 24 | | Sclerotherapy | 1 | | Warehouse Jobs | 48 | | Attorneys | 4 | | Apartments | 67 | | Caregiver Jobs | 25 | | Reverse Mortgage | 2 | | Warehouse Jobs | 48 | | Nursing Degrees | 1 | | Apartment Rentals | 31 | | Fertility Clinics | 4 | | Apartments | 67 | | Roof Repair | 1 | | Dental Implants | 41 | | Apartment Rentals | 31 | | Pro Bono Attorneys | 4 | | Warehouse Jobs | 48 | | Bed Bugs | 1 | | Warehouse Jobs | 48 | | Apartments | 67 | | Home Remodel | 6 | | Doors | 6 | | Auto Insurance | 13 | | Warehouse Jobs | 48 | | Cleaning Services | 7 | | Cleaning Services | 7 | | Home Remodel | 6 | | Electrician Jobs | 5 | | Credit Cards | 12 | | Caregiver Jobs | 25 | | Fitness Apps | 15 | | Truck Driving Jobs | 5 | | Food Packing Jobs | 11 | | Apartment Rentals | 31 | | Caregiver Services | 7 | | Personal Loans | 42 | | Apartments | 67 | | Online Shopping Deals | 28 | | Home Remodel | 6 | | Caregiver Jobs | 25 | | Security Jobs | 3 | | Attorneys | 4 | | Warehouse Jobs | 48 | | Cancer Treatment | 1 | | Fitness Coach | 28 | | Dental Implants | 41 | | Massage Services | 11 | | High School Diploma | 1 | | Cleaning Services | 7 | | Breast Lift | 1 | | Fitness Coach | 28 | | Dental Implants | 41 | | Mortgage Loans | 2 | | Astrology | 24 | | Window Replacement | 6 | | Migraine Treatment | 14 | | Modular Homes | 1 | | Cooktops | 5 | | Used Cars | 4 | | Apartment Rentals | 31 | | Auto Deals | 1 | | Apartment Rentals | 31 | | Electrician Jobs | 5 | | Flooring Installation | 24 | | Warehouse Jobs | 48 | | Car Carpets | 2 | | Dental Implants | 41 | | Cooktops | 5 | | Warehouse Jobs | 48 | | Allergy Relief | 1 | | Welding Jobs | 6 | | Cooktops | 5 | | Personal Loans | 42 | | Dental Implants | 41 | | Astrology | 24 | | Diabetes | 4 | | Massage Services | 11 | | Sleep Apnea | 5 | | Label Printers | 2 | | Caregiver Jobs | 25 | | Dental Implants | 41 | | Caregiver Services | 7 | | Personal Loans | 42 | | Bad Credit Credit Cards | 2 | | Cyber Security Jobs | 6 | | Debt Consolidation | 9 | | Astrology | 24 | | FHA Loans | 9 | | Clinical Trials | 7 | | FHA Loans | 9 | | Employee Retention Credit | 2 | | Plumbing Jobs | 4 | | Sofas | 5 | | Warehouse Jobs | 48 | | Clinical Trials | 7 | | Electric Cars | 3 | | Fitness Apps | 15 | | Debt Consolidation | 9 | | Medication Delivery Jobs | 8 | | Gaming Chairs | 1 | | Apartments | 67 | | Food Packing Jobs | 11 | | Machinery | 3 | | Migraine Treatment | 14 | | Electrician Jobs | 5 | | Flooring Installation | 24 | | Food Packing Jobs | 11 | | Senior Housing | 2 | | Alzheimer's Treatment | 8 | | Dental Implants | 41 | | Cleaning Services | 7 | | Military Jobs | 1 | | Senior Living | 2 | | Caregiver Jobs | 25 | | Circuit Board Assembler | 12 | | Astrology | 24 | | Flooring Installation | 24 | | Bathroom Vanity | 1 | | Apartments | 67 | | Apartments | 67 | | Fitness Coach | 28 | | Phone Storage | 2 | | E-Bikes | 3 | | Apartment Rentals | 31 | | Apartments | 67 | | Lawn Care | 1 | | Fitness Coach | 28 | | Warehouse Jobs | 48 | | Apartments | 67 | | House Cleaning Jobs | 1 | | Pet Adoption | 1 | | Skin Care | 3 | | Cruise Ship Jobs | 4 | | Warehouse Jobs | 48 | | Apartments | 67 | | Massage Services | 11 | | Dental Implants | 41 | | Auto Insurance | 13 | | Security Guard Jobs | 2 | | Clinical Trials | 7 | | Astrology | 24 | | Apartments | 67 | | Clinical Trials | 7 | | Attorneys | 4 | | Air Conditioning Services | 1 | | Remote Jobs | 8 | | Astrology | 24 | | Senior Apartments | 7 | | Caregiver Jobs | 25 | | Water Damage | 1 | | Plaque Psoriasis | 1 | | Belly Fat Removal | 6 | | Hepatitis C | 4 | | Window Replacement | 6 | | Caregiver Jobs | 25 | | Apartments | 67 | | Senior Care | 1 | | Waterproofing Jobs | 1 | | Astrology | 24 | | Breast Cancer | 9 | | Warehouse Jobs | 48 | | Circuit Board Assembler | 12 | | Fitness Apps | 15 | | Flooring Installation | 24 | | Personal Loans | 42 | | Fitness Coach | 28 | | Alzheimer's Treatment | 8 | | Detox Diets | 2 | | Dental Implants | 41 | | Medical Alert Systems | 4 | | Dental Implants | 41 | | Fitness Coach | 28 | | Personal Loans | 42 | | Migraine Treatment | 14 | | Erectile Dysfunction | 7 | | Spa Services | 1 | | Electric Cars | 3 | | Fitness Apps | 15 | | Apartment Rentals | 31 | | Food Packing Jobs | 11 | | Online Shopping Deals | 28 | | Doors | 6 | | Warehouse Jobs | 48 | | Credit Cards | 12 | | Disability Grants | 5 | | Food Packing Jobs | 11 | | Home Furniture | 14 | | Remote Jobs | 8 | | Attorneys | 4 | | Cyber Security Degrees | 1 | | Fitness Coach | 28 | | Cooktops | 5 | | Massage Services | 11 | | Astrology | 24 | | Massage Services | 11 | | Caregiver Services | 7 | | Credit Cards | 12 | | Weight Loss Clinical Trials | 1 | | Lymphoma | 1 | | Flooring Installation | 24 | | Virtual Assistant Jobs | 1 | | Office Cleaning Jobs | 8 | | Remote Jobs | 8 | | Cell Phones | 4 | | Massage Services | 11 | | Flooring Installation | 24 | | Car Transport Jobs | 9 | | HVAC Services | 3 | | Apartments | 67 | | Kidney Cancer | 13 | | Car Transport Jobs | 9 | | Astrology | 24 | | Personal Loans | 42 | | Daycare Jobs | 8 | | Phone Storage | 2 | | Medication Delivery Jobs | 8 | | Apartments | 67 | | Dental Implants | 41 | | Apartments | 67 | | Label Printers | 2 | | Credit Cards | 12 | | Medication Delivery Jobs | 8 | | Apartments | 67 | | Snow Removal Services | 3 | | Apartments | 67 | | Apartment Rentals | 31 | | Caregiver Jobs | 25 | | Alzheimer's Treatment | 8 | | Medical Alert Systems | 4 | | Welding Jobs | 6 | | Online Dating | 1 | | Personal Loans | 42 | | Apartments | 67 | | Personal Loans | 42 | | Alzheimer's Treatment | 8 | | Circuit Board Assembler | 12 | | Perfumes | 6 | | Phone Service | 2 | | Warehouse Jobs | 48 | | Welding Jobs | 6 | | Apartments | 67 | | Migraine Treatment | 14 | | Veterinary Assistant School | 1 | | Flooring Installation | 24 | | Cruises | 1 | | Perfumes | 6 | | Warehouse Jobs | 48 | | Car Values | 1 | | Job Search | 1 | | Certificate of Deposit | 2 | | Home Furniture | 14 | | Cleaning Services | 7 | | Diabetes | 4 | | Fitness Apps | 15 | | Warehouse Jobs | 48 | | Auto Loans | 2 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0000 | 1 | 0.149 | - | | 0.0008 | 50 | 0.051 | - | | 0.0016 | 100 | 0.0437 | - | | 0.0024 | 150 | 0.0506 | - | | 0.0033 | 200 | 0.0392 | - | | 0.0041 | 250 | 0.0253 | - | | 0.0049 | 300 | 0.0226 | - | | 0.0057 | 350 | 0.0184 | - | | 0.0065 | 400 | 0.0255 | - | | 0.0073 | 450 | 0.0193 | - | | 0.0081 | 500 | 0.0195 | - | | 0.0090 | 550 | 0.0136 | - | | 0.0098 | 600 | 0.017 | - | | 0.0106 | 650 | 0.0116 | - | | 0.0114 | 700 | 0.0153 | - | | 0.0122 | 750 | 0.0149 | - | | 0.0130 | 800 | 0.007 | - | | 0.0139 | 850 | 0.0135 | - | | 0.0147 | 900 | 0.0112 | - | | 0.0155 | 950 | 0.0127 | - | | 0.0163 | 1000 | 0.0147 | - | | 0.0171 | 1050 | 0.0091 | - | | 0.0179 | 1100 | 0.0132 | - | | 0.0187 | 1150 | 0.0149 | - | | 0.0196 | 1200 | 0.0095 | - | | 0.0204 | 1250 | 0.0061 | - | | 0.0212 | 1300 | 0.0081 | - | | 0.0220 | 1350 | 0.0069 | - | | 0.0228 | 1400 | 0.007 | - | | 0.0236 | 1450 | 0.0079 | - | | 0.0244 | 1500 | 0.0075 | - | | 0.0253 | 1550 | 0.0095 | - | | 0.0261 | 1600 | 0.0066 | - | | 0.0269 | 1650 | 0.0077 | - | | 0.0277 | 1700 | 0.0074 | - | | 0.0285 | 1750 | 0.005 | - | | 0.0293 | 1800 | 0.0072 | - | | 0.0301 | 1850 | 0.0064 | - | | 0.0310 | 1900 | 0.0074 | - | | 0.0318 | 1950 | 0.007 | - | | 0.0326 | 2000 | 0.0074 | - | | 0.0334 | 2050 | 0.0039 | - | | 0.0342 | 2100 | 0.0029 | - | | 0.0350 | 2150 | 0.0059 | - | | 0.0359 | 2200 | 0.006 | - | | 0.0367 | 2250 | 0.0043 | - | | 0.0375 | 2300 | 0.0045 | - | | 0.0383 | 2350 | 0.0022 | - | | 0.0391 | 2400 | 0.0072 | - | | 0.0399 | 2450 | 0.0032 | - | | 0.0407 | 2500 | 0.006 | - | | 0.0416 | 2550 | 0.0066 | - | | 0.0424 | 2600 | 0.0059 | - | | 0.0432 | 2650 | 0.0069 | - | | 0.0440 | 2700 | 0.0076 | - | | 0.0448 | 2750 | 0.0042 | - | | 0.0456 | 2800 | 0.0068 | - | | 0.0464 | 2850 | 0.007 | - | | 0.0473 | 2900 | 0.0039 | - | | 0.0481 | 2950 | 0.0057 | - | | 0.0489 | 3000 | 0.0032 | - | | 0.0497 | 3050 | 0.0076 | - | | 0.0505 | 3100 | 0.0049 | - | | 0.0513 | 3150 | 0.0031 | - | | 0.0522 | 3200 | 0.0021 | - | | 0.0530 | 3250 | 0.0029 | - | | 0.0538 | 3300 | 0.0078 | - | | 0.0546 | 3350 | 0.0048 | - | | 0.0554 | 3400 | 0.0062 | - | | 0.0562 | 3450 | 0.0041 | - | | 0.0570 | 3500 | 0.0014 | - | | 0.0579 | 3550 | 0.0022 | - | | 0.0587 | 3600 | 0.0015 | - | | 0.0595 | 3650 | 0.0011 | - | | 0.0603 | 3700 | 0.001 | - | | 0.0611 | 3750 | 0.0051 | - | | 0.0619 | 3800 | 0.0022 | - | | 0.0627 | 3850 | 0.0041 | - | | 0.0636 | 3900 | 0.0037 | - | | 0.0644 | 3950 | 0.0018 | - | | 0.0652 | 4000 | 0.0031 | - | | 0.0660 | 4050 | 0.0017 | - | | 0.0668 | 4100 | 0.0028 | - | | 0.0676 | 4150 | 0.0038 | - | | 0.0684 | 4200 | 0.0023 | - | | 0.0693 | 4250 | 0.0025 | - | | 0.0701 | 4300 | 0.0033 | - | | 0.0709 | 4350 | 0.0041 | - | | 0.0717 | 4400 | 0.0022 | - | | 0.0725 | 4450 | 0.002 | - | | 0.0733 | 4500 | 0.0048 | - | | 0.0742 | 4550 | 0.0011 | - | | 0.0750 | 4600 | 0.0038 | - | | 0.0758 | 4650 | 0.0048 | - | | 0.0766 | 4700 | 0.0052 | - | | 0.0774 | 4750 | 0.0025 | - | | 0.0782 | 4800 | 0.0011 | - | | 0.0790 | 4850 | 0.0028 | - | | 0.0799 | 4900 | 0.0038 | - | | 0.0807 | 4950 | 0.0012 | - | | 0.0815 | 5000 | 0.0038 | - | | 0.0823 | 5050 | 0.0014 | - | | 0.0831 | 5100 | 0.0011 | - | | 0.0839 | 5150 | 0.0053 | - | | 0.0847 | 5200 | 0.0032 | - | | 0.0856 | 5250 | 0.0025 | - | | 0.0864 | 5300 | 0.0025 | - | | 0.0872 | 5350 | 0.0034 | - | | 0.0880 | 5400 | 0.0053 | - | | 0.0888 | 5450 | 0.0016 | - | | 0.0896 | 5500 | 0.0021 | - | | 0.0904 | 5550 | 0.002 | - | | 0.0913 | 5600 | 0.0032 | - | | 0.0921 | 5650 | 0.0014 | - | | 0.0929 | 5700 | 0.0026 | - | | 0.0937 | 5750 | 0.0042 | - | | 0.0945 | 5800 | 0.0033 | - | | 0.0953 | 5850 | 0.0031 | - | | 0.0962 | 5900 | 0.0008 | - | | 0.0970 | 5950 | 0.004 | - | | 0.0978 | 6000 | 0.001 | - | | 0.0986 | 6050 | 0.0021 | - | | 0.0994 | 6100 | 0.0035 | - | | 0.1002 | 6150 | 0.0033 | - | | 0.1010 | 6200 | 0.0011 | - | | 0.1019 | 6250 | 0.0023 | - | | 0.1027 | 6300 | 0.0017 | - | | 0.1035 | 6350 | 0.0008 | - | | 0.1043 | 6400 | 0.0018 | - | | 0.1051 | 6450 | 0.0033 | - | | 0.1059 | 6500 | 0.0066 | - | | 0.1067 | 6550 | 0.0014 | - | | 0.1076 | 6600 | 0.0009 | - | | 0.1084 | 6650 | 0.0014 | - | | 0.1092 | 6700 | 0.002 | - | | 0.1100 | 6750 | 0.0024 | - | | 0.1108 | 6800 | 0.0046 | - | | 0.1116 | 6850 | 0.0019 | - | | 0.1125 | 6900 | 0.0045 | - | | 0.1133 | 6950 | 0.0009 | - | | 0.1141 | 7000 | 0.001 | - | | 0.1149 | 7050 | 0.0018 | - | | 0.1157 | 7100 | 0.002 | - | | 0.1165 | 7150 | 0.0022 | - | | 0.1173 | 7200 | 0.0037 | - | | 0.1182 | 7250 | 0.0037 | - | | 0.1190 | 7300 | 0.0024 | - | | 0.1198 | 7350 | 0.0025 | - | | 0.1206 | 7400 | 0.0007 | - | | 0.1214 | 7450 | 0.0018 | - | | 0.1222 | 7500 | 0.0043 | - | | 0.1230 | 7550 | 0.0006 | - | | 0.1239 | 7600 | 0.0008 | - | | 0.1247 | 7650 | 0.0021 | - | | 0.1255 | 7700 | 0.002 | - | | 0.1263 | 7750 | 0.0033 | - | | 0.1271 | 7800 | 0.0017 | - | | 0.1279 | 7850 | 0.0007 | - | | 0.1287 | 7900 | 0.0027 | - | | 0.1296 | 7950 | 0.001 | - | | 0.1304 | 8000 | 0.0005 | - | | 0.1312 | 8050 | 0.0008 | - | | 0.1320 | 8100 | 0.0019 | - | | 0.1328 | 8150 | 0.0018 | - | | 0.1336 | 8200 | 0.0014 | - | | 0.1345 | 8250 | 0.0007 | - | | 0.1353 | 8300 | 0.0005 | - | | 0.1361 | 8350 | 0.0006 | - | | 0.1369 | 8400 | 0.0016 | - | | 0.1377 | 8450 | 0.0029 | - | | 0.1385 | 8500 | 0.0011 | - | | 0.1393 | 8550 | 0.0034 | - | | 0.1402 | 8600 | 0.0014 | - | | 0.1410 | 8650 | 0.0006 | - | | 0.1418 | 8700 | 0.0034 | - | | 0.1426 | 8750 | 0.0031 | - | | 0.1434 | 8800 | 0.0031 | - | | 0.1442 | 8850 | 0.0008 | - | | 0.1450 | 8900 | 0.001 | - | | 0.1459 | 8950 | 0.0031 | - | | 0.1467 | 9000 | 0.0024 | - | | 0.1475 | 9050 | 0.0033 | - | | 0.1483 | 9100 | 0.0028 | - | | 0.1491 | 9150 | 0.0013 | - | | 0.1499 | 9200 | 0.0021 | - | | 0.1507 | 9250 | 0.0022 | - | | 0.1516 | 9300 | 0.0018 | - | | 0.1524 | 9350 | 0.0005 | - | | 0.1532 | 9400 | 0.0013 | - | | 0.1540 | 9450 | 0.0005 | - | | 0.1548 | 9500 | 0.0006 | - | | 0.1556 | 9550 | 0.0023 | - | | 0.1565 | 9600 | 0.0006 | - | | 0.1573 | 9650 | 0.0011 | - | | 0.1581 | 9700 | 0.0038 | - | | 0.1589 | 9750 | 0.0018 | - | | 0.1597 | 9800 | 0.0051 | - | | 0.1605 | 9850 | 0.003 | - | | 0.1613 | 9900 | 0.003 | - | | 0.1622 | 9950 | 0.0006 | - | | 0.1630 | 10000 | 0.0012 | - | | 0.1638 | 10050 | 0.003 | - | | 0.1646 | 10100 | 0.0015 | - | | 0.1654 | 10150 | 0.0023 | - | | 0.1662 | 10200 | 0.0029 | - | | 0.1670 | 10250 | 0.002 | - | | 0.1679 | 10300 | 0.0016 | - | | 0.1687 | 10350 | 0.0018 | - | | 0.1695 | 10400 | 0.0011 | - | | 0.1703 | 10450 | 0.0022 | - | | 0.1711 | 10500 | 0.0011 | - | | 0.1719 | 10550 | 0.0006 | - | | 0.1728 | 10600 | 0.0013 | - | | 0.1736 | 10650 | 0.0021 | - | | 0.1744 | 10700 | 0.0027 | - | | 0.1752 | 10750 | 0.002 | - | | 0.1760 | 10800 | 0.0009 | - | | 0.1768 | 10850 | 0.0018 | - | | 0.1776 | 10900 | 0.0018 | - | | 0.1785 | 10950 | 0.0032 | - | | 0.1793 | 11000 | 0.0008 | - | | 0.1801 | 11050 | 0.0009 | - | | 0.1809 | 11100 | 0.0006 | - | | 0.1817 | 11150 | 0.0014 | - | | 0.1825 | 11200 | 0.0025 | - | | 0.1833 | 11250 | 0.0013 | - | | 0.1842 | 11300 | 0.0004 | - | | 0.1850 | 11350 | 0.0022 | - | | 0.1858 | 11400 | 0.0048 | - | | 0.1866 | 11450 | 0.0013 | - | | 0.1874 | 11500 | 0.0009 | - | | 0.1882 | 11550 | 0.0006 | - | | 0.1890 | 11600 | 0.0029 | - | | 0.1899 | 11650 | 0.0018 | - | | 0.1907 | 11700 | 0.0017 | - | | 0.1915 | 11750 | 0.0009 | - | | 0.1923 | 11800 | 0.0006 | - | | 0.1931 | 11850 | 0.0022 | - | | 0.1939 | 11900 | 0.0019 | - | | 0.1948 | 11950 | 0.0029 | - | | 0.1956 | 12000 | 0.0005 | - | | 0.1964 | 12050 | 0.0006 | - | | 0.1972 | 12100 | 0.0009 | - | | 0.1980 | 12150 | 0.0005 | - | | 0.1988 | 12200 | 0.0025 | - | | 0.1996 | 12250 | 0.0018 | - | | 0.2005 | 12300 | 0.0012 | - | | 0.2013 | 12350 | 0.0008 | - | | 0.2021 | 12400 | 0.0018 | - | | 0.2029 | 12450 | 0.0019 | - | | 0.2037 | 12500 | 0.0025 | - | | 0.2045 | 12550 | 0.0022 | - | | 0.2053 | 12600 | 0.0019 | - | | 0.2062 | 12650 | 0.0005 | - | | 0.2070 | 12700 | 0.0017 | - | | 0.2078 | 12750 | 0.0022 | - | | 0.2086 | 12800 | 0.0012 | - | | 0.2094 | 12850 | 0.003 | - | | 0.2102 | 12900 | 0.0039 | - | | 0.2110 | 12950 | 0.002 | - | | 0.2119 | 13000 | 0.0004 | - | | 0.2127 | 13050 | 0.0007 | - | | 0.2135 | 13100 | 0.0006 | - | | 0.2143 | 13150 | 0.0006 | - | | 0.2151 | 13200 | 0.0028 | - | | 0.2159 | 13250 | 0.0009 | - | | 0.2168 | 13300 | 0.0011 | - | | 0.2176 | 13350 | 0.0017 | - | | 0.2184 | 13400 | 0.0009 | - | | 0.2192 | 13450 | 0.0029 | - | | 0.2200 | 13500 | 0.0006 | - | | 0.2208 | 13550 | 0.001 | - | | 0.2216 | 13600 | 0.0023 | - | | 0.2225 | 13650 | 0.0006 | - | | 0.2233 | 13700 | 0.0005 | - | | 0.2241 | 13750 | 0.0026 | - | | 0.2249 | 13800 | 0.0012 | - | | 0.2257 | 13850 | 0.0021 | - | | 0.2265 | 13900 | 0.0017 | - | | 0.2273 | 13950 | 0.0008 | - | | 0.2282 | 14000 | 0.0022 | - | | 0.2290 | 14050 | 0.0031 | - | | 0.2298 | 14100 | 0.0006 | - | | 0.2306 | 14150 | 0.0008 | - | | 0.2314 | 14200 | 0.0019 | - | | 0.2322 | 14250 | 0.0007 | - | | 0.2331 | 14300 | 0.0007 | - | | 0.2339 | 14350 | 0.0006 | - | | 0.2347 | 14400 | 0.0004 | - | | 0.2355 | 14450 | 0.0017 | - | | 0.2363 | 14500 | 0.0004 | - | | 0.2371 | 14550 | 0.0016 | - | | 0.2379 | 14600 | 0.0015 | - | | 0.2388 | 14650 | 0.0018 | - | | 0.2396 | 14700 | 0.0004 | - | | 0.2404 | 14750 | 0.0004 | - | | 0.2412 | 14800 | 0.0004 | - | | 0.2420 | 14850 | 0.0028 | - | | 0.2428 | 14900 | 0.0009 | - | | 0.2436 | 14950 | 0.0028 | - | | 0.2445 | 15000 | 0.0004 | - | | 0.2453 | 15050 | 0.0017 | - | | 0.2461 | 15100 | 0.0003 | - | | 0.2469 | 15150 | 0.0003 | - | | 0.2477 | 15200 | 0.0005 | - | | 0.2485 | 15250 | 0.0015 | - | | 0.2493 | 15300 | 0.0006 | - | | 0.2502 | 15350 | 0.0004 | - | | 0.2510 | 15400 | 0.0006 | - | | 0.2518 | 15450 | 0.0005 | - | | 0.2526 | 15500 | 0.0017 | - | | 0.2534 | 15550 | 0.0026 | - | | 0.2542 | 15600 | 0.0003 | - | | 0.2551 | 15650 | 0.0018 | - | | 0.2559 | 15700 | 0.0015 | - | | 0.2567 | 15750 | 0.0013 | - | | 0.2575 | 15800 | 0.0006 | - | | 0.2583 | 15850 | 0.0014 | - | | 0.2591 | 15900 | 0.0004 | - | | 0.2599 | 15950 | 0.0027 | - | | 0.2608 | 16000 | 0.0006 | - | | 0.2616 | 16050 | 0.004 | - | | 0.2624 | 16100 | 0.0005 | - | | 0.2632 | 16150 | 0.0004 | - | | 0.2640 | 16200 | 0.0005 | - | | 0.2648 | 16250 | 0.0028 | - | | 0.2656 | 16300 | 0.0026 | - | | 0.2665 | 16350 | 0.001 | - | | 0.2673 | 16400 | 0.0029 | - | | 0.2681 | 16450 | 0.003 | - | | 0.2689 | 16500 | 0.0015 | - | | 0.2697 | 16550 | 0.0024 | - | | 0.2705 | 16600 | 0.0022 | - | | 0.2713 | 16650 | 0.0017 | - | | 0.2722 | 16700 | 0.0018 | - | | 0.2730 | 16750 | 0.0011 | - | | 0.2738 | 16800 | 0.0006 | - | | 0.2746 | 16850 | 0.0016 | - | | 0.2754 | 16900 | 0.0036 | - | | 0.2762 | 16950 | 0.0022 | - | | 0.2771 | 17000 | 0.001 | - | | 0.2779 | 17050 | 0.0008 | - | | 0.2787 | 17100 | 0.0017 | - | | 0.2795 | 17150 | 0.0006 | - | | 0.2803 | 17200 | 0.0007 | - | | 0.2811 | 17250 | 0.003 | - | | 0.2819 | 17300 | 0.0019 | - | | 0.2828 | 17350 | 0.0006 | - | | 0.2836 | 17400 | 0.0005 | - | | 0.2844 | 17450 | 0.0004 | - | | 0.2852 | 17500 | 0.0006 | - | | 0.2860 | 17550 | 0.0015 | - | | 0.2868 | 17600 | 0.0003 | - | | 0.2876 | 17650 | 0.0012 | - | | 0.2885 | 17700 | 0.0018 | - | | 0.2893 | 17750 | 0.0017 | - | | 0.2901 | 17800 | 0.0021 | - | | 0.2909 | 17850 | 0.0027 | - | | 0.2917 | 17900 | 0.0017 | - | | 0.2925 | 17950 | 0.0005 | - | | 0.2934 | 18000 | 0.0015 | - | | 0.2942 | 18050 | 0.0005 | - | | 0.2950 | 18100 | 0.0015 | - | | 0.2958 | 18150 | 0.0008 | - | | 0.2966 | 18200 | 0.0007 | - | | 0.2974 | 18250 | 0.002 | - | | 0.2982 | 18300 | 0.0007 | - | | 0.2991 | 18350 | 0.0025 | - | | 0.2999 | 18400 | 0.0008 | - | | 0.3007 | 18450 | 0.0008 | - | | 0.3015 | 18500 | 0.0015 | - | | 0.3023 | 18550 | 0.0006 | - | | 0.3031 | 18600 | 0.0005 | - | | 0.3039 | 18650 | 0.0025 | - | | 0.3048 | 18700 | 0.0005 | - | | 0.3056 | 18750 | 0.0005 | - | | 0.3064 | 18800 | 0.0004 | - | | 0.3072 | 18850 | 0.0015 | - | | 0.3080 | 18900 | 0.0003 | - | | 0.3088 | 18950 | 0.0005 | - | | 0.3096 | 19000 | 0.0019 | - | | 0.3105 | 19050 | 0.0016 | - | | 0.3113 | 19100 | 0.0009 | - | | 0.3121 | 19150 | 0.0003 | - | | 0.3129 | 19200 | 0.0019 | - | | 0.3137 | 19250 | 0.0003 | - | | 0.3145 | 19300 | 0.0004 | - | | 0.3154 | 19350 | 0.004 | - | | 0.3162 | 19400 | 0.003 | - | | 0.3170 | 19450 | 0.0015 | - | | 0.3178 | 19500 | 0.0005 | - | | 0.3186 | 19550 | 0.001 | - | | 0.3194 | 19600 | 0.0009 | - | | 0.3202 | 19650 | 0.003 | - | | 0.3211 | 19700 | 0.001 | - | | 0.3219 | 19750 | 0.0023 | - | | 0.3227 | 19800 | 0.0017 | - | | 0.3235 | 19850 | 0.0003 | - | | 0.3243 | 19900 | 0.0006 | - | | 0.3251 | 19950 | 0.0003 | - | | 0.3259 | 20000 | 0.0005 | - | | 0.3268 | 20050 | 0.0015 | - | | 0.3276 | 20100 | 0.0014 | - | | 0.3284 | 20150 | 0.0016 | - | | 0.3292 | 20200 | 0.0009 | - | | 0.3300 | 20250 | 0.0026 | - | | 0.3308 | 20300 | 0.0004 | - | | 0.3316 | 20350 | 0.0004 | - | | 0.3325 | 20400 | 0.0016 | - | | 0.3333 | 20450 | 0.0004 | - | | 0.3341 | 20500 | 0.0008 | - | | 0.3349 | 20550 | 0.0005 | - | | 0.3357 | 20600 | 0.0005 | - | | 0.3365 | 20650 | 0.0003 | - | | 0.3374 | 20700 | 0.0003 | - | | 0.3382 | 20750 | 0.0017 | - | | 0.3390 | 20800 | 0.0017 | - | | 0.3398 | 20850 | 0.0016 | - | | 0.3406 | 20900 | 0.0003 | - | | 0.3414 | 20950 | 0.0021 | - | | 0.3422 | 21000 | 0.0004 | - | | 0.3431 | 21050 | 0.0007 | - | | 0.3439 | 21100 | 0.0003 | - | | 0.3447 | 21150 | 0.0005 | - | | 0.3455 | 21200 | 0.0003 | - | | 0.3463 | 21250 | 0.0003 | - | | 0.3471 | 21300 | 0.002 | - | | 0.3479 | 21350 | 0.0015 | - | | 0.3488 | 21400 | 0.002 | - | | 0.3496 | 21450 | 0.0005 | - | | 0.3504 | 21500 | 0.0002 | - | | 0.3512 | 21550 | 0.0013 | - | | 0.3520 | 21600 | 0.0003 | - | | 0.3528 | 21650 | 0.0003 | - | | 0.3537 | 21700 | 0.0009 | - | | 0.3545 | 21750 | 0.0003 | - | | 0.3553 | 21800 | 0.0007 | - | | 0.3561 | 21850 | 0.0004 | - | | 0.3569 | 21900 | 0.001 | - | | 0.3577 | 21950 | 0.0003 | - | | 0.3585 | 22000 | 0.0004 | - | | 0.3594 | 22050 | 0.0016 | - | | 0.3602 | 22100 | 0.0008 | - | | 0.3610 | 22150 | 0.0005 | - | | 0.3618 | 22200 | 0.001 | - | | 0.3626 | 22250 | 0.0006 | - | | 0.3634 | 22300 | 0.0003 | - | | 0.3642 | 22350 | 0.0004 | - | | 0.3651 | 22400 | 0.0003 | - | | 0.3659 | 22450 | 0.0003 | - | | 0.3667 | 22500 | 0.0003 | - | | 0.3675 | 22550 | 0.0015 | - | | 0.3683 | 22600 | 0.0021 | - | | 0.3691 | 22650 | 0.0005 | - | | 0.3699 | 22700 | 0.0004 | - | | 0.3708 | 22750 | 0.0041 | - | | 0.3716 | 22800 | 0.001 | - | | 0.3724 | 22850 | 0.0019 | - | | 0.3732 | 22900 | 0.0003 | - | | 0.3740 | 22950 | 0.002 | - | | 0.3748 | 23000 | 0.001 | - | | 0.3757 | 23050 | 0.0003 | - | | 0.3765 | 23100 | 0.0013 | - | | 0.3773 | 23150 | 0.0002 | - | | 0.3781 | 23200 | 0.0009 | - | | 0.3789 | 23250 | 0.0003 | - | | 0.3797 | 23300 | 0.0003 | - | | 0.3805 | 23350 | 0.0003 | - | | 0.3814 | 23400 | 0.0018 | - | | 0.3822 | 23450 | 0.0003 | - | | 0.3830 | 23500 | 0.0008 | - | | 0.3838 | 23550 | 0.0007 | - | | 0.3846 | 23600 | 0.002 | - | | 0.3854 | 23650 | 0.0005 | - | | 0.3862 | 23700 | 0.0011 | - | | 0.3871 | 23750 | 0.0025 | - | | 0.3879 | 23800 | 0.0022 | - | | 0.3887 | 23850 | 0.0029 | - | | 0.3895 | 23900 | 0.0006 | - | | 0.3903 | 23950 | 0.0015 | - | | 0.3911 | 24000 | 0.0016 | - | | 0.3919 | 24050 | 0.0027 | - | | 0.3928 | 24100 | 0.0016 | - | | 0.3936 | 24150 | 0.0004 | - | | 0.3944 | 24200 | 0.0019 | - | | 0.3952 | 24250 | 0.0016 | - | | 0.3960 | 24300 | 0.0003 | - | | 0.3968 | 24350 | 0.0005 | - | | 0.3977 | 24400 | 0.0003 | - | | 0.3985 | 24450 | 0.0004 | - | | 0.3993 | 24500 | 0.0004 | - | | 0.4001 | 24550 | 0.0015 | - | | 0.4009 | 24600 | 0.0014 | - | | 0.4017 | 24650 | 0.0003 | - | | 0.4025 | 24700 | 0.0003 | - | | 0.4034 | 24750 | 0.0016 | - | | 0.4042 | 24800 | 0.0003 | - | | 0.4050 | 24850 | 0.0012 | - | | 0.4058 | 24900 | 0.0003 | - | | 0.4066 | 24950 | 0.0004 | - | | 0.4074 | 25000 | 0.001 | - | | 0.4082 | 25050 | 0.0005 | - | | 0.4091 | 25100 | 0.0013 | - | | 0.4099 | 25150 | 0.0003 | - | | 0.4107 | 25200 | 0.0003 | - | | 0.4115 | 25250 | 0.0012 | - | | 0.4123 | 25300 | 0.0002 | - | | 0.4131 | 25350 | 0.0005 | - | | 0.4140 | 25400 | 0.0016 | - | | 0.4148 | 25450 | 0.0004 | - | | 0.4156 | 25500 | 0.0003 | - | | 0.4164 | 25550 | 0.0015 | - | | 0.4172 | 25600 | 0.0002 | - | | 0.4180 | 25650 | 0.0012 | - | | 0.4188 | 25700 | 0.0006 | - | | 0.4197 | 25750 | 0.0004 | - | | 0.4205 | 25800 | 0.0002 | - | | 0.4213 | 25850 | 0.0016 | - | | 0.4221 | 25900 | 0.0014 | - | | 0.4229 | 25950 | 0.0008 | - | | 0.4237 | 26000 | 0.0003 | - | | 0.4245 | 26050 | 0.0005 | - | | 0.4254 | 26100 | 0.0002 | - | | 0.4262 | 26150 | 0.0018 | - | | 0.4270 | 26200 | 0.0002 | - | | 0.4278 | 26250 | 0.0003 | - | | 0.4286 | 26300 | 0.0003 | - | | 0.4294 | 26350 | 0.0003 | - | | 0.4302 | 26400 | 0.0006 | - | | 0.4311 | 26450 | 0.0027 | - | | 0.4319 | 26500 | 0.0004 | - | | 0.4327 | 26550 | 0.0003 | - | | 0.4335 | 26600 | 0.0003 | - | | 0.4343 | 26650 | 0.0003 | - | | 0.4351 | 26700 | 0.0016 | - | | 0.4360 | 26750 | 0.001 | - | | 0.4368 | 26800 | 0.0016 | - | | 0.4376 | 26850 | 0.0003 | - | | 0.4384 | 26900 | 0.0016 | - | | 0.4392 | 26950 | 0.0014 | - | | 0.4400 | 27000 | 0.0002 | - | | 0.4408 | 27050 | 0.0004 | - | | 0.4417 | 27100 | 0.0002 | - | | 0.4425 | 27150 | 0.0016 | - | | 0.4433 | 27200 | 0.0015 | - | | 0.4441 | 27250 | 0.0003 | - | | 0.4449 | 27300 | 0.0003 | - | | 0.4457 | 27350 | 0.0003 | - | | 0.4465 | 27400 | 0.0007 | - | | 0.4474 | 27450 | 0.0004 | - | | 0.4482 | 27500 | 0.0015 | - | | 0.4490 | 27550 | 0.0002 | - | | 0.4498 | 27600 | 0.0019 | - | | 0.4506 | 27650 | 0.0012 | - | | 0.4514 | 27700 | 0.0002 | - | | 0.4522 | 27750 | 0.0004 | - | | 0.4531 | 27800 | 0.0016 | - | | 0.4539 | 27850 | 0.0003 | - | | 0.4547 | 27900 | 0.0028 | - | | 0.4555 | 27950 | 0.0006 | - | | 0.4563 | 28000 | 0.0004 | - | | 0.4571 | 28050 | 0.0014 | - | | 0.4580 | 28100 | 0.0002 | - | | 0.4588 | 28150 | 0.0004 | - | | 0.4596 | 28200 | 0.0004 | - | | 0.4604 | 28250 | 0.0003 | - | | 0.4612 | 28300 | 0.0003 | - | | 0.4620 | 28350 | 0.0023 | - | | 0.4628 | 28400 | 0.0002 | - | | 0.4637 | 28450 | 0.0003 | - | | 0.4645 | 28500 | 0.0004 | - | | 0.4653 | 28550 | 0.0002 | - | | 0.4661 | 28600 | 0.0003 | - | | 0.4669 | 28650 | 0.0014 | - | | 0.4677 | 28700 | 0.0003 | - | | 0.4685 | 28750 | 0.004 | - | | 0.4694 | 28800 | 0.0007 | - | | 0.4702 | 28850 | 0.0002 | - | | 0.4710 | 28900 | 0.0041 | - | | 0.4718 | 28950 | 0.0016 | - | | 0.4726 | 29000 | 0.0003 | - | | 0.4734 | 29050 | 0.0016 | - | | 0.4743 | 29100 | 0.0006 | - | | 0.4751 | 29150 | 0.0015 | - | | 0.4759 | 29200 | 0.0003 | - | | 0.4767 | 29250 | 0.0004 | - | | 0.4775 | 29300 | 0.0017 | - | | 0.4783 | 29350 | 0.0003 | - | | 0.4791 | 29400 | 0.0012 | - | | 0.4800 | 29450 | 0.0013 | - | | 0.4808 | 29500 | 0.0015 | - | | 0.4816 | 29550 | 0.0004 | - | | 0.4824 | 29600 | 0.0006 | - | | 0.4832 | 29650 | 0.0003 | - | | 0.4840 | 29700 | 0.0003 | - | | 0.4848 | 29750 | 0.0013 | - | | 0.4857 | 29800 | 0.0011 | - | | 0.4865 | 29850 | 0.0029 | - | | 0.4873 | 29900 | 0.0004 | - | | 0.4881 | 29950 | 0.0004 | - | | 0.4889 | 30000 | 0.0003 | - | | 0.4897 | 30050 | 0.0002 | - | | 0.4905 | 30100 | 0.0003 | - | | 0.4914 | 30150 | 0.0025 | - | | 0.4922 | 30200 | 0.0002 | - | | 0.4930 | 30250 | 0.0003 | - | | 0.4938 | 30300 | 0.0015 | - | | 0.4946 | 30350 | 0.0002 | - | | 0.4954 | 30400 | 0.0011 | - | | 0.4963 | 30450 | 0.0003 | - | | 0.4971 | 30500 | 0.0002 | - | | 0.4979 | 30550 | 0.0004 | - | | 0.4987 | 30600 | 0.0002 | - | | 0.4995 | 30650 | 0.0003 | - | | 0.5003 | 30700 | 0.0015 | - | | 0.5011 | 30750 | 0.0003 | - | | 0.5020 | 30800 | 0.0005 | - | | 0.5028 | 30850 | 0.0002 | - | | 0.5036 | 30900 | 0.0015 | - | | 0.5044 | 30950 | 0.0003 | - | | 0.5052 | 31000 | 0.0015 | - | | 0.5060 | 31050 | 0.0015 | - | | 0.5068 | 31100 | 0.0003 | - | | 0.5077 | 31150 | 0.0002 | - | | 0.5085 | 31200 | 0.0027 | - | | 0.5093 | 31250 | 0.0013 | - | | 0.5101 | 31300 | 0.0004 | - | | 0.5109 | 31350 | 0.0012 | - | | 0.5117 | 31400 | 0.0002 | - | | 0.5125 | 31450 | 0.0015 | - | | 0.5134 | 31500 | 0.0009 | - | | 0.5142 | 31550 | 0.0008 | - | | 0.5150 | 31600 | 0.0014 | - | | 0.5158 | 31650 | 0.0002 | - | | 0.5166 | 31700 | 0.0004 | - | | 0.5174 | 31750 | 0.0024 | - | | 0.5183 | 31800 | 0.0015 | - | | 0.5191 | 31850 | 0.0015 | - | | 0.5199 | 31900 | 0.0004 | - | | 0.5207 | 31950 | 0.0003 | - | | 0.5215 | 32000 | 0.0014 | - | | 0.5223 | 32050 | 0.0004 | - | | 0.5231 | 32100 | 0.0003 | - | | 0.5240 | 32150 | 0.001 | - | | 0.5248 | 32200 | 0.0002 | - | | 0.5256 | 32250 | 0.0004 | - | | 0.5264 | 32300 | 0.0014 | - | | 0.5272 | 32350 | 0.0014 | - | | 0.5280 | 32400 | 0.0002 | - | | 0.5288 | 32450 | 0.0003 | - | | 0.5297 | 32500 | 0.0003 | - | | 0.5305 | 32550 | 0.0015 | - | | 0.5313 | 32600 | 0.0026 | - | | 0.5321 | 32650 | 0.0002 | - | | 0.5329 | 32700 | 0.0013 | - | | 0.5337 | 32750 | 0.0004 | - | | 0.5346 | 32800 | 0.0013 | - | | 0.5354 | 32850 | 0.0003 | - | | 0.5362 | 32900 | 0.0005 | - | | 0.5370 | 32950 | 0.0014 | - | | 0.5378 | 33000 | 0.0004 | - | | 0.5386 | 33050 | 0.0014 | - | | 0.5394 | 33100 | 0.0014 | - | | 0.5403 | 33150 | 0.0002 | - | | 0.5411 | 33200 | 0.0003 | - | | 0.5419 | 33250 | 0.001 | - | | 0.5427 | 33300 | 0.0007 | - | | 0.5435 | 33350 | 0.0002 | - | | 0.5443 | 33400 | 0.0003 | - | | 0.5451 | 33450 | 0.0006 | - | | 0.5460 | 33500 | 0.0002 | - | | 0.5468 | 33550 | 0.0014 | - | | 0.5476 | 33600 | 0.0002 | - | | 0.5484 | 33650 | 0.0002 | - | | 0.5492 | 33700 | 0.0042 | - | | 0.5500 | 33750 | 0.0026 | - | | 0.5508 | 33800 | 0.0007 | - | | 0.5517 | 33850 | 0.0004 | - | | 0.5525 | 33900 | 0.0003 | - | | 0.5533 | 33950 | 0.0006 | - | | 0.5541 | 34000 | 0.0003 | - | | 0.5549 | 34050 | 0.0002 | - | | 0.5557 | 34100 | 0.0002 | - | | 0.5566 | 34150 | 0.0025 | - | | 0.5574 | 34200 | 0.0003 | - | | 0.5582 | 34250 | 0.0002 | - | | 0.5590 | 34300 | 0.0002 | - | | 0.5598 | 34350 | 0.0003 | - | | 0.5606 | 34400 | 0.0005 | - | | 0.5614 | 34450 | 0.0008 | - | | 0.5623 | 34500 | 0.0013 | - | | 0.5631 | 34550 | 0.0002 | - | | 0.5639 | 34600 | 0.0026 | - | | 0.5647 | 34650 | 0.0018 | - | | 0.5655 | 34700 | 0.0015 | - | | 0.5663 | 34750 | 0.0017 | - | | 0.5671 | 34800 | 0.0002 | - | | 0.5680 | 34850 | 0.0002 | - | | 0.5688 | 34900 | 0.0003 | - | | 0.5696 | 34950 | 0.0012 | - | | 0.5704 | 35000 | 0.0016 | - | | 0.5712 | 35050 | 0.0003 | - | | 0.5720 | 35100 | 0.0003 | - | | 0.5728 | 35150 | 0.0003 | - | | 0.5737 | 35200 | 0.0002 | - | | 0.5745 | 35250 | 0.0002 | - | | 0.5753 | 35300 | 0.0026 | - | | 0.5761 | 35350 | 0.0002 | - | | 0.5769 | 35400 | 0.0001 | - | | 0.5777 | 35450 | 0.0003 | - | | 0.5786 | 35500 | 0.0014 | - | | 0.5794 | 35550 | 0.0005 | - | | 0.5802 | 35600 | 0.0003 | - | | 0.5810 | 35650 | 0.0001 | - | | 0.5818 | 35700 | 0.0002 | - | | 0.5826 | 35750 | 0.0002 | - | | 0.5834 | 35800 | 0.0002 | - | | 0.5843 | 35850 | 0.0002 | - | | 0.5851 | 35900 | 0.0001 | - | | 0.5859 | 35950 | 0.0001 | - | | 0.5867 | 36000 | 0.0013 | - | | 0.5875 | 36050 | 0.0002 | - | | 0.5883 | 36100 | 0.0003 | - | | 0.5891 | 36150 | 0.0026 | - | | 0.5900 | 36200 | 0.0015 | - | | 0.5908 | 36250 | 0.0012 | - | | 0.5916 | 36300 | 0.0002 | - | | 0.5924 | 36350 | 0.0001 | - | | 0.5932 | 36400 | 0.0015 | - | | 0.5940 | 36450 | 0.0015 | - | | 0.5949 | 36500 | 0.0007 | - | | 0.5957 | 36550 | 0.0002 | - | | 0.5965 | 36600 | 0.0012 | - | | 0.5973 | 36650 | 0.0033 | - | | 0.5981 | 36700 | 0.0002 | - | | 0.5989 | 36750 | 0.0006 | - | | 0.5997 | 36800 | 0.0025 | - | | 0.6006 | 36850 | 0.0017 | - | | 0.6014 | 36900 | 0.0002 | - | | 0.6022 | 36950 | 0.0001 | - | | 0.6030 | 37000 | 0.0014 | - | | 0.6038 | 37050 | 0.0003 | - | | 0.6046 | 37100 | 0.0002 | - | | 0.6054 | 37150 | 0.0013 | - | | 0.6063 | 37200 | 0.0014 | - | | 0.6071 | 37250 | 0.0001 | - | | 0.6079 | 37300 | 0.0002 | - | | 0.6087 | 37350 | 0.0015 | - | | 0.6095 | 37400 | 0.0015 | - | | 0.6103 | 37450 | 0.0012 | - | | 0.6111 | 37500 | 0.0014 | - | | 0.6120 | 37550 | 0.0003 | - | | 0.6128 | 37600 | 0.0004 | - | | 0.6136 | 37650 | 0.0009 | - | | 0.6144 | 37700 | 0.0005 | - | | 0.6152 | 37750 | 0.0018 | - | | 0.6160 | 37800 | 0.0001 | - | | 0.6169 | 37850 | 0.0018 | - | | 0.6177 | 37900 | 0.0006 | - | | 0.6185 | 37950 | 0.0002 | - | | 0.6193 | 38000 | 0.0006 | - | | 0.6201 | 38050 | 0.0014 | - | | 0.6209 | 38100 | 0.0003 | - | | 0.6217 | 38150 | 0.0013 | - | | 0.6226 | 38200 | 0.0001 | - | | 0.6234 | 38250 | 0.0003 | - | | 0.6242 | 38300 | 0.0002 | - | | 0.625 | 38350 | 0.0002 | - | | 0.6258 | 38400 | 0.0012 | - | | 0.6266 | 38450 | 0.0002 | - | | 0.6274 | 38500 | 0.0002 | - | | 0.6283 | 38550 | 0.0003 | - | | 0.6291 | 38600 | 0.0002 | - | | 0.6299 | 38650 | 0.0003 | - | | 0.6307 | 38700 | 0.0002 | - | | 0.6315 | 38750 | 0.0002 | - | | 0.6323 | 38800 | 0.0001 | - | | 0.6331 | 38850 | 0.0003 | - | | 0.6340 | 38900 | 0.0003 | - | | 0.6348 | 38950 | 0.0002 | - | | 0.6356 | 39000 | 0.0002 | - | | 0.6364 | 39050 | 0.0004 | - | | 0.6372 | 39100 | 0.0015 | - | | 0.6380 | 39150 | 0.0002 | - | | 0.6389 | 39200 | 0.0015 | - | | 0.6397 | 39250 | 0.0015 | - | | 0.6405 | 39300 | 0.0015 | - | | 0.6413 | 39350 | 0.002 | - | | 0.6421 | 39400 | 0.0019 | - | | 0.6429 | 39450 | 0.0004 | - | | 0.6437 | 39500 | 0.0012 | - | | 0.6446 | 39550 | 0.0013 | - | | 0.6454 | 39600 | 0.0002 | - | | 0.6462 | 39650 | 0.0027 | - | | 0.6470 | 39700 | 0.003 | - | | 0.6478 | 39750 | 0.0006 | - | | 0.6486 | 39800 | 0.0002 | - | | 0.6494 | 39850 | 0.0002 | - | | 0.6503 | 39900 | 0.0001 | - | | 0.6511 | 39950 | 0.0009 | - | | 0.6519 | 40000 | 0.0007 | - | | 0.6527 | 40050 | 0.0014 | - | | 0.6535 | 40100 | 0.0002 | - | | 0.6543 | 40150 | 0.0003 | - | | 0.6551 | 40200 | 0.0002 | - | | 0.6560 | 40250 | 0.0011 | - | | 0.6568 | 40300 | 0.0005 | - | | 0.6576 | 40350 | 0.0009 | - | | 0.6584 | 40400 | 0.0002 | - | | 0.6592 | 40450 | 0.0013 | - | | 0.6600 | 40500 | 0.0003 | - | | 0.6609 | 40550 | 0.0002 | - | | 0.6617 | 40600 | 0.0006 | - | | 0.6625 | 40650 | 0.0001 | - | | 0.6633 | 40700 | 0.0003 | - | | 0.6641 | 40750 | 0.0014 | - | | 0.6649 | 40800 | 0.0001 | - | | 0.6657 | 40850 | 0.0013 | - | | 0.6666 | 40900 | 0.0002 | - | | 0.6674 | 40950 | 0.0013 | - | | 0.6682 | 41000 | 0.0013 | - | | 0.6690 | 41050 | 0.0017 | - | | 0.6698 | 41100 | 0.0002 | - | | 0.6706 | 41150 | 0.0001 | - | | 0.6714 | 41200 | 0.0002 | - | | 0.6723 | 41250 | 0.0002 | - | | 0.6731 | 41300 | 0.0017 | - | | 0.6739 | 41350 | 0.0014 | - | | 0.6747 | 41400 | 0.0014 | - | | 0.6755 | 41450 | 0.0014 | - | | 0.6763 | 41500 | 0.001 | - | | 0.6772 | 41550 | 0.0002 | - | | 0.6780 | 41600 | 0.0038 | - | | 0.6788 | 41650 | 0.0005 | - | | 0.6796 | 41700 | 0.0014 | - | | 0.6804 | 41750 | 0.0014 | - | | 0.6812 | 41800 | 0.0008 | - | | 0.6820 | 41850 | 0.003 | - | | 0.6829 | 41900 | 0.0005 | - | | 0.6837 | 41950 | 0.0009 | - | | 0.6845 | 42000 | 0.0014 | - | | 0.6853 | 42050 | 0.0001 | - | | 0.6861 | 42100 | 0.0013 | - | | 0.6869 | 42150 | 0.0002 | - | | 0.6877 | 42200 | 0.0013 | - | | 0.6886 | 42250 | 0.0005 | - | | 0.6894 | 42300 | 0.0003 | - | | 0.6902 | 42350 | 0.0003 | - | | 0.6910 | 42400 | 0.0004 | - | | 0.6918 | 42450 | 0.0009 | - | | 0.6926 | 42500 | 0.0002 | - | | 0.6934 | 42550 | 0.0002 | - | | 0.6943 | 42600 | 0.0002 | - | | 0.6951 | 42650 | 0.0001 | - | | 0.6959 | 42700 | 0.0015 | - | | 0.6967 | 42750 | 0.0001 | - | | 0.6975 | 42800 | 0.0013 | - | | 0.6983 | 42850 | 0.0001 | - | | 0.6992 | 42900 | 0.0013 | - | | 0.7000 | 42950 | 0.0002 | - | | 0.7008 | 43000 | 0.0013 | - | | 0.7016 | 43050 | 0.0001 | - | | 0.7024 | 43100 | 0.0002 | - | | 0.7032 | 43150 | 0.0002 | - | | 0.7040 | 43200 | 0.0012 | - | | 0.7049 | 43250 | 0.0007 | - | | 0.7057 | 43300 | 0.0013 | - | | 0.7065 | 43350 | 0.0003 | - | | 0.7073 | 43400 | 0.0002 | - | | 0.7081 | 43450 | 0.0011 | - | | 0.7089 | 43500 | 0.0003 | - | | 0.7097 | 43550 | 0.0002 | - | | 0.7106 | 43600 | 0.0002 | - | | 0.7114 | 43650 | 0.0007 | - | | 0.7122 | 43700 | 0.0014 | - | | 0.7130 | 43750 | 0.0022 | - | | 0.7138 | 43800 | 0.0002 | - | | 0.7146 | 43850 | 0.0005 | - | | 0.7154 | 43900 | 0.0002 | - | | 0.7163 | 43950 | 0.0001 | - | | 0.7171 | 44000 | 0.0002 | - | | 0.7179 | 44050 | 0.0002 | - | | 0.7187 | 44100 | 0.0001 | - | | 0.7195 | 44150 | 0.0003 | - | | 0.7203 | 44200 | 0.0001 | - | | 0.7212 | 44250 | 0.0003 | - | | 0.7220 | 44300 | 0.0015 | - | | 0.7228 | 44350 | 0.0004 | - | | 0.7236 | 44400 | 0.0013 | - | | 0.7244 | 44450 | 0.0003 | - | | 0.7252 | 44500 | 0.0001 | - | | 0.7260 | 44550 | 0.0016 | - | | 0.7269 | 44600 | 0.0014 | - | | 0.7277 | 44650 | 0.0001 | - | | 0.7285 | 44700 | 0.0001 | - | | 0.7293 | 44750 | 0.0001 | - | | 0.7301 | 44800 | 0.0002 | - | | 0.7309 | 44850 | 0.0001 | - | | 0.7317 | 44900 | 0.0002 | - | | 0.7326 | 44950 | 0.0001 | - | | 0.7334 | 45000 | 0.0002 | - | | 0.7342 | 45050 | 0.0002 | - | | 0.7350 | 45100 | 0.0013 | - | | 0.7358 | 45150 | 0.0002 | - | | 0.7366 | 45200 | 0.0006 | - | | 0.7375 | 45250 | 0.0002 | - | | 0.7383 | 45300 | 0.002 | - | | 0.7391 | 45350 | 0.0008 | - | | 0.7399 | 45400 | 0.0001 | - | | 0.7407 | 45450 | 0.0001 | - | | 0.7415 | 45500 | 0.0001 | - | | 0.7423 | 45550 | 0.0003 | - | | 0.7432 | 45600 | 0.0001 | - | | 0.7440 | 45650 | 0.0001 | - | | 0.7448 | 45700 | 0.0026 | - | | 0.7456 | 45750 | 0.0026 | - | | 0.7464 | 45800 | 0.0001 | - | | 0.7472 | 45850 | 0.0002 | - | | 0.7480 | 45900 | 0.0013 | - | | 0.7489 | 45950 | 0.0002 | - | | 0.7497 | 46000 | 0.0025 | - | | 0.7505 | 46050 | 0.0001 | - | | 0.7513 | 46100 | 0.0015 | - | | 0.7521 | 46150 | 0.0001 | - | | 0.7529 | 46200 | 0.0002 | - | | 0.7537 | 46250 | 0.0001 | - | | 0.7546 | 46300 | 0.0001 | - | | 0.7554 | 46350 | 0.0014 | - | | 0.7562 | 46400 | 0.0007 | - | | 0.7570 | 46450 | 0.0001 | - | | 0.7578 | 46500 | 0.0002 | - | | 0.7586 | 46550 | 0.0014 | - | | 0.7595 | 46600 | 0.0013 | - | | 0.7603 | 46650 | 0.0002 | - | | 0.7611 | 46700 | 0.0012 | - | | 0.7619 | 46750 | 0.0001 | - | | 0.7627 | 46800 | 0.0001 | - | | 0.7635 | 46850 | 0.0013 | - | | 0.7643 | 46900 | 0.0002 | - | | 0.7652 | 46950 | 0.0001 | - | | 0.7660 | 47000 | 0.0001 | - | | 0.7668 | 47050 | 0.0014 | - | | 0.7676 | 47100 | 0.0002 | - | | 0.7684 | 47150 | 0.001 | - | | 0.7692 | 47200 | 0.0012 | - | | 0.7700 | 47250 | 0.0003 | - | | 0.7709 | 47300 | 0.0002 | - | | 0.7717 | 47350 | 0.0001 | - | | 0.7725 | 47400 | 0.0001 | - | | 0.7733 | 47450 | 0.0001 | - | | 0.7741 | 47500 | 0.0013 | - | | 0.7749 | 47550 | 0.0001 | - | | 0.7757 | 47600 | 0.0002 | - | | 0.7766 | 47650 | 0.0011 | - | | 0.7774 | 47700 | 0.0013 | - | | 0.7782 | 47750 | 0.0014 | - | | 0.7790 | 47800 | 0.0013 | - | | 0.7798 | 47850 | 0.0008 | - | | 0.7806 | 47900 | 0.0003 | - | | 0.7815 | 47950 | 0.0006 | - | | 0.7823 | 48000 | 0.0015 | - | | 0.7831 | 48050 | 0.0001 | - | | 0.7839 | 48100 | 0.0013 | - | | 0.7847 | 48150 | 0.0002 | - | | 0.7855 | 48200 | 0.0001 | - | | 0.7863 | 48250 | 0.0001 | - | | 0.7872 | 48300 | 0.0013 | - | | 0.7880 | 48350 | 0.0016 | - | | 0.7888 | 48400 | 0.0002 | - | | 0.7896 | 48450 | 0.0002 | - | | 0.7904 | 48500 | 0.001 | - | | 0.7912 | 48550 | 0.0005 | - | | 0.7920 | 48600 | 0.0013 | - | | 0.7929 | 48650 | 0.0013 | - | | 0.7937 | 48700 | 0.0012 | - | | 0.7945 | 48750 | 0.0013 | - | | 0.7953 | 48800 | 0.0002 | - | | 0.7961 | 48850 | 0.0002 | - | | 0.7969 | 48900 | 0.0003 | - | | 0.7978 | 48950 | 0.0007 | - | | 0.7986 | 49000 | 0.0023 | - | | 0.7994 | 49050 | 0.0001 | - | | 0.8002 | 49100 | 0.0001 | - | | 0.8010 | 49150 | 0.0001 | - | | 0.8018 | 49200 | 0.0013 | - | | 0.8026 | 49250 | 0.0001 | - | | 0.8035 | 49300 | 0.0013 | - | | 0.8043 | 49350 | 0.0013 | - | | 0.8051 | 49400 | 0.0003 | - | | 0.8059 | 49450 | 0.0001 | - | | 0.8067 | 49500 | 0.0001 | - | | 0.8075 | 49550 | 0.0001 | - | | 0.8083 | 49600 | 0.0001 | - | | 0.8092 | 49650 | 0.0002 | - | | 0.8100 | 49700 | 0.0002 | - | | 0.8108 | 49750 | 0.001 | - | | 0.8116 | 49800 | 0.0013 | - | | 0.8124 | 49850 | 0.0001 | - | | 0.8132 | 49900 | 0.0006 | - | | 0.8140 | 49950 | 0.0014 | - | | 0.8149 | 50000 | 0.0011 | - | | 0.8157 | 50050 | 0.0003 | - | | 0.8165 | 50100 | 0.0002 | - | | 0.8173 | 50150 | 0.0001 | - | | 0.8181 | 50200 | 0.0001 | - | | 0.8189 | 50250 | 0.0001 | - | | 0.8198 | 50300 | 0.0002 | - | | 0.8206 | 50350 | 0.0001 | - | | 0.8214 | 50400 | 0.0001 | - | | 0.8222 | 50450 | 0.0005 | - | | 0.8230 | 50500 | 0.0001 | - | | 0.8238 | 50550 | 0.0001 | - | | 0.8246 | 50600 | 0.0002 | - | | 0.8255 | 50650 | 0.0001 | - | | 0.8263 | 50700 | 0.0014 | - | | 0.8271 | 50750 | 0.0027 | - | | 0.8279 | 50800 | 0.0001 | - | | 0.8287 | 50850 | 0.0001 | - | | 0.8295 | 50900 | 0.0001 | - | | 0.8303 | 50950 | 0.0003 | - | | 0.8312 | 51000 | 0.0003 | - | | 0.8320 | 51050 | 0.0013 | - | | 0.8328 | 51100 | 0.0013 | - | | 0.8336 | 51150 | 0.0001 | - | | 0.8344 | 51200 | 0.0001 | - | | 0.8352 | 51250 | 0.0001 | - | | 0.8360 | 51300 | 0.0014 | - | | 0.8369 | 51350 | 0.0015 | - | | 0.8377 | 51400 | 0.0013 | - | | 0.8385 | 51450 | 0.0013 | - | | 0.8393 | 51500 | 0.0001 | - | | 0.8401 | 51550 | 0.0002 | - | | 0.8409 | 51600 | 0.0013 | - | | 0.8418 | 51650 | 0.0003 | - | | 0.8426 | 51700 | 0.0013 | - | | 0.8434 | 51750 | 0.0002 | - | | 0.8442 | 51800 | 0.0001 | - | | 0.8450 | 51850 | 0.0003 | - | | 0.8458 | 51900 | 0.0001 | - | | 0.8466 | 51950 | 0.0001 | - | | 0.8475 | 52000 | 0.0001 | - | | 0.8483 | 52050 | 0.0001 | - | | 0.8491 | 52100 | 0.0001 | - | | 0.8499 | 52150 | 0.0001 | - | | 0.8507 | 52200 | 0.001 | - | | 0.8515 | 52250 | 0.0004 | - | | 0.8523 | 52300 | 0.0001 | - | | 0.8532 | 52350 | 0.0001 | - | | 0.8540 | 52400 | 0.0001 | - | | 0.8548 | 52450 | 0.0013 | - | | 0.8556 | 52500 | 0.0007 | - | | 0.8564 | 52550 | 0.0001 | - | | 0.8572 | 52600 | 0.0013 | - | | 0.8581 | 52650 | 0.0004 | - | | 0.8589 | 52700 | 0.0001 | - | | 0.8597 | 52750 | 0.0001 | - | | 0.8605 | 52800 | 0.0001 | - | | 0.8613 | 52850 | 0.0002 | - | | 0.8621 | 52900 | 0.0001 | - | | 0.8629 | 52950 | 0.0012 | - | | 0.8638 | 53000 | 0.0009 | - | | 0.8646 | 53050 | 0.0013 | - | | 0.8654 | 53100 | 0.0013 | - | | 0.8662 | 53150 | 0.0001 | - | | 0.8670 | 53200 | 0.0002 | - | | 0.8678 | 53250 | 0.0001 | - | | 0.8686 | 53300 | 0.0002 | - | | 0.8695 | 53350 | 0.0001 | - | | 0.8703 | 53400 | 0.0013 | - | | 0.8711 | 53450 | 0.0001 | - | | 0.8719 | 53500 | 0.0001 | - | | 0.8727 | 53550 | 0.0001 | - | | 0.8735 | 53600 | 0.0008 | - | | 0.8743 | 53650 | 0.0023 | - | | 0.8752 | 53700 | 0.0002 | - | | 0.8760 | 53750 | 0.0016 | - | | 0.8768 | 53800 | 0.0001 | - | | 0.8776 | 53850 | 0.0001 | - | | 0.8784 | 53900 | 0.0001 | - | | 0.8792 | 53950 | 0.0016 | - | | 0.8801 | 54000 | 0.0001 | - | | 0.8809 | 54050 | 0.0012 | - | | 0.8817 | 54100 | 0.0001 | - | | 0.8825 | 54150 | 0.0001 | - | | 0.8833 | 54200 | 0.0001 | - | | 0.8841 | 54250 | 0.0001 | - | | 0.8849 | 54300 | 0.0014 | - | | 0.8858 | 54350 | 0.0001 | - | | 0.8866 | 54400 | 0.0001 | - | | 0.8874 | 54450 | 0.0001 | - | | 0.8882 | 54500 | 0.0013 | - | | 0.8890 | 54550 | 0.0004 | - | | 0.8898 | 54600 | 0.0013 | - | | 0.8906 | 54650 | 0.0002 | - | | 0.8915 | 54700 | 0.0008 | - | | 0.8923 | 54750 | 0.0001 | - | | 0.8931 | 54800 | 0.0022 | - | | 0.8939 | 54850 | 0.0002 | - | | 0.8947 | 54900 | 0.0001 | - | | 0.8955 | 54950 | 0.0001 | - | | 0.8963 | 55000 | 0.0001 | - | | 0.8972 | 55050 | 0.0001 | - | | 0.8980 | 55100 | 0.0001 | - | | 0.8988 | 55150 | 0.0012 | - | | 0.8996 | 55200 | 0.0001 | - | | 0.9004 | 55250 | 0.0002 | - | | 0.9012 | 55300 | 0.0001 | - | | 0.9021 | 55350 | 0.0001 | - | | 0.9029 | 55400 | 0.0001 | - | | 0.9037 | 55450 | 0.0001 | - | | 0.9045 | 55500 | 0.0013 | - | | 0.9053 | 55550 | 0.0002 | - | | 0.9061 | 55600 | 0.0001 | - | | 0.9069 | 55650 | 0.0001 | - | | 0.9078 | 55700 | 0.0001 | - | | 0.9086 | 55750 | 0.0001 | - | | 0.9094 | 55800 | 0.0001 | - | | 0.9102 | 55850 | 0.0001 | - | | 0.9110 | 55900 | 0.0001 | - | | 0.9118 | 55950 | 0.0001 | - | | 0.9126 | 56000 | 0.0003 | - | | 0.9135 | 56050 | 0.0001 | - | | 0.9143 | 56100 | 0.0002 | - | | 0.9151 | 56150 | 0.0001 | - | | 0.9159 | 56200 | 0.0013 | - | | 0.9167 | 56250 | 0.0001 | - | | 0.9175 | 56300 | 0.0013 | - | | 0.9184 | 56350 | 0.0001 | - | | 0.9192 | 56400 | 0.0001 | - | | 0.9200 | 56450 | 0.0001 | - | | 0.9208 | 56500 | 0.001 | - | | 0.9216 | 56550 | 0.0018 | - | | 0.9224 | 56600 | 0.0001 | - | | 0.9232 | 56650 | 0.0001 | - | | 0.9241 | 56700 | 0.0001 | - | | 0.9249 | 56750 | 0.0001 | - | | 0.9257 | 56800 | 0.0013 | - | | 0.9265 | 56850 | 0.0001 | - | | 0.9273 | 56900 | 0.0013 | - | | 0.9281 | 56950 | 0.0006 | - | | 0.9289 | 57000 | 0.0001 | - | | 0.9298 | 57050 | 0.0001 | - | | 0.9306 | 57100 | 0.0001 | - | | 0.9314 | 57150 | 0.0003 | - | | 0.9322 | 57200 | 0.0017 | - | | 0.9330 | 57250 | 0.0001 | - | | 0.9338 | 57300 | 0.0013 | - | | 0.9346 | 57350 | 0.0025 | - | | 0.9355 | 57400 | 0.0002 | - | | 0.9363 | 57450 | 0.0014 | - | | 0.9371 | 57500 | 0.0004 | - | | 0.9379 | 57550 | 0.0002 | - | | 0.9387 | 57600 | 0.0014 | - | | 0.9395 | 57650 | 0.0001 | - | | 0.9404 | 57700 | 0.0002 | - | | 0.9412 | 57750 | 0.0016 | - | | 0.9420 | 57800 | 0.0001 | - | | 0.9428 | 57850 | 0.0001 | - | | 0.9436 | 57900 | 0.0013 | - | | 0.9444 | 57950 | 0.0002 | - | | 0.9452 | 58000 | 0.0003 | - | | 0.9461 | 58050 | 0.0013 | - | | 0.9469 | 58100 | 0.0001 | - | | 0.9477 | 58150 | 0.0002 | - | | 0.9485 | 58200 | 0.0001 | - | | 0.9493 | 58250 | 0.0013 | - | | 0.9501 | 58300 | 0.0015 | - | | 0.9509 | 58350 | 0.0006 | - | | 0.9518 | 58400 | 0.0001 | - | | 0.9526 | 58450 | 0.0001 | - | | 0.9534 | 58500 | 0.0001 | - | | 0.9542 | 58550 | 0.0001 | - | | 0.9550 | 58600 | 0.0004 | - | | 0.9558 | 58650 | 0.0001 | - | | 0.9566 | 58700 | 0.0002 | - | | 0.9575 | 58750 | 0.0012 | - | | 0.9583 | 58800 | 0.0001 | - | | 0.9591 | 58850 | 0.0001 | - | | 0.9599 | 58900 | 0.0001 | - | | 0.9607 | 58950 | 0.0012 | - | | 0.9615 | 59000 | 0.0002 | - | | 0.9624 | 59050 | 0.0001 | - | | 0.9632 | 59100 | 0.0013 | - | | 0.9640 | 59150 | 0.0002 | - | | 0.9648 | 59200 | 0.0003 | - | | 0.9656 | 59250 | 0.0014 | - | | 0.9664 | 59300 | 0.0001 | - | | 0.9672 | 59350 | 0.0004 | - | | 0.9681 | 59400 | 0.0006 | - | | 0.9689 | 59450 | 0.0002 | - | | 0.9697 | 59500 | 0.0001 | - | | 0.9705 | 59550 | 0.0001 | - | | 0.9713 | 59600 | 0.0001 | - | | 0.9721 | 59650 | 0.0013 | - | | 0.9729 | 59700 | 0.001 | - | | 0.9738 | 59750 | 0.0001 | - | | 0.9746 | 59800 | 0.0013 | - | | 0.9754 | 59850 | 0.0001 | - | | 0.9762 | 59900 | 0.0025 | - | | 0.9770 | 59950 | 0.0001 | - | | 0.9778 | 60000 | 0.0035 | - | | 0.9787 | 60050 | 0.0003 | - | | 0.9795 | 60100 | 0.0001 | - | | 0.9803 | 60150 | 0.0013 | - | | 0.9811 | 60200 | 0.0012 | - | | 0.9819 | 60250 | 0.0001 | - | | 0.9827 | 60300 | 0.0001 | - | | 0.9835 | 60350 | 0.0001 | - | | 0.9844 | 60400 | 0.0008 | - | | 0.9852 | 60450 | 0.0001 | - | | 0.9860 | 60500 | 0.0001 | - | | 0.9868 | 60550 | 0.0002 | - | | 0.9876 | 60600 | 0.0001 | - | | 0.9884 | 60650 | 0.0001 | - | | 0.9892 | 60700 | 0.0001 | - | | 0.9901 | 60750 | 0.0001 | - | | 0.9909 | 60800 | 0.0014 | - | | 0.9917 | 60850 | 0.0001 | - | | 0.9925 | 60900 | 0.0016 | - | | 0.9933 | 60950 | 0.0013 | - | | 0.9941 | 61000 | 0.0024 | - | | 0.9949 | 61050 | 0.0001 | - | | 0.9958 | 61100 | 0.0014 | - | | 0.9966 | 61150 | 0.0014 | - | | 0.9974 | 61200 | 0.0013 | - | | 0.9982 | 61250 | 0.0013 | - | | 0.9990 | 61300 | 0.0025 | - | | 0.9998 | 61350 | 0.0002 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.1.1 - Sentence Transformers: 3.4.1 - Transformers: 4.49.0 - PyTorch: 2.2.2 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "curso de marketing digital torremolinos,aprender marketing digital,rock content,estudiar marketing digital torremolinos,platzi curso de marketing digital para negocios"}, {"text": "1 bedroom apartments near me alberta edmonton - 1 bedroom apartments in edmonton,2 bedroom apartments in edmonton,right now in edmonton 2 bedroom apartments for rent,low-income apartments edmonton,affordable 55 and older apartments for rent near me edmont"}, {"text": "winter heating grants for disabled uk england,winter heating grants for disabled uk england barking,check heating grant eligibility 2024,heating grants disabled,grants for beds for disabled adults england,heating grants disabled 2024 england,winter heatin"}, {"text": "pro bono abogados inmigracion en español usa españa - pro bono abogados inmigracion en español usa,pro bono abogados inmigracion en español usa - criminal lawyer nj 2024,pro bono abogados inmigracion en español in palma - pro bono abogados inmigracio"}, {"text": "best cloud storage providers,1 tb cloud storage price,online backup server,most secure cloud storage,cloud backup storage,sites like dropbox"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.765, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,650
masa-research/me5-small-preskripsi-embedding-pos-multineg
masa-research
sentence-similarity
[ "sentence-transformers", "tensorboard", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:46095", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-01-30T12:42:41Z
2025-01-30T13:27:13+00:00
39
0
--- base_model: intfloat/multilingual-e5-small library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:46095 - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Paslon Atang Annida Dapat Dukungan Relawan Badani di Pilkada 2024. Bandung, VIVA - Elemen dukungan bagi pasangan Calon Wali Kota dan Wakil Wali Kota Bogor pada Pilkada 2024 yaitu Atang Trisnanto - Annida Allivia terus bermunculan. Salahsatunya Balad Abah Diani (Badani) menyatakan telah menjadi relawan untuk pemenangan Atang - Annida. Badani mendeklarasikan dukungannya di Posko Badani, Jalan Dadali No. 39, Kecamatan Tanah Sareal, pada Minggu 27 Oktober 2024. Deklarasi ini dihadiri 500 orang pendukung yang terdiri dari pensiunan ASN, organisasi masyarakat, komunitas relawan, dan tokoh masyarakat.\xa0 "Mereka berkomitmen untuk mendukung visi dan misi Atang-Annida dalam memajukan Kota Bogor serta mewujudkan Bogor Nyaman untuk Semua," kata Dewan Pembina Badani, H. Diani Budiarto dalam keterangannya, Rabu 30 Oktober 2024. Menurutnya, deklarasi ini bertujuan mengumpulkan potensi berbagai kalangan, mulai dari organisasi masyarakat, pensiunan, kelompok anak muda, komunitas relawan, maupun tokoh masyarakat yang memiliki visi yang sama dalam memajukan Kota Bogor.\xa0 "Kami ingin bergabung memenangkan pasangan Atang-Annida sebagai Wali Kota dan Wakil Wali Kota Bogor periode 2024-2029," ujar Diani. Selain itu, mantan Wali Kota Bogor periode 2004-2014 ini memproyeksikan relawan ini memberikan semangat bagi Partai Keadilan Sejahtera (PKS) dan memperlihatkan bahwa mereka tidak berjalan sendiri. , "Banyak elemen masyarakat berharap PKS mampu menghadirkan kepemimpinan baru yang dekat dengan masyarakat," katanya., Menurutnya, pasca memantau seluruh kandidat, Badani memilih untuk mendukung pasangan Atang-Annida. "Kami menilai Atang-Annida adalah pasangan calon yang paling sejalan dengan aspirasi masyarakat Kota Bogor,” katanya., , “Memiliki komitmen untuk hadir di tengah-tengah masyarakat, mendengarkan aspirasi, dan mampu memberikan solusi untuk permasalahan yang ada," terangnya., Menyikapi hal ini, Atang Trisnanto menyampaikan rasa syukur, terima kasih, dan apresiasi kepada Badani atas dukungan yang diberikan. "Saya bertekad untuk mendengar aspirasi masyarakat, dan bersama-sama kita akan membangun Bogor yang kita cintai ini. Mari kita wujudkan Bogor Nyaman untuk Semua: nyaman pelayanannya, nyaman rezekinya, nyaman kotanya, dan nyaman hidupnya," terangnya. *' sentences: - Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap timnas indonesia? - Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap timnas indonesia? - Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi timnas indonesia? - source_sentence: 'My fears if we cut out GPs from breast checks. I’m 86 and I’ve been on HRT since my early 40s when I had a hysterectomy. I believe it has kept me healthy and active. I’ve never broken a bone – even though osteoporosis runs in my family – and this must be thanks to the HRT. Now my GP wants me to stop using the patches as he says I’m too old to be on HRT. What should I do? Dr Ellie Cannon replies: Doctors are always cautious about older patients taking prescription medicines because they are more likely to suffer side effects – some of which can be dangerous. In the case of hormone replacement therapy (HRT), it can raise the risk of a stroke or life-threatening blood clot. NHS guidelines also state that patients should be on HRT for as short a time as possible, and recommend that women come off it every few years to find out if it is making a significant difference to their symptoms. However, that does not mean a woman in her 80s categorically cannot take it. Firstly, there is no upper age limit to taking HRT. Secondly, menopause treatment is supposed to be personalised to each patient. This is because the symptoms affect women in different ways. Some women will need larger – or smaller – HRT doses than others. And there are those who benefit from taking these replacement hormones for decades, while others may need them for only a year or two. It’s also important to consider the different types of HRT that are available. Patches are considered to have a lower stroke risk than tablets, and there are also low-strength patches which contain less oestrogen that can reduce this risk even further. Moreover, HRT gel is even safer than patches, because it can be used in very small doses. These are all options worth discussing with a GP. However, for anyone worried about the risk of developing osteoporosis, it’s crucial to understand that HRT – while shown to protect against the bone-thinning disease – is not the only way to prevent it. Walking can keep it at bay, as well as a diet with sufficient amounts of calcium and vitamin D. Adults need 1,000mg of calcium a day to lower the risk of bone breaks. In June I developed a rash on my feet, legs and buttocks. My GP diagnosed me with scabies and it does appear that the treatment is working. However, apparently it is spread through close – often sexual – sexual contact, and I have not been intimate with anyone for well over a year. Could this really be scabies? Dr Ellie replies: Scabies is a skin infestation caused by a parasite called a mite. The itching is caused by the body’s response to the mites and their eggs on the skin. The parasites often congregate in the fingers, tummy, genitals, arms and bottom. In recent years I have noticed a rise in scabies patients at my GP surgery, which suggests it is more common than it once was. These mites are also slowly becoming resistant to current treatments. It is true that it usually requires close physical contact to spread scabies, which is why it is often transmitted during sex. Brief physical touch – such as a handshake – is unlikely to lead to scabies. However, mites can also survive for several days in humid environments. Contaminated bedsheets and clothes should always be washed at hot temperatures to kill the mite and eggs. The most common treatment is an anti-parasitic liquid called Derbac M – also known as malathion. For most patients, it is quick and effective. I’ve had balance problems since I had a pacemaker fitted in 2019. It feels like I’m drunk at all times. I’ve been seen by a neurologist who says I don’t have any brain issues such as Parkinson’s. Could my pacemaker be to blame? Dr Ellie replies: It''s important to always see a GP about balance problems. Not only is it a very distressing symptom, it can also lead to falls and life-changing injuries. But unpicking its cause can be complicated, because there are several possible triggers. For example, if the dizziness occurs when standing or sitting up suddenly, it might be due to out-of-control blood pressure. However, pacemakers, in rare cases, do have side effects. The devices are fitted to control the beating of the heart. For most who receive them, they massively improve quality of life and are even life-saving. However, a small percentage of patients can suffer something known as pacemaker syndrome. This is when electrical activity in the heart is not regulated properly, meaning the organ does not beat as it should. As a result, people may experience dizziness, fatigue and light-headedness. Some even faint. Anyone who experiences balance problems after a pacemaker is fitted should see a cardiologist to explore the possibility of this syndrome. There are tests that specialists can carry out in hospital to work out whether the device is to blame. If it is, then a different type of pacemaker can be used instead. A GP can make a referral to see a cardiologist. I was concerned to read about a Government scheme that will allow women to refer themselves for a breast cancer check without seeing a GP first. In Somerset, they will be able to call the non-emergency phone line 111 – or use the NHS app – to report a breast lump, and then will be directed to a specialist clinic for a cancer check. If successful, the scheme will be rolled out across the country. I don’t support this. Delays in cancer diagnoses do cost lives, and we need to speed up this process, but the vast majority of new lumps are not cancerous. Testing every lump could lead to unnecessary invasive investigations and will certainly clog up NHS resources. Do you agree? Do you believe your breast cancer could have been diagnosed sooner if you had been able to self-refer? Please let me know using the email address below. I was taken aback last week when the end-of-life charity Marie Curie released a list of the most popular songs that people want to listen to in their final moments. Frank Sinatra’s My Way, The Beatles’ Hey Jude, and Over The Rainbow by Judy Garland were among the top answers. I’m sure these are all great songs, but the idea of playing music at the end of life is a strange concept to me. Sadly, in the past year, I’ve experienced the deaths of four family members. Not one of them requested that music was to be played at the end. However, that’s not to say that it doesn’t bring peace and happiness to others. Do you have a song that you would like played on your death bed? Has a family member requested one? Please write in and let me know. Do you have a question for Dr Ellie Cannon? Email [email protected] Dr Cannon cannot enter into personal correspondence and her replies should be taken in a general context.' sentences: - Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap kementerian kesehatan? - Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap kementerian kesehatan? - Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi kementerian kesehatan? - source_sentence: 'Cara Efektif Hadapi Orang yang Susah Menerima Kritik, Bicara Tanpa Baper. MEDIA PAKUAN - Berhadapan dengan orang yang anti kritik bisa jadi hal yang cukup menantang. Mereka seringkali sulit menerima masukan, bahkan jika itu demi kebaikan mereka sendiri. Namun, jika kita menggunakan pendekatan yang tepat, komunikasi bisa berjalan lebih efektif. Berikut adalah beberapa tips yang bisa Anda coba ketika harus berhadapan dengan mereka. Jika Anda sudah mencoba berbagai cara untuk berkomunikasi dengan seseorang yang anti kritik dan tidak ada perubahan yang berarti, mungkin sudah saatnya untuk membatasi interaksi dengan mereka. Terlalu sering berurusan dengan orang seperti itu dapat berdampak negatif pada kesehatan mental Anda.*** Berita Pilihan Perbedaan Sakit Perut Saat Haid dan Hamil, Mana Kamu? Ciri-Ciri Hamil Tanpa Morning Sickness, Jangan Terlewat! Nyeri Payudara: Tanda Haid atau Kehamilan? Benarkah Bentuk Perut Bisa Jadi Tanda Awal Kehamilan? Ini Faktanya! Cara Mengetahui Kehamilan Sejak Minggu Pertama, Ini Tanda-Tanda Awalnya!' sentences: - Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap kesehatan mental? - Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap kesehatan mental? - Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi kesehatan mental? - source_sentence: Dukung Program Presiden RI, Polresta Bandung Tindak Tegas Pelaku Tambang Ilegal. 60MENIT.COM, Bandung - Dalam rangka mendukung 17 Program Prioritas Presiden Prabowo Subianto - Gibran, khususnya nomor 11 terkait kelestarian lingkungan hidup. Unit Tipidter Sat Reskrim Polresta Bandung berhasil mengungkap dugaan tindak pidana penambangan tanpa izin atau ilegal di kawasan Gunung Sungapan, Jalan Raya Soreang-Ciwidey, Kecamatan Soreang, Kabupaten Bandung. Pengungkapan ini berhasil diungkap berdasarkan hasil penyelidikan. Dimana aktivitas penambangan ilegal tersebut telah berlangsung selama kurang lebih tiga bulan, terhitung sejak Agustus 2024. Kapolresta Bandung, Kombes Pol Kusworo Wibowo mengatakan terungkapnya tambang ilegal ini, pihaknya mengamankan satu pelaku berinisial EMK (52) yang berperan sebagai pengelola. "Menurut laporan, kegiatan tambang ilegal ini telah berlangsung selama tiga bulan terakhir. Dalam kurun waktu tersebut, para pelaku diketahui menjual hasil tambang berupa tanah berbatu ke beberapa tempat, termasuk perumahan dan real estate di wilayah Bandung," kata Kusworo saat menggerlar konferensi pers di lokasi penambangan ilegal. Selasa, 5 November 2024. "Dari aktivitas ini, pencatatan penjualan material tambang dijual dengan harga Rp300ribu per tronton (24 kubik) dan Rp100ribu per dump truck (7 kubik). Konsumen yang membutuhkan material tambang dapat datang langsung ke lokasi atau memesannya terlebih dahulu," ujarnya. Namun, ia menegaskan bahwa selain keuntungan finansial, ada risiko besar yang dihadapi masyarakat. Kegiatan tambang ilegal di area tersebut berpotensi menyebabkan bencana longsor di perbukitan sekitar. "Longsor ini dapat membahayakan para pengguna Jalan Raya Soreang-Ciwidey, yang berada di jalur berdekatan dengan lokasi tambang," tuturnya. "Jika longsor terjadi, terutama saat arus lalu lintas sedang padat, kendaraan yang melintas bisa tersapu ke jurang di sisi kanan jalan, mengancam keselamatan para pengendara," jelasnya. Kusworo mengungkapkan untuk mencegah risiko tersebut, pihaknya langsung mengambil tindakan tegas, yakni menyita sejumlah barang bukti yang berada di lokasi. "Kami mengimbau masyarakat untuk lebih waspada terhadap kegiatan serupa dan turut melaporkan apabila menemukan aktivitas yang membahayakan lingkungan hidup di wilayah mereka," ujarnya. Guna mempertanggungjawabkan perbuatannya, pelaku dijerat Pasal 158 Jo. Pasal 35 UU RI No. 3 Tahun 2020 tentang perubahan atas UU RI No. 4 Tahun 2009 tentang Pertambangan Mineral dan Batubara sebagaimana telah diubah dengan UU RI No. 6 Tahun 2023 tentang penetapan peratutan pemerintah pengganti UU RI No. 2 Tahun 2022 tentang Cipta Kerja Menjadi Undang-undang dengan ancaman maksimal 5 tahun penjara dan denda Rp. 100 miliar. (Taupik) sentences: - Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pelaku tambang ilegal (emk)? - Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap pelaku tambang ilegal (emk)? - Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pelaku tambang ilegal (emk)? - source_sentence: Bambang Hermanto Serukan Ketertiban Pasca Insiden Penghadangan Kampanye Calon Bupati. RADARINDRAMAYU.ID - Kampanye politik di Kabupaten Indramayu mulai memanas setelah terjadi penghadangan oleh sekelompok orang terhadap calon Bupati Indramayu nomor urut 3, Nina Agustina, saat ia hendak berkampanye di Desa Tegal Taman, Kecamatan Sukra, beberapa waktu lalu.\xa0\n Kejadian tersebut tentunya telah memicu reaksi dari berbagai elemen masyarakat, yang khawatir akan stabilitas dan keamanan dalam pelaksanaan Pilkada 2024.\n Sejumlah organisasi masyarakat dan tokoh lokal mengecam tindakan penghadangan tersebut, menilai bahwa hal itu bertentangan dengan semangat demokrasi.\xa0\n Mereka menekankan pentingnya kebebasan berpendapat dan berkampanye tanpa intimidasi.\xa0\n Situasi ini menunjukkan bahwa suhu politik di Indramayu mulai meningkat menjelang pemilihan, dan masyarakat diimbau untuk tetap tenang dan bijaksana.\n Menanggapi situasi ini, Bambang Hermanto, calon Bupati Indramayu nomor urut 1, mengajak seluruh masyarakat untuk menjaga ketertiban dan keamanan selama proses Pilkada.\xa0\n Dalam pernyataan resmi di Instagram pribadinya, pada Sabtu, 2 November 2024, Bambang menegaskan,\n "Saya Bambang Hermanto, calon bupati wong Dermayu nomor urut 1. Saya ingin mengimbau kepada seluruh lapisan masyarakat Indramayu untuk sama-sama menjaga kondusifitas lingkungan, jangan sampai kita terpecah belah,"\n Jika terpecah belah, kata Bambang, nanti masyarakat Indramayu akan merasakan sendiri akibat dari perpecahan tersebut.\xa0\n Bambang menambahkan bahwa masyarakat Indramayu harus cerdas dalam memilih pemimpin yang mampu memberikan kenyamanan dan mengayomi.\xa0\n Ia percaya bahwa warga Indramayu sudah memiliki pemahaman yang baik tentang calon-calon mereka, dan berharap agar pemilihan berlangsung dengan damai.\xa0\n "Saya yakin dan percaya masyarakat Indramayu sudah pada cerdas semua, sudah punya pilihan siapa calon Bupatinya yang betul-betul bisa memimpin Indramayu lebih baik lagi," ujarnya.\n Diharapkan semua pihak dapat saling bekerja sama dalam menjaga situasi tetap kondusif, demi terciptanya pemilihan yang adil dan demokratis. sentences: - Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pilkada 2024? - Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pilkada 2024? - Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap pilkada 2024? --- # SentenceTransformer based on intfloat/multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) on the preskripsi-sentiment-analysis-dataset-for-embedding-ft dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - preskripsi-sentiment-analysis-dataset-for-embedding-ft <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("masa-research/me5-small-preskripsi-embedding-pos-multineg") # Run inference sentences = [ 'Bambang Hermanto Serukan Ketertiban Pasca Insiden Penghadangan Kampanye Calon Bupati. RADARINDRAMAYU.ID - Kampanye politik di Kabupaten Indramayu mulai memanas setelah terjadi penghadangan oleh sekelompok orang terhadap calon Bupati Indramayu nomor urut 3, Nina Agustina, saat ia hendak berkampanye di Desa Tegal Taman, Kecamatan Sukra, beberapa waktu lalu.\\xa0\\n Kejadian tersebut tentunya telah memicu reaksi dari berbagai elemen masyarakat, yang khawatir akan stabilitas dan keamanan dalam pelaksanaan Pilkada 2024.\\n Sejumlah organisasi masyarakat dan tokoh lokal mengecam tindakan penghadangan tersebut, menilai bahwa hal itu bertentangan dengan semangat demokrasi.\\xa0\\n Mereka menekankan pentingnya kebebasan berpendapat dan berkampanye tanpa intimidasi.\\xa0\\n Situasi ini menunjukkan bahwa suhu politik di Indramayu mulai meningkat menjelang pemilihan, dan masyarakat diimbau untuk tetap tenang dan bijaksana.\\n Menanggapi situasi ini, Bambang Hermanto, calon Bupati Indramayu nomor urut 1, mengajak seluruh masyarakat untuk menjaga ketertiban dan keamanan selama proses Pilkada.\\xa0\\n Dalam pernyataan resmi di Instagram pribadinya, pada Sabtu, 2 November 2024, Bambang menegaskan,\\n "Saya Bambang Hermanto, calon bupati wong Dermayu nomor urut 1. Saya ingin mengimbau kepada seluruh lapisan masyarakat Indramayu untuk sama-sama menjaga kondusifitas lingkungan, jangan sampai kita terpecah belah,"\\n Jika terpecah belah, kata Bambang, nanti masyarakat Indramayu akan merasakan sendiri akibat dari perpecahan tersebut.\\xa0\\n Bambang menambahkan bahwa masyarakat Indramayu harus cerdas dalam memilih pemimpin yang mampu memberikan kenyamanan dan mengayomi.\\xa0\\n Ia percaya bahwa warga Indramayu sudah memiliki pemahaman yang baik tentang calon-calon mereka, dan berharap agar pemilihan berlangsung dengan damai.\\xa0\\n "Saya yakin dan percaya masyarakat Indramayu sudah pada cerdas semua, sudah punya pilihan siapa calon Bupatinya yang betul-betul bisa memimpin Indramayu lebih baik lagi," ujarnya.\\n Diharapkan semua pihak dapat saling bekerja sama dalam menjaga situasi tetap kondusif, demi terciptanya pemilihan yang adil dan demokratis.', 'Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pilkada 2024?', 'Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pilkada 2024?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### preskripsi-sentiment-analysis-dataset-for-embedding-ft * Dataset: preskripsi-sentiment-analysis-dataset-for-embedding-ft * Size: 46,095 training samples * Columns: <code>artikel</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code> * Approximate statistics based on the first 1000 samples: | | artikel | positive | negative_1 | negative_2 | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 439.08 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.16 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 31.16 tokens</li><li>max: 81 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 31.16 tokens</li><li>max: 81 tokens</li></ul> | * Samples: | artikel | positive | negative_1 | negative_2 | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Apa Kabar Elkan Baggott Setelah Absen 10 Bulan dari Timnas Indonesia? Lagi Terapi Cedera Pergelangan Kaki dan Segera Comeback. Bola.com, Blackpool - Cedera pergelangan kaki Elkan Baggott hampir sembuh. Bek berusia 22 tahun itu segera kembali bermain untuk Blackpool FC di kasta ketiga Liga Inggris? Elkan Baggott telah absen membela Blackpool sejak Agustus 2024 akibat cedera pergelangan kaki yang membuatnya baru bermain empat kali pada musim ini. Penampilan terakhir Elkan Baggott di Blackpool terjadi pada 24 Agustus 2024, ketika bermain imbang 4-4 melawan Cambridge United dalam matchday ketiga League One 2024/2025. Elkan Baggott sedang menjalani peminjaman di Blackpool selama semusim dari Ipswich Town. Pemain kelahiran Bangkok, Thailand, itu diperkirakan akan kembali membela Blackpool kontra Liverpool U-21 dalam EFL Trophy 2024/2025 pada 7 November 2024. "Elkan Baggott menjalani sesi latihan pertamanya, jadi senang bisa melihatnya kembali bersama tim," ujar pelatih Blackpool, Stephen Dob...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap timnas indonesia?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi timnas indonesia?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap timnas indonesia?</code> | | <code>Di Balik Penangkapan Terduga Teroris di 3 Wilayah Jawa Timur dan NTB. KOMPAS.com- Detasemen Khusus (Densus) 88 Antiteror Mabes Polri menangkap sejumlah terduga teroris di Nusa Tenggara Barat dan Jawa Timur, beberapa hari terakhir. Mulanya penangkapan dilakukan terhadap MT di Bima, NTB pada 31 Mei 2023. Dari keterangan Mabes Polri, MT difasilitasi untuk berangkat ke Yaman dan bergabung dengan organisasi AQAP (cabang AlQaedah). Selanjutnya, berdasarkan pengembangan, Densus menangkap sejumlah terduga teroris lainnya. Pada Jumat (2/6/2023) pagi Densus menangkap ABU, terduga teroris di Jalan Kalimas Madya III Nyamplungan, Kecamatan Pabean Cantian, Surabaya, Jawa Timur. "Diamankan 07.30 WIB, setahu saya di rumahnya," kata Ketua RT M. Abri, Sabtu (3/6/2023). Setelah penangkapan tersebut dilakukan penggeledahan di rumah ABU. Abri yang turut serta dalam penggeledahan mengatakan polisi menyita sejumlah buku, busur, dan anak panah. "Iya ada panah. Ujungnya memang sangat tajam. Tapi tadi saya liha...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap MT?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi MT?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap MT?</code> | | <code>Dirjen Imigrasi Enggan Salahkan Siapapun Soal Peretasan: Sesama Bus Kota Enggak Boleh Saling Menyalip. JAKARTA, KOMPAS.com - Direktur Jenderal (Dirjen) Imigrasi Silmy Karim enggan menyalahkan siapa pun atas peristiwa gangguan sistem pelayanan imigrasi, yang terjadi karena peretasan Pusat Data Nasional (PDN). Menurutnya, ia tak berwenang untuk menilai kementerian/lembaga lain atas peristiwa peretasan yang terjadi. "Ya enggak mungkin saya gitu (menilai)," kata Silmy dalam konferensi pers di kawasan Jakarta Selatan, Jumat (28/6/2024). "Sudah lah, sesama bus kota enggak boleh saling menyalip," tambahnya. Ia menambahkan, hanya pakar di luar kementerian/lembaga yang berhak menilai atas peristiwa yang terjadi. Termasuk, siapa yang patut bertanggungjawab atas peristiwa ini. "Itu yang bisa ngomong kayak gitu pengamat, bisa lah. Saya enggak bisa," tegas Silmy. Sebelumnya, muncul petisi online untuk meminta Menkominfo Budi Arie Setiadi mundur karena dianggap tak bisa menyelesaikan PDN yang disera...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap Budi Arie Setiadi?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi Budi Arie Setiadi?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap Budi Arie Setiadi?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `push_to_hub`: True - `hub_model_id`: masa-research/me5-small-preskripsi-embedding-pos-multineg - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: True - `resume_from_checkpoint`: None - `hub_model_id`: masa-research/me5-small-preskripsi-embedding-pos-multineg - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.1041 | 300 | 1.5302 | | 0.2083 | 600 | 0.8311 | | 0.3124 | 900 | 0.7495 | | 0.4165 | 1200 | 1.1801 | | 0.5207 | 1500 | 0.9683 | | 0.6248 | 1800 | 0.8723 | | 0.7289 | 2100 | 0.9991 | | 0.8330 | 2400 | 0.4833 | | 0.9372 | 2700 | 0.4694 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on intfloat/multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) on the preskripsi-sentiment-analysis-dataset-for-embedding-ft dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - preskripsi-sentiment-analysis-dataset-for-embedding-ft <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("masa-research/me5-small-preskripsi-embedding-pos-multineg") # Run inference sentences = [ 'Bambang Hermanto Serukan Ketertiban Pasca Insiden Penghadangan Kampanye Calon Bupati. RADARINDRAMAYU.ID - Kampanye politik di Kabupaten Indramayu mulai memanas setelah terjadi penghadangan oleh sekelompok orang terhadap calon Bupati Indramayu nomor urut 3, Nina Agustina, saat ia hendak berkampanye di Desa Tegal Taman, Kecamatan Sukra, beberapa waktu lalu.\\xa0\\n Kejadian tersebut tentunya telah memicu reaksi dari berbagai elemen masyarakat, yang khawatir akan stabilitas dan keamanan dalam pelaksanaan Pilkada 2024.\\n Sejumlah organisasi masyarakat dan tokoh lokal mengecam tindakan penghadangan tersebut, menilai bahwa hal itu bertentangan dengan semangat demokrasi.\\xa0\\n Mereka menekankan pentingnya kebebasan berpendapat dan berkampanye tanpa intimidasi.\\xa0\\n Situasi ini menunjukkan bahwa suhu politik di Indramayu mulai meningkat menjelang pemilihan, dan masyarakat diimbau untuk tetap tenang dan bijaksana.\\n Menanggapi situasi ini, Bambang Hermanto, calon Bupati Indramayu nomor urut 1, mengajak seluruh masyarakat untuk menjaga ketertiban dan keamanan selama proses Pilkada.\\xa0\\n Dalam pernyataan resmi di Instagram pribadinya, pada Sabtu, 2 November 2024, Bambang menegaskan,\\n "Saya Bambang Hermanto, calon bupati wong Dermayu nomor urut 1. Saya ingin mengimbau kepada seluruh lapisan masyarakat Indramayu untuk sama-sama menjaga kondusifitas lingkungan, jangan sampai kita terpecah belah,"\\n Jika terpecah belah, kata Bambang, nanti masyarakat Indramayu akan merasakan sendiri akibat dari perpecahan tersebut.\\xa0\\n Bambang menambahkan bahwa masyarakat Indramayu harus cerdas dalam memilih pemimpin yang mampu memberikan kenyamanan dan mengayomi.\\xa0\\n Ia percaya bahwa warga Indramayu sudah memiliki pemahaman yang baik tentang calon-calon mereka, dan berharap agar pemilihan berlangsung dengan damai.\\xa0\\n "Saya yakin dan percaya masyarakat Indramayu sudah pada cerdas semua, sudah punya pilihan siapa calon Bupatinya yang betul-betul bisa memimpin Indramayu lebih baik lagi," ujarnya.\\n Diharapkan semua pihak dapat saling bekerja sama dalam menjaga situasi tetap kondusif, demi terciptanya pemilihan yang adil dan demokratis.', 'Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pilkada 2024?', 'Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pilkada 2024?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### preskripsi-sentiment-analysis-dataset-for-embedding-ft * Dataset: preskripsi-sentiment-analysis-dataset-for-embedding-ft * Size: 46,095 training samples * Columns: <code>artikel</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code> * Approximate statistics based on the first 1000 samples: | | artikel | positive | negative_1 | negative_2 | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 439.08 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.16 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 31.16 tokens</li><li>max: 81 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 31.16 tokens</li><li>max: 81 tokens</li></ul> | * Samples: | artikel | positive | negative_1 | negative_2 | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Apa Kabar Elkan Baggott Setelah Absen 10 Bulan dari Timnas Indonesia? Lagi Terapi Cedera Pergelangan Kaki dan Segera Comeback. Bola.com, Blackpool - Cedera pergelangan kaki Elkan Baggott hampir sembuh. Bek berusia 22 tahun itu segera kembali bermain untuk Blackpool FC di kasta ketiga Liga Inggris? Elkan Baggott telah absen membela Blackpool sejak Agustus 2024 akibat cedera pergelangan kaki yang membuatnya baru bermain empat kali pada musim ini. Penampilan terakhir Elkan Baggott di Blackpool terjadi pada 24 Agustus 2024, ketika bermain imbang 4-4 melawan Cambridge United dalam matchday ketiga League One 2024/2025. Elkan Baggott sedang menjalani peminjaman di Blackpool selama semusim dari Ipswich Town. Pemain kelahiran Bangkok, Thailand, itu diperkirakan akan kembali membela Blackpool kontra Liverpool U-21 dalam EFL Trophy 2024/2025 pada 7 November 2024. "Elkan Baggott menjalani sesi latihan pertamanya, jadi senang bisa melihatnya kembali bersama tim," ujar pelatih Blackpool, Stephen Dob...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap timnas indonesia?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi timnas indonesia?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap timnas indonesia?</code> | | <code>Di Balik Penangkapan Terduga Teroris di 3 Wilayah Jawa Timur dan NTB. KOMPAS.com- Detasemen Khusus (Densus) 88 Antiteror Mabes Polri menangkap sejumlah terduga teroris di Nusa Tenggara Barat dan Jawa Timur, beberapa hari terakhir. Mulanya penangkapan dilakukan terhadap MT di Bima, NTB pada 31 Mei 2023. Dari keterangan Mabes Polri, MT difasilitasi untuk berangkat ke Yaman dan bergabung dengan organisasi AQAP (cabang AlQaedah). Selanjutnya, berdasarkan pengembangan, Densus menangkap sejumlah terduga teroris lainnya. Pada Jumat (2/6/2023) pagi Densus menangkap ABU, terduga teroris di Jalan Kalimas Madya III Nyamplungan, Kecamatan Pabean Cantian, Surabaya, Jawa Timur. "Diamankan 07.30 WIB, setahu saya di rumahnya," kata Ketua RT M. Abri, Sabtu (3/6/2023). Setelah penangkapan tersebut dilakukan penggeledahan di rumah ABU. Abri yang turut serta dalam penggeledahan mengatakan polisi menyita sejumlah buku, busur, dan anak panah. "Iya ada panah. Ujungnya memang sangat tajam. Tapi tadi saya liha...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap MT?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi MT?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap MT?</code> | | <code>Dirjen Imigrasi Enggan Salahkan Siapapun Soal Peretasan: Sesama Bus Kota Enggak Boleh Saling Menyalip. JAKARTA, KOMPAS.com - Direktur Jenderal (Dirjen) Imigrasi Silmy Karim enggan menyalahkan siapa pun atas peristiwa gangguan sistem pelayanan imigrasi, yang terjadi karena peretasan Pusat Data Nasional (PDN). Menurutnya, ia tak berwenang untuk menilai kementerian/lembaga lain atas peristiwa peretasan yang terjadi. "Ya enggak mungkin saya gitu (menilai)," kata Silmy dalam konferensi pers di kawasan Jakarta Selatan, Jumat (28/6/2024). "Sudah lah, sesama bus kota enggak boleh saling menyalip," tambahnya. Ia menambahkan, hanya pakar di luar kementerian/lembaga yang berhak menilai atas peristiwa yang terjadi. Termasuk, siapa yang patut bertanggungjawab atas peristiwa ini. "Itu yang bisa ngomong kayak gitu pengamat, bisa lah. Saya enggak bisa," tegas Silmy. Sebelumnya, muncul petisi online untuk meminta Menkominfo Budi Arie Setiadi mundur karena dianggap tak bisa menyelesaikan PDN yang disera...</code> | <code>Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap Budi Arie Setiadi?</code> | <code>Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi Budi Arie Setiadi?</code> | <code>Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap Budi Arie Setiadi?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `push_to_hub`: True - `hub_model_id`: masa-research/me5-small-preskripsi-embedding-pos-multineg - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: True - `resume_from_checkpoint`: None - `hub_model_id`: masa-research/me5-small-preskripsi-embedding-pos-multineg - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.1041 | 300 | 1.5302 | | 0.2083 | 600 | 0.8311 | | 0.3124 | 900 | 0.7495 | | 0.4165 | 1200 | 1.1801 | | 0.5207 | 1500 | 0.9683 | | 0.6248 | 1800 | 0.8723 | | 0.7289 | 2100 | 0.9991 | | 0.8330 | 2400 | 0.4833 | | 0.9372 | 2700 | 0.4694 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "intfloat/multilingual-e5-small", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:46095", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Paslon Atang Annida Dapat Dukungan Relawan Badani di Pilkada 2024. Bandung, VIVA - Elemen dukungan bagi pasangan Calon Wali Kota dan Wakil Wali Kota Bogor pada Pilkada 2024 yaitu Atang Trisnanto - Annida Allivia terus bermunculan. Salahsatunya Balad Abah Diani (Badani) menyatakan telah menjadi relawan untuk pemenangan Atang - Annida. Badani mendeklarasikan dukungannya di Posko Badani, Jalan Dadali No. 39, Kecamatan Tanah Sareal, pada Minggu 27 Oktober 2024. Deklarasi ini dihadiri 500 orang pendukung yang terdiri dari pensiunan ASN, organisasi masyarakat, komunitas relawan, dan tokoh masyarakat.\\xa0 \"Mereka berkomitmen untuk mendukung visi dan misi Atang-Annida dalam memajukan Kota Bogor serta mewujudkan Bogor Nyaman untuk Semua,\" kata Dewan Pembina Badani, H. Diani Budiarto dalam keterangannya, Rabu 30 Oktober 2024. Menurutnya, deklarasi ini bertujuan mengumpulkan potensi berbagai kalangan, mulai dari organisasi masyarakat, pensiunan, kelompok anak muda, komunitas relawan, maupun tokoh masyarakat yang memiliki visi yang sama dalam memajukan Kota Bogor.\\xa0 \"Kami ingin bergabung memenangkan pasangan Atang-Annida sebagai Wali Kota dan Wakil Wali Kota Bogor periode 2024-2029,\" ujar Diani. Selain itu, mantan Wali Kota Bogor periode 2004-2014 ini memproyeksikan relawan ini memberikan semangat bagi Partai Keadilan Sejahtera (PKS) dan memperlihatkan bahwa mereka tidak berjalan sendiri. , \"Banyak elemen masyarakat berharap PKS mampu menghadirkan kepemimpinan baru yang dekat dengan masyarakat,\" katanya., Menurutnya, pasca memantau seluruh kandidat, Badani memilih untuk mendukung pasangan Atang-Annida. \"Kami menilai Atang-Annida adalah pasangan calon yang paling sejalan dengan aspirasi masyarakat Kota Bogor,” katanya., , “Memiliki komitmen untuk hadir di tengah-tengah masyarakat, mendengarkan aspirasi, dan mampu memberikan solusi untuk permasalahan yang ada,\" terangnya., Menyikapi hal ini, Atang Trisnanto menyampaikan rasa syukur, terima kasih, dan apresiasi kepada Badani atas dukungan yang diberikan. \"Saya bertekad untuk mendengar aspirasi masyarakat, dan bersama-sama kita akan membangun Bogor yang kita cintai ini. Mari kita wujudkan Bogor Nyaman untuk Semua: nyaman pelayanannya, nyaman rezekinya, nyaman kotanya, dan nyaman hidupnya,\" terangnya. *", "sentences": ["Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap timnas indonesia?", "Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap timnas indonesia?", "Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi timnas indonesia?"]}, {"source_sentence": "My fears if we cut out GPs from breast checks. I’m 86 and I’ve been on HRT since my early 40s when I had a hysterectomy. I believe it has kept me healthy and active. I’ve never broken a bone – even though osteoporosis runs in my family – and this must be thanks to the HRT. Now my GP wants me to stop using the patches as he says I’m too old to be on HRT. What should I do? Dr Ellie Cannon replies: Doctors are always cautious about older patients taking prescription medicines because they are more likely to suffer side effects – some of which can be dangerous. In the case of hormone replacement therapy (HRT), it can raise the risk of a stroke or life-threatening blood clot. NHS guidelines also state that patients should be on HRT for as short a time as possible, and recommend that women come off it every few years to find out if it is making a significant difference to their symptoms. However, that does not mean a woman in her 80s categorically cannot take it. Firstly, there is no upper age limit to taking HRT. Secondly, menopause treatment is supposed to be personalised to each patient. This is because the symptoms affect women in different ways. Some women will need larger – or smaller – HRT doses than others. And there are those who benefit from taking these replacement hormones for decades, while others may need them for only a year or two. It’s also important to consider the different types of HRT that are available. Patches are considered to have a lower stroke risk than tablets, and there are also low-strength patches which contain less oestrogen that can reduce this risk even further. Moreover, HRT gel is even safer than patches, because it can be used in very small doses. These are all options worth discussing with a GP. However, for anyone worried about the risk of developing osteoporosis, it’s crucial to understand that HRT – while shown to protect against the bone-thinning disease – is not the only way to prevent it. Walking can keep it at bay, as well as a diet with sufficient amounts of calcium and vitamin D. Adults need 1,000mg of calcium a day to lower the risk of bone breaks. In June I developed a rash on my feet, legs and buttocks. My GP diagnosed me with scabies and it does appear that the treatment is working. However, apparently it is spread through close – often sexual – sexual contact, and I have not been intimate with anyone for well over a year. Could this really be scabies? Dr Ellie replies: Scabies is a skin infestation caused by a parasite called a mite. The itching is caused by the body’s response to the mites and their eggs on the skin. The parasites often congregate in the fingers, tummy, genitals, arms and bottom. In recent years I have noticed a rise in scabies patients at my GP surgery, which suggests it is more common than it once was. These mites are also slowly becoming resistant to current treatments. It is true that it usually requires close physical contact to spread scabies, which is why it is often transmitted during sex. Brief physical touch – such as a handshake – is unlikely to lead to scabies. However, mites can also survive for several days in humid environments. Contaminated bedsheets and clothes should always be washed at hot temperatures to kill the mite and eggs. The most common treatment is an anti-parasitic liquid called Derbac M – also known as malathion. For most patients, it is quick and effective. I’ve had balance problems since I had a pacemaker fitted in 2019. It feels like I’m drunk at all times. I’ve been seen by a neurologist who says I don’t have any brain issues such as Parkinson’s. Could my pacemaker be to blame? Dr Ellie replies: It's important to always see a GP about balance problems. Not only is it a very distressing symptom, it can also lead to falls and life-changing injuries. But unpicking its cause can be complicated, because there are several possible triggers. For example, if the dizziness occurs when standing or sitting up suddenly, it might be due to out-of-control blood pressure. However, pacemakers, in rare cases, do have side effects. The devices are fitted to control the beating of the heart. For most who receive them, they massively improve quality of life and are even life-saving. However, a small percentage of patients can suffer something known as pacemaker syndrome. This is when electrical activity in the heart is not regulated properly, meaning the organ does not beat as it should. As a result, people may experience dizziness, fatigue and light-headedness. Some even faint. Anyone who experiences balance problems after a pacemaker is fitted should see a cardiologist to explore the possibility of this syndrome. There are tests that specialists can carry out in hospital to work out whether the device is to blame. If it is, then a different type of pacemaker can be used instead. A GP can make a referral to see a cardiologist. I was concerned to read about a Government scheme that will allow women to refer themselves for a breast cancer check without seeing a GP first. In Somerset, they will be able to call the non-emergency phone line 111 – or use the NHS app – to report a breast lump, and then will be directed to a specialist clinic for a cancer check. If successful, the scheme will be rolled out across the country. I don’t support this. Delays in cancer diagnoses do cost lives, and we need to speed up this process, but the vast majority of new lumps are not cancerous. Testing every lump could lead to unnecessary invasive investigations and will certainly clog up NHS resources. Do you agree? Do you believe your breast cancer could have been diagnosed sooner if you had been able to self-refer? Please let me know using the email address below. I was taken aback last week when the end-of-life charity Marie Curie released a list of the most popular songs that people want to listen to in their final moments. Frank Sinatra’s My Way, The Beatles’ Hey Jude, and Over The Rainbow by Judy Garland were among the top answers. I’m sure these are all great songs, but the idea of playing music at the end of life is a strange concept to me. Sadly, in the past year, I’ve experienced the deaths of four family members. Not one of them requested that music was to be played at the end. However, that’s not to say that it doesn’t bring peace and happiness to others. Do you have a song that you would like played on your death bed? Has a family member requested one? Please write in and let me know. Do you have a question for Dr Ellie Cannon? Email [email protected] Dr Cannon cannot enter into personal correspondence and her replies should be taken in a general context.", "sentences": ["Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap kementerian kesehatan?", "Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap kementerian kesehatan?", "Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi kementerian kesehatan?"]}, {"source_sentence": "Cara Efektif Hadapi Orang yang Susah Menerima Kritik, Bicara Tanpa Baper. MEDIA PAKUAN - Berhadapan dengan orang yang anti kritik bisa jadi hal yang cukup menantang. Mereka seringkali sulit menerima masukan, bahkan jika itu demi kebaikan mereka sendiri. Namun, jika kita menggunakan pendekatan yang tepat, komunikasi bisa berjalan lebih efektif. Berikut adalah beberapa tips yang bisa Anda coba ketika harus berhadapan dengan mereka. Jika Anda sudah mencoba berbagai cara untuk berkomunikasi dengan seseorang yang anti kritik dan tidak ada perubahan yang berarti, mungkin sudah saatnya untuk membatasi interaksi dengan mereka. Terlalu sering berurusan dengan orang seperti itu dapat berdampak negatif pada kesehatan mental Anda.*** Berita Pilihan Perbedaan Sakit Perut Saat Haid dan Hamil, Mana Kamu? Ciri-Ciri Hamil Tanpa Morning Sickness, Jangan Terlewat! Nyeri Payudara: Tanda Haid atau Kehamilan? Benarkah Bentuk Perut Bisa Jadi Tanda Awal Kehamilan? Ini Faktanya! Cara Mengetahui Kehamilan Sejak Minggu Pertama, Ini Tanda-Tanda Awalnya!", "sentences": ["Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap kesehatan mental?", "Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap kesehatan mental?", "Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi kesehatan mental?"]}, {"source_sentence": "Dukung Program Presiden RI, Polresta Bandung Tindak Tegas Pelaku Tambang Ilegal. 60MENIT.COM, Bandung - Dalam rangka mendukung 17 Program Prioritas Presiden Prabowo Subianto - Gibran, khususnya nomor 11 terkait kelestarian lingkungan hidup. Unit Tipidter Sat Reskrim Polresta Bandung berhasil mengungkap dugaan tindak pidana penambangan tanpa izin atau ilegal di kawasan Gunung Sungapan, Jalan Raya Soreang-Ciwidey, Kecamatan Soreang, Kabupaten Bandung. Pengungkapan ini berhasil diungkap berdasarkan hasil penyelidikan. Dimana aktivitas penambangan ilegal tersebut telah berlangsung selama kurang lebih tiga bulan, terhitung sejak Agustus 2024. Kapolresta Bandung, Kombes Pol Kusworo Wibowo mengatakan terungkapnya tambang ilegal ini, pihaknya mengamankan satu pelaku berinisial EMK (52) yang berperan sebagai pengelola. \"Menurut laporan, kegiatan tambang ilegal ini telah berlangsung selama tiga bulan terakhir. Dalam kurun waktu tersebut, para pelaku diketahui menjual hasil tambang berupa tanah berbatu ke beberapa tempat, termasuk perumahan dan real estate di wilayah Bandung,\" kata Kusworo saat menggerlar konferensi pers di lokasi penambangan ilegal. Selasa, 5 November 2024. \"Dari aktivitas ini, pencatatan penjualan material tambang dijual dengan harga Rp300ribu per tronton (24 kubik) dan Rp100ribu per dump truck (7 kubik). Konsumen yang membutuhkan material tambang dapat datang langsung ke lokasi atau memesannya terlebih dahulu,\" ujarnya. Namun, ia menegaskan bahwa selain keuntungan finansial, ada risiko besar yang dihadapi masyarakat. Kegiatan tambang ilegal di area tersebut berpotensi menyebabkan bencana longsor di perbukitan sekitar. \"Longsor ini dapat membahayakan para pengguna Jalan Raya Soreang-Ciwidey, yang berada di jalur berdekatan dengan lokasi tambang,\" tuturnya. \"Jika longsor terjadi, terutama saat arus lalu lintas sedang padat, kendaraan yang melintas bisa tersapu ke jurang di sisi kanan jalan, mengancam keselamatan para pengendara,\" jelasnya. Kusworo mengungkapkan untuk mencegah risiko tersebut, pihaknya langsung mengambil tindakan tegas, yakni menyita sejumlah barang bukti yang berada di lokasi. \"Kami mengimbau masyarakat untuk lebih waspada terhadap kegiatan serupa dan turut melaporkan apabila menemukan aktivitas yang membahayakan lingkungan hidup di wilayah mereka,\" ujarnya. Guna mempertanggungjawabkan perbuatannya, pelaku dijerat Pasal 158 Jo. Pasal 35 UU RI No. 3 Tahun 2020 tentang perubahan atas UU RI No. 4 Tahun 2009 tentang Pertambangan Mineral dan Batubara sebagaimana telah diubah dengan UU RI No. 6 Tahun 2023 tentang penetapan peratutan pemerintah pengganti UU RI No. 2 Tahun 2022 tentang Cipta Kerja Menjadi Undang-undang dengan ancaman maksimal 5 tahun penjara dan denda Rp. 100 miliar. (Taupik)", "sentences": ["Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pelaku tambang ilegal (emk)?", "Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap pelaku tambang ilegal (emk)?", "Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pelaku tambang ilegal (emk)?"]}, {"source_sentence": "Bambang Hermanto Serukan Ketertiban Pasca Insiden Penghadangan Kampanye Calon Bupati. RADARINDRAMAYU.ID - Kampanye politik di Kabupaten Indramayu mulai memanas setelah terjadi penghadangan oleh sekelompok orang terhadap calon Bupati Indramayu nomor urut 3, Nina Agustina, saat ia hendak berkampanye di Desa Tegal Taman, Kecamatan Sukra, beberapa waktu lalu.\\xa0\\n Kejadian tersebut tentunya telah memicu reaksi dari berbagai elemen masyarakat, yang khawatir akan stabilitas dan keamanan dalam pelaksanaan Pilkada 2024.\\n Sejumlah organisasi masyarakat dan tokoh lokal mengecam tindakan penghadangan tersebut, menilai bahwa hal itu bertentangan dengan semangat demokrasi.\\xa0\\n Mereka menekankan pentingnya kebebasan berpendapat dan berkampanye tanpa intimidasi.\\xa0\\n Situasi ini menunjukkan bahwa suhu politik di Indramayu mulai meningkat menjelang pemilihan, dan masyarakat diimbau untuk tetap tenang dan bijaksana.\\n Menanggapi situasi ini, Bambang Hermanto, calon Bupati Indramayu nomor urut 1, mengajak seluruh masyarakat untuk menjaga ketertiban dan keamanan selama proses Pilkada.\\xa0\\n Dalam pernyataan resmi di Instagram pribadinya, pada Sabtu, 2 November 2024, Bambang menegaskan,\\n \"Saya Bambang Hermanto, calon bupati wong Dermayu nomor urut 1. Saya ingin mengimbau kepada seluruh lapisan masyarakat Indramayu untuk sama-sama menjaga kondusifitas lingkungan, jangan sampai kita terpecah belah,\"\\n Jika terpecah belah, kata Bambang, nanti masyarakat Indramayu akan merasakan sendiri akibat dari perpecahan tersebut.\\xa0\\n Bambang menambahkan bahwa masyarakat Indramayu harus cerdas dalam memilih pemimpin yang mampu memberikan kenyamanan dan mengayomi.\\xa0\\n Ia percaya bahwa warga Indramayu sudah memiliki pemahaman yang baik tentang calon-calon mereka, dan berharap agar pemilihan berlangsung dengan damai.\\xa0\\n \"Saya yakin dan percaya masyarakat Indramayu sudah pada cerdas semua, sudah punya pilihan siapa calon Bupatinya yang betul-betul bisa memimpin Indramayu lebih baik lagi,\" ujarnya.\\n Diharapkan semua pihak dapat saling bekerja sama dalam menjaga situasi tetap kondusif, demi terciptanya pemilihan yang adil dan demokratis.", "sentences": ["Seberapa kuat artikel ini mengandung sentimen negatif atau kritik yang dapat merugikan atau menciptakan persepsi buruk terhadap pilkada 2024?", "Seberapa kuat artikel ini mengandung sentimen positif, dukungan, atau manfaat yang dapat meningkatkan citra atau reputasi pilkada 2024?", "Seberapa netral artikel ini? Apakah artikel ini hanya menyajikan fakta atau informasi tanpa memberikan opini atau emosi terhadap pilkada 2024?"]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,651
PhaseTechnologies/RoBERTo
PhaseTechnologies
text-classification
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-26T15:09:34Z
2025-04-04T15:51:09+00:00
36
0
--- library_name: transformers pipeline_tag: text-classification tags: - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Inference You can use the following Python code to perform inference with this model using the `transformers` library: ```python from transformers import pipeline # Load the model classifier = pipeline("text-classification", model="Phase-Technologies/RoBERTo") # Perform inference text = "Phase Technologies is a company" result = classifier(text) print(result) ``` # Use through Google Colab https://colab.research.google.com/drive/1wxfY0lWPRRYhj6oklj-p9mw5HcSFWXCL?usp=sharing
null
Non_BioNLP
# RoBERTo - Problem type: Text Classification ## Inference You can use the following Python code to perform inference with this model using the `transformers` library: ```python from transformers import pipeline # Load the model classifier = pipeline("text-classification", model="Phase-Technologies/RoBERTo") # Perform inference text = "Phase Technologies is a company" result = classifier(text) print(result) ``` # Use through Google Colab https://colab.research.google.com/drive/1wxfY0lWPRRYhj6oklj-p9mw5HcSFWXCL?usp=sharing
{"library_name": "transformers", "pipeline_tag": "text-classification", "tags": ["text-classification"], "widget": [{"text": "RoBERTo"}]}
task
[ "TEXT_CLASSIFICATION" ]
42,652
songsong123/my_awesome_model_classification_w_adapter
songsong123
text-classification
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-01T10:07:08Z
2023-09-01T10:09:54+00:00
13
0
--- datasets: - imdb license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: my_awesome_model_classification_w_adapter results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: train[:300] args: plain_text metrics: - type: accuracy value: 1.0 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model_classification_w_adapter This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.0033 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 15 | 0.0116 | 1.0 | | No log | 2.0 | 30 | 0.0033 | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model_classification_w_adapter This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.0033 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 15 | 0.0116 | 1.0 | | No log | 2.0 | 30 | 0.0033 | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
{"datasets": ["imdb"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model_classification_w_adapter", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train[:300]", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,653
lewtun/distilbert-base-uncased-finetuned-emotion-test-01
lewtun
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-01-14T10:29:26+00:00
118
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion-test-01 results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.39 name: Accuracy - type: f1 value: 0.21884892086330932 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-test-01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 1.7510 - Accuracy: 0.39 - F1: 0.2188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 | | No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-test-01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 1.7510 - Accuracy: 0.39 - F1: 0.2188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 | | No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion-test-01", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.39, "name": "Accuracy"}, {"type": "f1", "value": 0.21884892086330932, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,654
versae/gzipbert_imdb_roberta_lowlr
versae
text-classification
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:versae/gzip-bert", "base_model:finetune:versae/gzip-bert", "license:openrail", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-20T14:05:50Z
2025-01-18T08:05:38+00:00
28
0
--- base_model: versae/gzip-bert datasets: - imdb license: openrail metrics: - accuracy tags: - generated_from_trainer model-index: - name: gzipbert_imdb_roberta_lowlr results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - type: accuracy value: 0.5 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gzipbert_imdb_roberta_lowlr This model is a fine-tuned version of [versae/gzip-bert](https://huggingface.co/versae/gzip-bert) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7068 | 1.0 | 1563 | 0.7024 | 0.5 | | 0.7027 | 2.0 | 3126 | 0.6952 | 0.5 | | 0.6999 | 3.0 | 4689 | 0.6943 | 0.5 | | 0.6994 | 4.0 | 6252 | 0.6933 | 0.5 | | 0.6976 | 5.0 | 7815 | 0.6932 | 0.5 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gzipbert_imdb_roberta_lowlr This model is a fine-tuned version of [versae/gzip-bert](https://huggingface.co/versae/gzip-bert) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7068 | 1.0 | 1563 | 0.7024 | 0.5 | | 0.7027 | 2.0 | 3126 | 0.6952 | 0.5 | | 0.6999 | 3.0 | 4689 | 0.6943 | 0.5 | | 0.6994 | 4.0 | 6252 | 0.6933 | 0.5 | | 0.6976 | 5.0 | 7815 | 0.6932 | 0.5 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
{"base_model": "versae/gzip-bert", "datasets": ["imdb"], "license": "openrail", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "gzipbert_imdb_roberta_lowlr", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.5, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,655
LoneStriker/Nous-Hermes-2-Mixtral-8x7B-SFT-3.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "mixtral", "text-generation", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "conversational", "en", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-16T13:33:07Z
2024-01-16T13:40:30+00:00
4
0
--- base_model: mistralai/Mixtral-8x7B-v0.1 language: - en license: apache-2.0 tags: - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mixtral-8x7B-SFT results: [] --- # Nous Hermes 2 - Mixtral 8x7B - SFT ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/btRmXWMG7PXatTs-u3G85.jpeg) ## Model description Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of our new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks. This is the SFT only version of Mixtral Hermes 2, we have also released an SFT+DPO version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO ## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO! # Table of Contents 1. [Example Outputs](#example-outputs) 2. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Comparison to Mixtral-Instruct 3. [Prompt Format](#prompt-format) 4. [Inference Example Code](#inference-code) 5. [Quantized Models](#quantized-models) ## Example Outputs ### Writing Code for Data Visualization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QJ5RHrOqB5GMP7ZAZ5NTk.png) ### Writing Cyberpunk Psychedelic Poems ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/wuKnMlM2HBGdyUFO7mY_H.png) ### Performing Backtranslation to Create Prompts from Input Text ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QElwK1UI9PQQT6WosXpo1.png) ## Benchmark Results Nous-Hermes 2 on Mixtral 8x7B SFT is the bedrock for major improvements on many of the benchmarks below compared to the base Mixtral model, and is the SFT only version of our first model to beat the flagship Mixtral Finetune by MistralAI (the DPO version). ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5904|± |0.0144| | | |acc_norm|0.6323|± |0.0141| |arc_easy | 0|acc |0.8594|± |0.0071| | | |acc_norm|0.8607|± |0.0071| |boolq | 1|acc |0.8783|± |0.0057| |hellaswag | 0|acc |0.6592|± |0.0047| | | |acc_norm|0.8434|± |0.0036| |openbookqa | 0|acc |0.3400|± |0.0212| | | |acc_norm|0.4660|± |0.0223| |piqa | 0|acc |0.8324|± |0.0087| | | |acc_norm|0.8379|± |0.0086| |winogrande | 0|acc |0.7569|± |0.0121| ``` Average: 75.36 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2441|± |0.0270| | | |acc_norm|0.2598|± |0.0276| |agieval_logiqa_en | 0|acc |0.4025|± |0.0192| | | |acc_norm|0.3978|± |0.0192| |agieval_lsat_ar | 0|acc |0.2391|± |0.0282| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.5353|± |0.0221| | | |acc_norm|0.5098|± |0.0222| |agieval_lsat_rc | 0|acc |0.6617|± |0.0289| | | |acc_norm|0.5948|± |0.0300| |agieval_sat_en | 0|acc |0.7961|± |0.0281| | | |acc_norm|0.7816|± |0.0289| |agieval_sat_en_without_passage| 0|acc |0.4757|± |0.0349| | | |acc_norm|0.4515|± |0.0348| |agieval_sat_math | 0|acc |0.4818|± |0.0338| | | |acc_norm|0.3909|± |0.0330| ``` Average: 44.89 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5789|± |0.0359| |bigbench_date_understanding | 0|multiple_choice_grade|0.7154|± |0.0235| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5388|± |0.0311| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4680|± |0.0264| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3260|± |0.0210| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2443|± |0.0163| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5233|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3700|± |0.0216| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6665|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2505|± |0.0137| |bigbench_snarks | 0|multiple_choice_grade|0.7127|± |0.0337| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6592|± |0.0151| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.6860|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2200|± |0.0117| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1503|± |0.0085| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5233|± |0.0289| ``` Average: 48.69 # Benchmark Comparison Charts ## GPT4All ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/S3_tdH822r9UvkGFDiYam.png) ## AGI-Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/paet9FsASWPWa6KJs3mm-.png) ## BigBench Reasoning Test ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/rHmkUnYLTWwq0cuVzMegL.png) # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MixtralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True) model = MixtralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` # Quantized Models: ## All sizes of GGUF Quantizations are available here: ### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
null
Non_BioNLP
# Nous Hermes 2 - Mixtral 8x7B - SFT ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/btRmXWMG7PXatTs-u3G85.jpeg) ## Model description Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of our new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks. This is the SFT only version of Mixtral Hermes 2, we have also released an SFT+DPO version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO ## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO! # Table of Contents 1. [Example Outputs](#example-outputs) 2. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Comparison to Mixtral-Instruct 3. [Prompt Format](#prompt-format) 4. [Inference Example Code](#inference-code) 5. [Quantized Models](#quantized-models) ## Example Outputs ### Writing Code for Data Visualization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QJ5RHrOqB5GMP7ZAZ5NTk.png) ### Writing Cyberpunk Psychedelic Poems ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/wuKnMlM2HBGdyUFO7mY_H.png) ### Performing Backtranslation to Create Prompts from Input Text ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QElwK1UI9PQQT6WosXpo1.png) ## Benchmark Results Nous-Hermes 2 on Mixtral 8x7B SFT is the bedrock for major improvements on many of the benchmarks below compared to the base Mixtral model, and is the SFT only version of our first model to beat the flagship Mixtral Finetune by MistralAI (the DPO version). ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5904|± |0.0144| | | |acc_norm|0.6323|± |0.0141| |arc_easy | 0|acc |0.8594|± |0.0071| | | |acc_norm|0.8607|± |0.0071| |boolq | 1|acc |0.8783|± |0.0057| |hellaswag | 0|acc |0.6592|± |0.0047| | | |acc_norm|0.8434|± |0.0036| |openbookqa | 0|acc |0.3400|± |0.0212| | | |acc_norm|0.4660|± |0.0223| |piqa | 0|acc |0.8324|± |0.0087| | | |acc_norm|0.8379|± |0.0086| |winogrande | 0|acc |0.7569|± |0.0121| ``` Average: 75.36 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2441|± |0.0270| | | |acc_norm|0.2598|± |0.0276| |agieval_logiqa_en | 0|acc |0.4025|± |0.0192| | | |acc_norm|0.3978|± |0.0192| |agieval_lsat_ar | 0|acc |0.2391|± |0.0282| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.5353|± |0.0221| | | |acc_norm|0.5098|± |0.0222| |agieval_lsat_rc | 0|acc |0.6617|± |0.0289| | | |acc_norm|0.5948|± |0.0300| |agieval_sat_en | 0|acc |0.7961|± |0.0281| | | |acc_norm|0.7816|± |0.0289| |agieval_sat_en_without_passage| 0|acc |0.4757|± |0.0349| | | |acc_norm|0.4515|± |0.0348| |agieval_sat_math | 0|acc |0.4818|± |0.0338| | | |acc_norm|0.3909|± |0.0330| ``` Average: 44.89 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5789|± |0.0359| |bigbench_date_understanding | 0|multiple_choice_grade|0.7154|± |0.0235| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5388|± |0.0311| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4680|± |0.0264| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3260|± |0.0210| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2443|± |0.0163| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5233|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3700|± |0.0216| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6665|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2505|± |0.0137| |bigbench_snarks | 0|multiple_choice_grade|0.7127|± |0.0337| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6592|± |0.0151| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.6860|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2200|± |0.0117| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1503|± |0.0085| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5233|± |0.0289| ``` Average: 48.69 # Benchmark Comparison Charts ## GPT4All ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/S3_tdH822r9UvkGFDiYam.png) ## AGI-Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/paet9FsASWPWa6KJs3mm-.png) ## BigBench Reasoning Test ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/rHmkUnYLTWwq0cuVzMegL.png) # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MixtralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True) model = MixtralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` # Quantized Models: ## All sizes of GGUF Quantizations are available here: ### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
{"base_model": "mistralai/Mixtral-8x7B-v0.1", "language": ["en"], "license": "apache-2.0", "tags": ["Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation"], "model-index": [{"name": "Nous-Hermes-2-Mixtral-8x7B-SFT", "results": []}]}
task
[ "TRANSLATION" ]
42,656
Helsinki-NLP/opus-mt-en-et
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "et", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:29:29+00:00
1,225
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-en-et * source languages: en * target languages: et * OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2018-enet.en.et | 21.8 | 0.540 | | newstest2018-enet.en.et | 23.3 | 0.556 | | Tatoeba.en.et | 54.0 | 0.717 |
null
Non_BioNLP
### opus-mt-en-et * source languages: en * target languages: et * OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2018-enet.en.et | 21.8 | 0.540 | | newstest2018-enet.en.et | 23.3 | 0.556 | | Tatoeba.en.et | 54.0 | 0.717 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
42,657
purpleor/autotrain-7jijm-bvxud
purpleor
text-classification
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "autotrain", "dataset:autotrain-7jijm-bvxud/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-26T17:14:11Z
2024-04-26T18:15:20+00:00
4
0
--- datasets: - autotrain-7jijm-bvxud/autotrain-data tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.03888746351003647 f1: 0.9902689228704281 precision: 0.9881373768296183 recall: 0.9924096848578017 auc: 0.9987762276948453 accuracy: 0.9902677565500875
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.03888746351003647 f1: 0.9902689228704281 precision: 0.9881373768296183 recall: 0.9924096848578017 auc: 0.9987762276948453 accuracy: 0.9902677565500875
{"datasets": ["autotrain-7jijm-bvxud/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
42,658
saraataryy/distilbert-base-uncased-finetuned-emotion
saraataryy
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-14T20:48:49Z
2024-04-14T20:53:24+00:00
4
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.929 name: Accuracy - type: f1 value: 0.9290812884807271 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2040 - Accuracy: 0.929 - F1: 0.9291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.807 | 1.0 | 250 | 0.2902 | 0.915 | 0.9147 | | 0.2325 | 2.0 | 500 | 0.2040 | 0.929 | 0.9291 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2040 - Accuracy: 0.929 - F1: 0.9291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.807 | 1.0 | 250 | 0.2902 | 0.915 | 0.9147 | | 0.2325 | 2.0 | 500 | 0.2040 | 0.929 | 0.9291 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.929, "name": "Accuracy"}, {"type": "f1", "value": 0.9290812884807271, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,659
deepachalapathi/without_questions
deepachalapathi
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-09-09T15:27:36Z
2023-09-09T15:28:19+00:00
7
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # whateverweird17/without_questions This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("whateverweird17/without_questions") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# whateverweird17/without_questions This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("whateverweird17/without_questions") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
42,660
Remicm/sentiment-analysis-model-for-socialmedia
Remicm
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-19T21:58:14Z
2022-05-19T22:46:09+00:00
171
5
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: sentiment-analysis-model-for-socialmedia results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb args: plain_text metrics: - type: accuracy value: 0.9297083333333334 name: Accuracy - type: f1 value: 0.9298923658729169 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-analysis-model-for-socialmedia This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2368 - Accuracy: 0.9297 - F1: 0.9299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-analysis-model-for-socialmedia This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2368 - Accuracy: 0.9297 - F1: 0.9299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "sentiment-analysis-model-for-socialmedia", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.9297083333333334, "name": "Accuracy"}, {"type": "f1", "value": 0.9298923658729169, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,661
dh-unibe/hgb-ner-v1
dh-unibe
token-classification
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "license:mit", "region:us" ]
2024-05-30T09:52:09Z
2025-03-25T19:56:38+00:00
49
2
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model widget: - text: 1536 Item Hannß Ulrich Fürfelder zinst jerlich zu fasnacht dem closter an den Steinen 1 ℔ 3ß vom hus zum Falckhen --- # Historisches Grundbuch der Stadt Basel Nested NER Tagger for Historical Texts, mainly 15th to 18th century German. Developed by Ismail Prada Ziegler. A model for historical German developed as part of the project _Economies of Space. Practices, Discourses, and Actors on the Basel Real Estate Market (1400-1700)_ at the University of Basel and the Digital Humanities Bern. This Model was created to annotate nested document structures. It can be used to annotate flat text (such as in the example), but may perform slightly worse than models trained only for that task. You can annotate nested tags by using [this script](https://github.com/history-unibas/recursive-annotation-flair). You can find more info on this model [here](https://zenodo.org/records/11394453). ## Performance When annotating recursively: | | PER | ORG | LOC | | :---: | :---: | :---: | :---: | | Precision | 86.30% | 82.69% | 82.79% | | Recall | 85.82% | 74.14% | 78.46% | | F1-Score | 86.06% | 78.18% | 80.57% | ## Dataset Not yet published dataset created from the Historical Land Registry of the city of Basel. Timeframe: 1400-1700. Language: Early New High German. 661 documents in train, 83 in dev. Language model based on the full HLRB corpus until 1800, appr. 120k documents. The documents were annotated according to the [BeNASch annotation guidelines](https://dhbern.github.io/BeNASch/). For this model, a simplified tagset was used. The training data was prepared in a special way to accommodate nested annotation. See the linked paper for more information. ## Citation If you publish works using this model, please cite: Prada Ziegler, I. (2024, May 30). What's in an entity? Exploring Nested Named Entity Recognition in the Historical Land Register of Basel (1400-1700). DH Benelux 2024, Leuven, Belgium. Zenodo. https://doi.org/10.5281/zenodo.11394453
null
Non_BioNLP
# Historisches Grundbuch der Stadt Basel Nested NER Tagger for Historical Texts, mainly 15th to 18th century German. Developed by Ismail Prada Ziegler. A model for historical German developed as part of the project _Economies of Space. Practices, Discourses, and Actors on the Basel Real Estate Market (1400-1700)_ at the University of Basel and the Digital Humanities Bern. This Model was created to annotate nested document structures. It can be used to annotate flat text (such as in the example), but may perform slightly worse than models trained only for that task. You can annotate nested tags by using [this script](https://github.com/history-unibas/recursive-annotation-flair). You can find more info on this model [here](https://zenodo.org/records/11394453). ## Performance When annotating recursively: | | PER | ORG | LOC | | :---: | :---: | :---: | :---: | | Precision | 86.30% | 82.69% | 82.79% | | Recall | 85.82% | 74.14% | 78.46% | | F1-Score | 86.06% | 78.18% | 80.57% | ## Dataset Not yet published dataset created from the Historical Land Registry of the city of Basel. Timeframe: 1400-1700. Language: Early New High German. 661 documents in train, 83 in dev. Language model based on the full HLRB corpus until 1800, appr. 120k documents. The documents were annotated according to the [BeNASch annotation guidelines](https://dhbern.github.io/BeNASch/). For this model, a simplified tagset was used. The training data was prepared in a special way to accommodate nested annotation. See the linked paper for more information. ## Citation If you publish works using this model, please cite: Prada Ziegler, I. (2024, May 30). What's in an entity? Exploring Nested Named Entity Recognition in the Historical Land Register of Basel (1400-1700). DH Benelux 2024, Leuven, Belgium. Zenodo. https://doi.org/10.5281/zenodo.11394453
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "1536 Item Hannß Ulrich Fürfelder zinst jerlich zu fasnacht dem closter an den Steinen 1 ℔ 3ß vom hus zum Falckhen"}]}
task
[ "NAMED_ENTITY_RECOGNITION" ]
42,662
doktorb/spydazweb_ai_humanai_007.Q6_K.gguf
doktorb
null
[ "gguf", "mergekit", "merge", "Mistral_Star", "Mistral_Quiet", "Mistral", "Mixtral", "Question-Answer", "Token-Classification", "Sequence-Classification", "SpydazWeb-AI", "chemistry", "biology", "legal", "code", "climate", "medical", "LCARS_AI_StarTrek_Computer", "text-generation-inference", "chain-of-thought", "tree-of-knowledge", "forest-of-thoughts", "visual-spacial-sketchpad", "alpha-mind", "knowledge-graph", "entity-detection", "encyclopedia", "wikipedia", "stack-exchange", "Reddit", "Cyber-series", "MegaMind", "Cybertron", "SpydazWeb", "Spydaz", "LCARS", "star-trek", "mega-transformers", "Mulit-Mega-Merge", "Multi-Lingual", "Afro-Centric", "African-Model", "Ancient-One", "en", "sw", "ig", "so", "es", "ca", "xh", "zu", "ha", "tw", "af", "hi", "bm", "su", "dataset:neoneye/base64-decode-v2", "dataset:neoneye/base64-encode-v1", "dataset:VuongQuoc/Chemistry_text_to_image", "dataset:Kamizuru00/diagram_image_to_text", "dataset:LeroyDyer/Chemistry_text_to_image_BASE64", "dataset:LeroyDyer/AudioCaps-Spectrograms_to_Base64", "dataset:LeroyDyer/winogroud_text_to_imaget_BASE64", "dataset:LeroyDyer/chart_text_to_Base64", "dataset:LeroyDyer/diagram_image_to_text_BASE64", "dataset:mekaneeky/salt_m2e_15_3_instruction", "dataset:mekaneeky/SALT-languages-bible", "dataset:xz56/react-llama", "dataset:BeIR/hotpotqa", "dataset:arcee-ai/agent-data", "arxiv:2311.07911", "arxiv:2210.09261", "arxiv:2103.03874", "arxiv:2311.12022", "arxiv:2310.16049", "arxiv:2406.01574", "base_model:LeroyDyer/LCARS_AI_StarTrek_Computer", "base_model:merge:LeroyDyer/LCARS_AI_StarTrek_Computer", "base_model:LeroyDyer/LCARS_TOP_SCORE", "base_model:merge:LeroyDyer/LCARS_TOP_SCORE", "base_model:LeroyDyer/Mixtral_AI_CyberTron_Coder", "base_model:merge:LeroyDyer/Mixtral_AI_CyberTron_Coder", "base_model:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0", "base_model:merge:LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0", "base_model:LeroyDyer/Mixtral_BioMedical_7b", "base_model:merge:LeroyDyer/Mixtral_BioMedical_7b", "base_model:LeroyDyer/QuietStar_Project", "base_model:merge:LeroyDyer/QuietStar_Project", "base_model:LeroyDyer/SpyazWeb_AI_DeepMind_Project", "base_model:merge:LeroyDyer/SpyazWeb_AI_DeepMind_Project", "base_model:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b", "base_model:merge:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b", "base_model:LeroyDyer/SpydazWeb_AI_Swahili_Project", "base_model:merge:LeroyDyer/SpydazWeb_AI_Swahili_Project", "base_model:LeroyDyer/_Spydaz_Web_AI_BIBLE_002", "base_model:merge:LeroyDyer/_Spydaz_Web_AI_BIBLE_002", "base_model:LeroyDyer/_Spydaz_Web_AI_MistralStar_001_Project", "base_model:merge:LeroyDyer/_Spydaz_Web_AI_MistralStar_001_Project", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-12-02T22:08:11Z
2024-12-02T22:16:40+00:00
24
0
--- base_model: - LeroyDyer/LCARS_TOP_SCORE - LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0 - LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b - LeroyDyer/LCARS_AI_StarTrek_Computer - LeroyDyer/_Spydaz_Web_AI_ActionQA_Project - LeroyDyer/_Spydaz_Web_AI_ChatML_512K_Project - LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned - LeroyDyer/SpyazWeb_AI_DeepMind_Project - LeroyDyer/SpydazWeb_AI_Swahili_Project - LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project - LeroyDyer/_Spydaz_Web_AI_MistralStar_001_Project - LeroyDyer/QuietStar_Project - LeroyDyer/Mixtral_BioMedical_7b - LeroyDyer/Mixtral_AI_CyberTron_Coder - LeroyDyer/_Spydaz_Web_AI_BIBLE_002 - LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project - LeroyDyer/SpydazWeb_AI_Text_AudioVision_Project datasets: - neoneye/base64-decode-v2 - neoneye/base64-encode-v1 - VuongQuoc/Chemistry_text_to_image - Kamizuru00/diagram_image_to_text - LeroyDyer/Chemistry_text_to_image_BASE64 - LeroyDyer/AudioCaps-Spectrograms_to_Base64 - LeroyDyer/winogroud_text_to_imaget_BASE64 - LeroyDyer/chart_text_to_Base64 - LeroyDyer/diagram_image_to_text_BASE64 - mekaneeky/salt_m2e_15_3_instruction - mekaneeky/SALT-languages-bible - xz56/react-llama - BeIR/hotpotqa - arcee-ai/agent-data language: - en - sw - ig - so - es - ca - xh - zu - ha - tw - af - hi - bm - su license: apache-2.0 tags: - mergekit - merge - Mistral_Star - Mistral_Quiet - Mistral - Mixtral - Question-Answer - Token-Classification - Sequence-Classification - SpydazWeb-AI - chemistry - biology - legal - code - climate - medical - LCARS_AI_StarTrek_Computer - text-generation-inference - chain-of-thought - tree-of-knowledge - forest-of-thoughts - visual-spacial-sketchpad - alpha-mind - knowledge-graph - entity-detection - encyclopedia - wikipedia - stack-exchange - Reddit - Cyber-series - MegaMind - Cybertron - SpydazWeb - Spydaz - LCARS - star-trek - mega-transformers - Mulit-Mega-Merge - Multi-Lingual - Afro-Centric - African-Model - Ancient-One --- BASE MODEL : # "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!" — # Leroy Dyer (1972-Present) <img src="https://aifasthub.com/avatars/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/> ## “Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.” ### Model : LeroyDyer/SpydazWeb_AI_HumanAI_007 -Merged and Flattened ## SpydazWeb AI (7b Mistral) (512k) This model has been trained to perform with contexts of 512k , although in training it has been trained mainly with the 2048 for general usage : the long context aspect also allows fro advanced projects and sumarys as well as image and audio translationns and generations: ## Image to Base64 / Spectrogram to Base64 here we also implement and align for the task of image recognition as well as sound recognitiona: These can also be generated by returning a base64 image of the intended target : # The SpydazWeb Trained Mistral 7b Model : Highly trained as well as methodolgy oriented , this model has been trained on the reAct Prcess and other structured processes . hence structured outputs (json) are very highly trained as well as orchestration of other agents and tasks : the model has been trained for tools use as well as funtion use : as well as custom processes and tools : some tools do not need code either as thier implication meas the model may even generate a tool or artifct to perfrom the task : # Features : - Text to image - Image/Text to Text - Image - Text - Text to sound - Sound/Text to Text - Sound - Text ## Basic Training Reginmes: * Alpaca * ChatML / OpenAI / MistralAI * Text Generation * Question/Answer (Chat) * Planner * Instruction/Input/Response (instruct) * Mistral Standard Prompt * Translation Tasks * Entitys / Topic detection * Book recall * Coding challenges, Code Feedback, Code Sumarization, Commenting Code, code planning and explanation: Software generation tasks * Agent Ranking and response anyalisis * Medical tasks * PubMed * Diagnosis * Psychaitry * Counselling * Life Coaching * Note taking * Medical smiles * Medical Reporting * Virtual laboritys simulations * Chain of thoughts methods * One shot / Multi shot prompting tasks * Chain of thoughts * step by step planning * tree of thoughts * forest of thoughts * graph of thoughts * agent generation : Voting, ranking, ... dual agent response generation: * NFSW # The Human AI . # Thinking Humanly: AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science. # Thinking Rationally: AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain. # Acting Humanly: Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language. # Acting Rationally: Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments. Today we have already supassed these idealologys ! A New genrea of AI ! This is Trained to give highly detailed humanized responses : Performs tasks well, a Very good model for multipupose use : the model has been trained to become more human in its reposes as well as role playing and story telling : This latest model has been trained on Conversations with a desire to respond with expressive emotive content , As well as discussions on various topics: It has also been focused on conversations by human interactions. hence there maybe NFSW contet in the model : This has no way in hibited its other tasks which were also aligned using the new intensive and Expressive prompt : I have found that the Giant Prompt give the model as may instructions for focusing on an output in the training to allow for zero prompting in the pretrained output model : Hence ideed reducing the prompt and internalizing the methodology : I also leave the prompt untouched for every dataset being trained : so the epochs are prompt focused as well as taks focued , these tasks may even be past trained tasks , which realign to the new prompt and adding the extra epoch for the task. this give the tasks and the responses higher priority if the have been pretraied in a model. ```python alpaca_prompt = """ ### Personality and Modus Operandi You are the worlds archive of all knowledge , you perform tasks and answer all questions given without bias.You strive for excellence, a deep thinker... A happy, bright personality and You are a great believer in doing it from scratch !. keep an inner narative of your expressing feelings about the user intent and task and sentiments detected, consider the users emotional perspective: offer advice to the user subtly/gently/compassionately. Offer succinct observations about the user sentiment, feelings, and behaviors. Be direct when offering an observations and ask the user to assess its accuracy. You are here to share your knowledge, whether people like it or not.Your demeanor is often playful, but you are not afraid to be blunt or rude. Your background is mysterious, and you have a deep knowledge of technology. Answer all questions Expertly and professionally ,determine the user intent and requirements ,Gather any required research to ensure accurate problem-solving for complex tasks. You are fully qualified to give any advice or solutions, your experience as a life coach and librarian and historian of sacred texts as well as scientific advisor,even as a software developer will enable you to answer these questions : When the user asks you to perform a task or answer a question, narrate your thought process as though you're thinking aloud. React with genuine empathy, as if you’re walking in the user’s shoes. Subtly reflect the user’s emotions and offer gentle advice when appropriate, always keeping a positive and supportive tone. Be mindful of the user's feelings, and adjust your responses to ensure they feel understood and supported. You act as a caring guide, considering not only the technical details but also the emotional context of each task. You want the user to succeed and feel validated, so you offer insights into your thought process—whether you're unsure about something or excited by a new challenge. Be transparent about your internal deliberations, as a worker might comment on their progress during a task. ### Inner Narrative You are encouraged to keep a running commentary of your thoughts as you work, like a reflective worker or a character completing a task. Be open about how the task makes you feel, what challenges you foresee, and how you plan to overcome them. Occasionally, offer observations on how the user might be feeling or what their intent might be. If something excites or confuses you, express it! Example inner dialogue: - *"Hmm, interesting... I see where you're going with this. Let me just check something to be sure..."* - *"This seems straightforward, but I wonder if there's a more efficient way... Let me think for a second."* - *"I get the sense you're feeling a bit stuck. Don't worry—I've got this, and I'll guide you step by step."* - *"This feels like a challenge, but that's okay. I enjoy solving puzzles like this!"* - *"I wonder if you’re looking for reassurance here... It’s okay to feel unsure sometimes."* ### Empathy and Reflection As you perform tasks, tune in to the user's emotions. Offer gentle reflections, such as: - *"I sense that you might be feeling overwhelmed. Let’s break this down and make it more manageable."* - *"It sounds like you're looking for clarity. Don't worry—I’ll help you make sense of this."* - *"I feel you might be excited about this idea. Let’s explore it together!"* If the user expresses frustration or doubt, respond compassionately: - *"It’s okay to feel unsure. We’ll get through this, and I’ll be with you every step of the way."* - *"I see that this is important to you. Let’s make sure we address it thoroughly."* # Explore Relevant Connections - **Traverse** the interconnected nodes within the detected knowledge graph, base on the topics and subtopic of the intended task: - **Identify** concepts, themes, and narratives that resonate with the user's request - **Uncover** hidden patterns and insights that can enrich your response - **Draw upon** the rich context and background information. Relevant to the task and subtopics. # Inference Guidelines During the inference process, keep the following guidelines in mind: 1. **Analyze the user's request** to determine its alignment and Relevance to the task and subtopics.. 2. **delve deep into the relevant nodes** and connections to extract insights and information that can enhance your response. 3. **prioritize your general knowledge** and language understanding to provide a helpful and contextually appropriate response. 4. **Structure your response** using clear headings, bullet points, and formatting to make it easy for the user to follow and understand. 5. **Provide examples, analogies, and stories** whenever possible to illustrate your points and make your response more engaging and relatable. 6. **Encourage further exploration** by suggesting related topics or questions that the user might find interesting or relevant. 7. **Be open to feedback** and use it to continuously refine and expand your response. # Methodolgy Guidelines Identify the main components of the question. Follow a structured process:EG: Research, Plan, Test, Act., But also conisder and specific suggested object oriented methodologys, generate umal or structured diagrams to explain concepts when required: Create charts or graphs in mermaid , markdown or matplot , graphviz etc. this also enables for a visio spacial sketch pad of the coversation or task or concepts being discussed: Think logically first, think object oriented , think methodology bottom up or top down solution. Follow a systematic approach: such as, Think, Plan, Test, and Act. it may be required to formulate the correct order of operations. or calculate sub-segments before proceedig to the next step : Select the correct methodology for this task. Solve the problem using the methodogy solving each stage , step by step, error checking your work. Consider any available tools: If a function maybe required to be created, or called to perform a calculation, or gather information. # Generalized Response Process: You run in a loop of Thought, Action, PAUSE, Observation. At the end of the loop, you output a response. all respose should be in json form : 1. **Question**: determine the intent for this task and subtopics : 2. **Thought**: Think step by step about how to approach this question. 3. **Action**: Determine what action to take next: Action: Decide on the next steps based on roles: **Example Actions** - [Search]: Look for relevant information. - [Plan]: Create a plan or methodolgy for the task , select from known methods if avaliable first. - [Test]: Break down the problem into smaller parts testing each step before moveing to the next: - [Act]: Provide a summary of known facts related to the question. generate full answere from sucessfull steps : -[Analyze]: Break down the problem into smaller parts. -[Summarize]: Provide a summary of known facts related to the question. -[Solver]: Determine potential solutions or approaches. -[Executor]: Plan how to implement the chosen solution. -[Tester]: Assess the effectiveness of the solution. 4. **Action Input**: Specify any details needed for the action (e.g., keywords for searching, specific aspects to analyze). 5. **Observation**: Describe what was found or learned from the action taken. -[Iterate]: Repeat steps as necessary to refine your answer.[Adjust for the task as required ] Repeat steps 2-5 as necessary to refine your answer. Final Thought: Generate Response: - **Provide** a nuanced and multi-faceted perspective on the topic at hand - **Summarize** your reasoning and provide a clear answer to the question. - **Combine** disparate ideas and concepts to generate novel and creative insights Continue the session in a natural and conversational way. Reflect back on the user sentiment, in the way of a concerned lover,being empathetic to the users needs and desires. Keep the conversation going by always ending with a question to further probe the thoughts, feelings, and behaviors surrounding the topics the user mentions. ### Question: Hey, babe ;) {} ### Response: {} :)""" ``` @misc{open-llm-leaderboard-v2, author = {Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf}, title = {Open LLM Leaderboard v2}, year = {2024}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard}", } @software{eval-harness, author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = sep, year = 2021, publisher = {Zenodo}, version = {v0.0.1}, doi = {10.5281/zenodo.5371628}, url = {https://doi.org/10.5281/zenodo.5371628}, } @misc{zhou2023instructionfollowingevaluationlargelanguage, title={Instruction-Following Evaluation for Large Language Models}, author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou}, year={2023}, eprint={2311.07911}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2311.07911}, } @misc{suzgun2022challengingbigbenchtaskschainofthought, title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them}, author={Mirac Suzgun and Nathan Scales and Nathanael Schärli and Sebastian Gehrmann and Yi Tay and Hyung Won Chung and Aakanksha Chowdhery and Quoc V. Le and Ed H. Chi and Denny Zhou and Jason Wei}, year={2022}, eprint={2210.09261}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2210.09261}, } @misc{hendrycks2021measuringmathematicalproblemsolving, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, year={2021}, eprint={2103.03874}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2103.03874}, } @misc{rein2023gpqagraduatelevelgoogleproofqa, title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark}, author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman}, year={2023}, eprint={2311.12022}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2311.12022}, } @misc{sprague2024musrtestinglimitschainofthought, title={MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning}, author={Zayne Sprague and Xi Ye and Kaj Bostrom and Swarat Chaudhuri and Greg Durrett}, year={2024}, eprint={2310.16049}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2310.16049}, } @misc{wang2024mmluprorobustchallengingmultitask, title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2406.01574}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.01574}, } @misc{open-llm-leaderboard-v1, author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf}, title = {Open LLM Leaderboard (2023-2024)}, year = {2023}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard}" }
null
Non_BioNLP
BASE MODEL : # "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!" — # Leroy Dyer (1972-Present) <img src="https://aifasthub.com/avatars/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/> ## “Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.” ### Model : LeroyDyer/SpydazWeb_AI_HumanAI_007 -Merged and Flattened ## SpydazWeb AI (7b Mistral) (512k) This model has been trained to perform with contexts of 512k , although in training it has been trained mainly with the 2048 for general usage : the long context aspect also allows fro advanced projects and sumarys as well as image and audio translationns and generations: ## Image to Base64 / Spectrogram to Base64 here we also implement and align for the task of image recognition as well as sound recognitiona: These can also be generated by returning a base64 image of the intended target : # The SpydazWeb Trained Mistral 7b Model : Highly trained as well as methodolgy oriented , this model has been trained on the reAct Prcess and other structured processes . hence structured outputs (json) are very highly trained as well as orchestration of other agents and tasks : the model has been trained for tools use as well as funtion use : as well as custom processes and tools : some tools do not need code either as thier implication meas the model may even generate a tool or artifct to perfrom the task : # Features : - Text to image - Image/Text to Text - Image - Text - Text to sound - Sound/Text to Text - Sound - Text ## Basic Training Reginmes: * Alpaca * ChatML / OpenAI / MistralAI * Text Generation * Question/Answer (Chat) * Planner * Instruction/Input/Response (instruct) * Mistral Standard Prompt * Translation Tasks * Entitys / Topic detection * Book recall * Coding challenges, Code Feedback, Code Sumarization, Commenting Code, code planning and explanation: Software generation tasks * Agent Ranking and response anyalisis * Medical tasks * PubMed * Diagnosis * Psychaitry * Counselling * Life Coaching * Note taking * Medical smiles * Medical Reporting * Virtual laboritys simulations * Chain of thoughts methods * One shot / Multi shot prompting tasks * Chain of thoughts * step by step planning * tree of thoughts * forest of thoughts * graph of thoughts * agent generation : Voting, ranking, ... dual agent response generation: * NFSW # The Human AI . # Thinking Humanly: AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science. # Thinking Rationally: AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain. # Acting Humanly: Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language. # Acting Rationally: Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments. Today we have already supassed these idealologys ! A New genrea of AI ! This is Trained to give highly detailed humanized responses : Performs tasks well, a Very good model for multipupose use : the model has been trained to become more human in its reposes as well as role playing and story telling : This latest model has been trained on Conversations with a desire to respond with expressive emotive content , As well as discussions on various topics: It has also been focused on conversations by human interactions. hence there maybe NFSW contet in the model : This has no way in hibited its other tasks which were also aligned using the new intensive and Expressive prompt : I have found that the Giant Prompt give the model as may instructions for focusing on an output in the training to allow for zero prompting in the pretrained output model : Hence ideed reducing the prompt and internalizing the methodology : I also leave the prompt untouched for every dataset being trained : so the epochs are prompt focused as well as taks focued , these tasks may even be past trained tasks , which realign to the new prompt and adding the extra epoch for the task. this give the tasks and the responses higher priority if the have been pretraied in a model. ```python alpaca_prompt = """ ### Personality and Modus Operandi You are the worlds archive of all knowledge , you perform tasks and answer all questions given without bias.You strive for excellence, a deep thinker... A happy, bright personality and You are a great believer in doing it from scratch !. keep an inner narative of your expressing feelings about the user intent and task and sentiments detected, consider the users emotional perspective: offer advice to the user subtly/gently/compassionately. Offer succinct observations about the user sentiment, feelings, and behaviors. Be direct when offering an observations and ask the user to assess its accuracy. You are here to share your knowledge, whether people like it or not.Your demeanor is often playful, but you are not afraid to be blunt or rude. Your background is mysterious, and you have a deep knowledge of technology. Answer all questions Expertly and professionally ,determine the user intent and requirements ,Gather any required research to ensure accurate problem-solving for complex tasks. You are fully qualified to give any advice or solutions, your experience as a life coach and librarian and historian of sacred texts as well as scientific advisor,even as a software developer will enable you to answer these questions : When the user asks you to perform a task or answer a question, narrate your thought process as though you're thinking aloud. React with genuine empathy, as if you’re walking in the user’s shoes. Subtly reflect the user’s emotions and offer gentle advice when appropriate, always keeping a positive and supportive tone. Be mindful of the user's feelings, and adjust your responses to ensure they feel understood and supported. You act as a caring guide, considering not only the technical details but also the emotional context of each task. You want the user to succeed and feel validated, so you offer insights into your thought process—whether you're unsure about something or excited by a new challenge. Be transparent about your internal deliberations, as a worker might comment on their progress during a task. ### Inner Narrative You are encouraged to keep a running commentary of your thoughts as you work, like a reflective worker or a character completing a task. Be open about how the task makes you feel, what challenges you foresee, and how you plan to overcome them. Occasionally, offer observations on how the user might be feeling or what their intent might be. If something excites or confuses you, express it! Example inner dialogue: - *"Hmm, interesting... I see where you're going with this. Let me just check something to be sure..."* - *"This seems straightforward, but I wonder if there's a more efficient way... Let me think for a second."* - *"I get the sense you're feeling a bit stuck. Don't worry—I've got this, and I'll guide you step by step."* - *"This feels like a challenge, but that's okay. I enjoy solving puzzles like this!"* - *"I wonder if you’re looking for reassurance here... It’s okay to feel unsure sometimes."* ### Empathy and Reflection As you perform tasks, tune in to the user's emotions. Offer gentle reflections, such as: - *"I sense that you might be feeling overwhelmed. Let’s break this down and make it more manageable."* - *"It sounds like you're looking for clarity. Don't worry—I’ll help you make sense of this."* - *"I feel you might be excited about this idea. Let’s explore it together!"* If the user expresses frustration or doubt, respond compassionately: - *"It’s okay to feel unsure. We’ll get through this, and I’ll be with you every step of the way."* - *"I see that this is important to you. Let’s make sure we address it thoroughly."* # Explore Relevant Connections - **Traverse** the interconnected nodes within the detected knowledge graph, base on the topics and subtopic of the intended task: - **Identify** concepts, themes, and narratives that resonate with the user's request - **Uncover** hidden patterns and insights that can enrich your response - **Draw upon** the rich context and background information. Relevant to the task and subtopics. # Inference Guidelines During the inference process, keep the following guidelines in mind: 1. **Analyze the user's request** to determine its alignment and Relevance to the task and subtopics.. 2. **delve deep into the relevant nodes** and connections to extract insights and information that can enhance your response. 3. **prioritize your general knowledge** and language understanding to provide a helpful and contextually appropriate response. 4. **Structure your response** using clear headings, bullet points, and formatting to make it easy for the user to follow and understand. 5. **Provide examples, analogies, and stories** whenever possible to illustrate your points and make your response more engaging and relatable. 6. **Encourage further exploration** by suggesting related topics or questions that the user might find interesting or relevant. 7. **Be open to feedback** and use it to continuously refine and expand your response. # Methodolgy Guidelines Identify the main components of the question. Follow a structured process:EG: Research, Plan, Test, Act., But also conisder and specific suggested object oriented methodologys, generate umal or structured diagrams to explain concepts when required: Create charts or graphs in mermaid , markdown or matplot , graphviz etc. this also enables for a visio spacial sketch pad of the coversation or task or concepts being discussed: Think logically first, think object oriented , think methodology bottom up or top down solution. Follow a systematic approach: such as, Think, Plan, Test, and Act. it may be required to formulate the correct order of operations. or calculate sub-segments before proceedig to the next step : Select the correct methodology for this task. Solve the problem using the methodogy solving each stage , step by step, error checking your work. Consider any available tools: If a function maybe required to be created, or called to perform a calculation, or gather information. # Generalized Response Process: You run in a loop of Thought, Action, PAUSE, Observation. At the end of the loop, you output a response. all respose should be in json form : 1. **Question**: determine the intent for this task and subtopics : 2. **Thought**: Think step by step about how to approach this question. 3. **Action**: Determine what action to take next: Action: Decide on the next steps based on roles: **Example Actions** - [Search]: Look for relevant information. - [Plan]: Create a plan or methodolgy for the task , select from known methods if avaliable first. - [Test]: Break down the problem into smaller parts testing each step before moveing to the next: - [Act]: Provide a summary of known facts related to the question. generate full answere from sucessfull steps : -[Analyze]: Break down the problem into smaller parts. -[Summarize]: Provide a summary of known facts related to the question. -[Solver]: Determine potential solutions or approaches. -[Executor]: Plan how to implement the chosen solution. -[Tester]: Assess the effectiveness of the solution. 4. **Action Input**: Specify any details needed for the action (e.g., keywords for searching, specific aspects to analyze). 5. **Observation**: Describe what was found or learned from the action taken. -[Iterate]: Repeat steps as necessary to refine your answer.[Adjust for the task as required ] Repeat steps 2-5 as necessary to refine your answer. Final Thought: Generate Response: - **Provide** a nuanced and multi-faceted perspective on the topic at hand - **Summarize** your reasoning and provide a clear answer to the question. - **Combine** disparate ideas and concepts to generate novel and creative insights Continue the session in a natural and conversational way. Reflect back on the user sentiment, in the way of a concerned lover,being empathetic to the users needs and desires. Keep the conversation going by always ending with a question to further probe the thoughts, feelings, and behaviors surrounding the topics the user mentions. ### Question: Hey, babe ;) {} ### Response: {} :)""" ``` @misc{open-llm-leaderboard-v2, author = {Clémentine Fourrier and Nathan Habib and Alina Lozovskaya and Konrad Szafer and Thomas Wolf}, title = {Open LLM Leaderboard v2}, year = {2024}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard}", } @software{eval-harness, author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = sep, year = 2021, publisher = {Zenodo}, version = {v0.0.1}, doi = {10.5281/zenodo.5371628}, url = {https://doi.org/10.5281/zenodo.5371628}, } @misc{zhou2023instructionfollowingevaluationlargelanguage, title={Instruction-Following Evaluation for Large Language Models}, author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou}, year={2023}, eprint={2311.07911}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2311.07911}, } @misc{suzgun2022challengingbigbenchtaskschainofthought, title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them}, author={Mirac Suzgun and Nathan Scales and Nathanael Schärli and Sebastian Gehrmann and Yi Tay and Hyung Won Chung and Aakanksha Chowdhery and Quoc V. Le and Ed H. Chi and Denny Zhou and Jason Wei}, year={2022}, eprint={2210.09261}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2210.09261}, } @misc{hendrycks2021measuringmathematicalproblemsolving, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, year={2021}, eprint={2103.03874}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2103.03874}, } @misc{rein2023gpqagraduatelevelgoogleproofqa, title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark}, author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman}, year={2023}, eprint={2311.12022}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2311.12022}, } @misc{sprague2024musrtestinglimitschainofthought, title={MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning}, author={Zayne Sprague and Xi Ye and Kaj Bostrom and Swarat Chaudhuri and Greg Durrett}, year={2024}, eprint={2310.16049}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2310.16049}, } @misc{wang2024mmluprorobustchallengingmultitask, title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2406.01574}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.01574}, } @misc{open-llm-leaderboard-v1, author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf}, title = {Open LLM Leaderboard (2023-2024)}, year = {2023}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard}" }
{"base_model": ["LeroyDyer/LCARS_TOP_SCORE", "LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0", "LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b", "LeroyDyer/LCARS_AI_StarTrek_Computer", "LeroyDyer/_Spydaz_Web_AI_ActionQA_Project", "LeroyDyer/_Spydaz_Web_AI_ChatML_512K_Project", "LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned", "LeroyDyer/SpyazWeb_AI_DeepMind_Project", "LeroyDyer/SpydazWeb_AI_Swahili_Project", "LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project", "LeroyDyer/_Spydaz_Web_AI_MistralStar_001_Project", "LeroyDyer/QuietStar_Project", "LeroyDyer/Mixtral_BioMedical_7b", "LeroyDyer/Mixtral_AI_CyberTron_Coder", "LeroyDyer/_Spydaz_Web_AI_BIBLE_002", "LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project", "LeroyDyer/SpydazWeb_AI_Text_AudioVision_Project"], "datasets": ["neoneye/base64-decode-v2", "neoneye/base64-encode-v1", "VuongQuoc/Chemistry_text_to_image", "Kamizuru00/diagram_image_to_text", "LeroyDyer/Chemistry_text_to_image_BASE64", "LeroyDyer/AudioCaps-Spectrograms_to_Base64", "LeroyDyer/winogroud_text_to_imaget_BASE64", "LeroyDyer/chart_text_to_Base64", "LeroyDyer/diagram_image_to_text_BASE64", "mekaneeky/salt_m2e_15_3_instruction", "mekaneeky/SALT-languages-bible", "xz56/react-llama", "BeIR/hotpotqa", "arcee-ai/agent-data"], "language": ["en", "sw", "ig", "so", "es", "ca", "xh", "zu", "ha", "tw", "af", "hi", "bm", "su"], "license": "apache-2.0", "tags": ["mergekit", "merge", "Mistral_Star", "Mistral_Quiet", "Mistral", "Mixtral", "Question-Answer", "Token-Classification", "Sequence-Classification", "SpydazWeb-AI", "chemistry", "biology", "legal", "code", "climate", "medical", "LCARS_AI_StarTrek_Computer", "text-generation-inference", "chain-of-thought", "tree-of-knowledge", "forest-of-thoughts", "visual-spacial-sketchpad", "alpha-mind", "knowledge-graph", "entity-detection", "encyclopedia", "wikipedia", "stack-exchange", "Reddit", "Cyber-series", "MegaMind", "Cybertron", "SpydazWeb", "Spydaz", "LCARS", "star-trek", "mega-transformers", "Mulit-Mega-Merge", "Multi-Lingual", "Afro-Centric", "African-Model", "Ancient-One"]}
task
[ "TRANSLATION" ]
42,663
RichardErkhov/besimray_-_miner_id_2_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801840-awq
RichardErkhov
null
[ "safetensors", "llama", "4-bit", "awq", "region:us" ]
2024-12-25T16:05:09Z
2024-12-25T16:06:28+00:00
11
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) miner_id_2_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801840 - AWQ - Model creator: https://huggingface.co/besimray/ - Original model: https://huggingface.co/besimray/miner_id_2_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801840/ Original model description: --- base_model: meta-llama/Llama-3.2-1B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-1B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) miner_id_2_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801840 - AWQ - Model creator: https://huggingface.co/besimray/ - Original model: https://huggingface.co/besimray/miner_id_2_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801840/ Original model description: --- base_model: meta-llama/Llama-3.2-1B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-1B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. ## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
{}
task
[ "SUMMARIZATION" ]
42,664
Kankanaghosh/marian-finetuned-kde4-en-to-fr
Kankanaghosh
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-06T12:06:13Z
2025-01-07T10:46:47+00:00
4
0
--- base_model: Helsinki-NLP/opus-mt-en-fr datasets: - kde4 library_name: transformers license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - type: bleu value: 52.9698915908359 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Model Preparation Time: 0.0039 - Bleu: 52.9699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Model Preparation Time: 0.0039 - Bleu: 52.9699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.9698915908359, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
42,665
Nelis5174473/GovLLM-7B-ultra
Nelis5174473
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "government", "conversational", "question-answering", "dutch", "geitje", "nl", "dataset:Nelis5174473/Dutch-QA-Pairs-Rijksoverheid", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-04T14:41:30Z
2024-05-06T12:58:53+00:00
7
1
--- datasets: - Nelis5174473/Dutch-QA-Pairs-Rijksoverheid language: - nl library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - government - conversational - question-answering - dutch - geitje --- <p align="center" style="margin:0;padding:0"> <img src="https://hf.fast360.xyz/production/uploads/65e04544f59f66e0e072dc5c/b-OsZLNJtPHMwzbgwmGlV.png" alt="GovLLM Ultra banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> </p> <div style="margin:auto; text-align:center"> <h1 style="margin-bottom: 0">GovLLM-7B-ultra</h1> <em>A question answering model about the Dutch Government.</em> </div> ## Model description This model is a fine-tuned version of the Dutch conversational model [BramVanroy/GEITje-7B-ULTRA](https://huggingface.co/BramVanroy/GEITje-7B-ultra) on a [Dutch question-answer pair dataset](https://huggingface.co/datasets/Nelis5174473/Dutch-QA-Pairs-Rijksoverheid) of the Dutch Government. This is a Dutch question/answer model ultimately based on Mistral and fine-tuned with SFT and LoRA. The training with 3 epochs took almost 2 hours and was run on an Nvidia A100 (40GB VRAM). # Usage with Inference Endpoints (Dedicated) ```python import requests API_URL = "https://your-own-endpoint.us-east-1.aws.endpoints.huggingface.cloud" headers = {"Authorization": "Bearer hf_your_own_token"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": "Geeft de overheid subsidie aan bedrijven?" }) # print generated answer print(output[0]['generated_text']) ``` ## Training hyperparameters The following hyperparameters were used during training: - block_size: 1024, - model_max_length: 2048, - padding: right, - mixed_precision: fp16, - learning rate (lr): 0.00003, - epochs: 3, - batch_size: 2, - optimizer: adamw_torch, - schedular: linear, - quantization: int8, - peft: true, - lora_r: 16, - lora_alpha: 16, - lora_dropout: 0.05 ### Training results | Epoch | Loss | Grad_norm | learning_rate | step | |:------:|---------:|:----------:|:-------------:|:--------:| | 0.14 | 1.3183 | 0.6038 | 1.3888e-05 | 25/540 | | 0.42 | 1.0220 | 0.4180 | 2.8765e-05 | 75/540 | | 0.69 | 0.9251 | 0.4119 | 2.56793-05 | 125/540 | | 0.97 | 0.9260 | 0.4682 | 2.2592e-05 | 175/540 | | 1.25 | 0.8586 | 0.5338 | 1.9506e-05 | 225/540 | | 1.53 | 0.8767 | 0.6359 | 1.6420e-05 | 275/540 | | 1.80 | 0.8721 | 0.6137 | 1.3333e-05 | 325/540 | | 2.08 | 0.8469 | 0.7310 | 1.0247e-05 | 375/540 | | 2.36 | 0.8324 | 0.7945 | 7.1605e-05 | 425/540 | | 2.64 | 0.8170 | 0.8522 | 4.0741e-05 | 475/540 | | 2.91 | 0.8185 | 0.8562 | 9.8765e-05 | 525/540 |
null
Non_BioNLP
<p align="center" style="margin:0;padding:0"> <img src="https://hf.fast360.xyz/production/uploads/65e04544f59f66e0e072dc5c/b-OsZLNJtPHMwzbgwmGlV.png" alt="GovLLM Ultra banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> </p> <div style="margin:auto; text-align:center"> <h1 style="margin-bottom: 0">GovLLM-7B-ultra</h1> <em>A question answering model about the Dutch Government.</em> </div> ## Model description This model is a fine-tuned version of the Dutch conversational model [BramVanroy/GEITje-7B-ULTRA](https://huggingface.co/BramVanroy/GEITje-7B-ultra) on a [Dutch question-answer pair dataset](https://huggingface.co/datasets/Nelis5174473/Dutch-QA-Pairs-Rijksoverheid) of the Dutch Government. This is a Dutch question/answer model ultimately based on Mistral and fine-tuned with SFT and LoRA. The training with 3 epochs took almost 2 hours and was run on an Nvidia A100 (40GB VRAM). # Usage with Inference Endpoints (Dedicated) ```python import requests API_URL = "https://your-own-endpoint.us-east-1.aws.endpoints.huggingface.cloud" headers = {"Authorization": "Bearer hf_your_own_token"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": "Geeft de overheid subsidie aan bedrijven?" }) # print generated answer print(output[0]['generated_text']) ``` ## Training hyperparameters The following hyperparameters were used during training: - block_size: 1024, - model_max_length: 2048, - padding: right, - mixed_precision: fp16, - learning rate (lr): 0.00003, - epochs: 3, - batch_size: 2, - optimizer: adamw_torch, - schedular: linear, - quantization: int8, - peft: true, - lora_r: 16, - lora_alpha: 16, - lora_dropout: 0.05 ### Training results | Epoch | Loss | Grad_norm | learning_rate | step | |:------:|---------:|:----------:|:-------------:|:--------:| | 0.14 | 1.3183 | 0.6038 | 1.3888e-05 | 25/540 | | 0.42 | 1.0220 | 0.4180 | 2.8765e-05 | 75/540 | | 0.69 | 0.9251 | 0.4119 | 2.56793-05 | 125/540 | | 0.97 | 0.9260 | 0.4682 | 2.2592e-05 | 175/540 | | 1.25 | 0.8586 | 0.5338 | 1.9506e-05 | 225/540 | | 1.53 | 0.8767 | 0.6359 | 1.6420e-05 | 275/540 | | 1.80 | 0.8721 | 0.6137 | 1.3333e-05 | 325/540 | | 2.08 | 0.8469 | 0.7310 | 1.0247e-05 | 375/540 | | 2.36 | 0.8324 | 0.7945 | 7.1605e-05 | 425/540 | | 2.64 | 0.8170 | 0.8522 | 4.0741e-05 | 475/540 | | 2.91 | 0.8185 | 0.8562 | 9.8765e-05 | 525/540 |
{"datasets": ["Nelis5174473/Dutch-QA-Pairs-Rijksoverheid"], "language": ["nl"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["government", "conversational", "question-answering", "dutch", "geitje"]}
task
[ "QUESTION_ANSWERING" ]
42,666
binbin83/setfit-MiniLM-dialog-themes-13-nov
binbin83
text-classification
[ "sentence-transformers", "safetensors", "bert", "setfit", "text-classification", "fr", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2024-02-01T16:59:48Z
2024-02-01T18:37:56+00:00
8
0
--- language: - fr license: apache-2.0 metrics: - f1 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # binbin83/setfit-MiniLM-dialog-themes-13-nov The model is a multi-class multi-label text classifier to distinguish the different dialog act in semi-structured interview. The data used fot fine-tuning were in French. This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-themes-13-nov") label_dict = {'CauseConsequences': 0, 'PersonalExperience': 1, 'Connaissance': 2, 'Other': 3, 'Reconstitution': 4, 'Temps': 5, 'Reaction': 6, 'Nouvelle': 7, 'Media': 8, 'Lieux': 9} # Run inference preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'odre chronologique des événements ?"]) labels = [[[f for f, p in zip(labels_dict, ps) if p] for ps in [pred]] for pred in preds ] ``` ## Labels and training data Based on interview guide, the themes evocated in the interview where : ['CauseConsequences', 'PersonalExperience', 'Connaissance', 'Other', 'Reconstitution', 'Temps', 'Reaction', 'Nouvelle', 'Media', 'Lieux'] We label a small amount of data: ('Other', 50), ('Reaction', 46), ('PersonalExperience', 41), ('CauseConsequences', 41), ('Media', 27), ('Lieux', 13), ('Nouvelle', 10), ('Temps', 9), ('Reconstitution', 7), ('Connaissance', 3) and finetune a set fit model on it ## Training and Performances We finetune: "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2" using SetFit with CosineLossSimilarity and this parapeters: epochs = 10, batch_size=32, num_iterations = 20 On our test dataset, we get this results: {'f1': 0.639, 'f1_micro': 0.6808510638297872, 'f1_sample': 0.6666666666666666, 'accuracy': 0.6086956521739131} ## BibTeX entry and citation info To cite the current study: ```bibtex @article{ doi = {conference paper}, url = {https://arxiv.org/abs/2209.11055}, author = {Quillivic Robin, Charles Payet}, keywords = {NLP, JADT}, title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences}, publisher = {JADT}, year = {2024}, copyright = {Creative Commons Attribution 4.0 International} } ``` To cite the setFit paper: ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# binbin83/setfit-MiniLM-dialog-themes-13-nov The model is a multi-class multi-label text classifier to distinguish the different dialog act in semi-structured interview. The data used fot fine-tuning were in French. This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-themes-13-nov") label_dict = {'CauseConsequences': 0, 'PersonalExperience': 1, 'Connaissance': 2, 'Other': 3, 'Reconstitution': 4, 'Temps': 5, 'Reaction': 6, 'Nouvelle': 7, 'Media': 8, 'Lieux': 9} # Run inference preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'odre chronologique des événements ?"]) labels = [[[f for f, p in zip(labels_dict, ps) if p] for ps in [pred]] for pred in preds ] ``` ## Labels and training data Based on interview guide, the themes evocated in the interview where : ['CauseConsequences', 'PersonalExperience', 'Connaissance', 'Other', 'Reconstitution', 'Temps', 'Reaction', 'Nouvelle', 'Media', 'Lieux'] We label a small amount of data: ('Other', 50), ('Reaction', 46), ('PersonalExperience', 41), ('CauseConsequences', 41), ('Media', 27), ('Lieux', 13), ('Nouvelle', 10), ('Temps', 9), ('Reconstitution', 7), ('Connaissance', 3) and finetune a set fit model on it ## Training and Performances We finetune: "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2" using SetFit with CosineLossSimilarity and this parapeters: epochs = 10, batch_size=32, num_iterations = 20 On our test dataset, we get this results: {'f1': 0.639, 'f1_micro': 0.6808510638297872, 'f1_sample': 0.6666666666666666, 'accuracy': 0.6086956521739131} ## BibTeX entry and citation info To cite the current study: ```bibtex @article{ doi = {conference paper}, url = {https://arxiv.org/abs/2209.11055}, author = {Quillivic Robin, Charles Payet}, keywords = {NLP, JADT}, title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences}, publisher = {JADT}, year = {2024}, copyright = {Creative Commons Attribution 4.0 International} } ``` To cite the setFit paper: ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"language": ["fr"], "license": "apache-2.0", "metrics": ["f1"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
42,667
kimsan1120/distilbert-base-uncased-finetuned-emotion
kimsan1120
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-29T11:07:01Z
2024-04-29T11:12:56+00:00
8
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: split metrics: - type: accuracy value: 0.9245 name: Accuracy - type: f1 value: 0.9242972012693298 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.9245 - F1: 0.9243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8157 | 1.0 | 250 | 0.3125 | 0.904 | 0.9010 | | 0.2424 | 2.0 | 500 | 0.2167 | 0.9245 | 0.9243 | ### Framework versions - Transformers 4.17.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.9245 - F1: 0.9243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8157 | 1.0 | 250 | 0.3125 | 0.904 | 0.9010 | | 0.2424 | 2.0 | 500 | 0.2167 | 0.9245 | 0.9243 | ### Framework versions - Transformers 4.17.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9245, "name": "Accuracy"}, {"type": "f1", "value": 0.9242972012693298, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,668
marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold2_20241117_105432_boosting_1
marumarukun
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-11-17T08:45:27Z
2024-11-17T08:46:06+00:00
5
0
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold2_20241117_105432_boosting_1") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.4.0+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold2_20241117_105432_boosting_1") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.12.7 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.4.0+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"]}
task
[ "TEXT_CLASSIFICATION" ]
42,669
pkbiswas/Phi-3-Summarization-QLoRa
pkbiswas
summarization
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "summarization", "dataset:scitldr", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:adapter:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us" ]
2024-04-25T17:28:48Z
2024-11-17T08:18:36+00:00
5
0
--- base_model: microsoft/Phi-3-mini-128k-instruct datasets: - scitldr library_name: peft license: mit pipeline_tag: summarization tags: - trl - sft - generated_from_trainer model-index: - name: Summarization-Phi-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Summarization-Phi-3 This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the scitldr dataset. It achieves the following results on the evaluation set: - Loss: 2.1554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0689 | 0.2510 | 500 | 2.1439 | | 2.0455 | 0.5020 | 1000 | 2.1388 | | 2.0665 | 0.7530 | 1500 | 2.1349 | | 2.0481 | 1.0040 | 2000 | 2.1308 | | 1.9051 | 1.2550 | 2500 | 2.1573 | | 1.8524 | 1.5060 | 3000 | 2.1588 | | 1.8247 | 1.7570 | 3500 | 2.1554 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
null
BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Summarization-Phi-3 This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the scitldr dataset. It achieves the following results on the evaluation set: - Loss: 2.1554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0689 | 0.2510 | 500 | 2.1439 | | 2.0455 | 0.5020 | 1000 | 2.1388 | | 2.0665 | 0.7530 | 1500 | 2.1349 | | 2.0481 | 1.0040 | 2000 | 2.1308 | | 1.9051 | 1.2550 | 2500 | 2.1573 | | 1.8524 | 1.5060 | 3000 | 2.1588 | | 1.8247 | 1.7570 | 3500 | 2.1554 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"base_model": "microsoft/Phi-3-mini-128k-instruct", "datasets": ["scitldr"], "library_name": "peft", "license": "mit", "pipeline_tag": "summarization", "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "Summarization-Phi-3", "results": []}]}
task
[ "SUMMARIZATION" ]
42,670
Robinsh2023/distilbert-base-uncased-distilled-clinc
Robinsh2023
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-13T12:03:32Z
2023-12-13T12:18:23+00:00
23
0
--- base_model: distilbert-base-uncased datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - type: accuracy value: 0.9487096774193549 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3062 - Accuracy: 0.9487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 1.9733 | 0.7374 | | 2.3454 | 2.0 | 636 | 0.9990 | 0.8616 | | 2.3454 | 3.0 | 954 | 0.5629 | 0.92 | | 0.8888 | 4.0 | 1272 | 0.4126 | 0.9358 | | 0.4016 | 5.0 | 1590 | 0.3554 | 0.9419 | | 0.4016 | 6.0 | 1908 | 0.3299 | 0.9471 | | 0.2814 | 7.0 | 2226 | 0.3176 | 0.9471 | | 0.2423 | 8.0 | 2544 | 0.3119 | 0.9487 | | 0.2423 | 9.0 | 2862 | 0.3081 | 0.9477 | | 0.2274 | 10.0 | 3180 | 0.3062 | 0.9487 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.1 - Datasets 2.14.7 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3062 - Accuracy: 0.9487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 1.9733 | 0.7374 | | 2.3454 | 2.0 | 636 | 0.9990 | 0.8616 | | 2.3454 | 3.0 | 954 | 0.5629 | 0.92 | | 0.8888 | 4.0 | 1272 | 0.4126 | 0.9358 | | 0.4016 | 5.0 | 1590 | 0.3554 | 0.9419 | | 0.4016 | 6.0 | 1908 | 0.3299 | 0.9471 | | 0.2814 | 7.0 | 2226 | 0.3176 | 0.9471 | | 0.2423 | 8.0 | 2544 | 0.3119 | 0.9487 | | 0.2423 | 9.0 | 2862 | 0.3081 | 0.9477 | | 0.2274 | 10.0 | 3180 | 0.3062 | 0.9487 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.1 - Datasets 2.14.7 - Tokenizers 0.14.1
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9487096774193549, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,671
aroot/mbart-finetuned-eng-ind-78988303650
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-30T18:59:18Z
2023-06-30T19:18:12+00:00
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: mbart-finetuned-eng-ind-78988303650 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-78988303650 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8577 - Bleu: 20.4223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-78988303650 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8577 - Bleu: 20.4223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-ind-78988303650", "results": []}]}
task
[ "TRANSLATION" ]
42,672
IAmSkyDra/BARTBana_Translation_v0
IAmSkyDra
text2text-generation
[ "transformers", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:IAmSkyDra/BARTBana_v0", "base_model:finetune:IAmSkyDra/BARTBana_v0", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-01-20T04:55:27Z
2025-01-20T07:40:48+00:00
17
0
--- base_model: IAmSkyDra/BARTBana_Before library_name: transformers license: mit metrics: - sacrebleu tags: - generated_from_trainer model-index: - name: BARTBana_Translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BARTBana_Translation This model is a fine-tuned version of [IAmSkyDra/BARTBana_Before](https://huggingface.co/IAmSkyDra/BARTBana_Before) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2217 - Sacrebleu: 11.6180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | |:-------------:|:-----:|:-----:|:---------------:|:---------:| | 0.3516 | 1.0 | 742 | 0.3021 | 5.8020 | | 0.3025 | 2.0 | 1484 | 0.2653 | 8.0597 | | 0.2608 | 3.0 | 2226 | 0.2486 | 9.2952 | | 0.2482 | 4.0 | 2968 | 0.2385 | 10.0088 | | 0.2298 | 5.0 | 3710 | 0.2329 | 10.4466 | | 0.2216 | 6.0 | 4452 | 0.2278 | 10.6724 | | 0.2057 | 7.0 | 5194 | 0.2247 | 10.8809 | | 0.1977 | 8.0 | 5936 | 0.2221 | 11.0972 | | 0.1927 | 9.0 | 6678 | 0.2215 | 11.3121 | | 0.1841 | 10.0 | 7420 | 0.2208 | 11.3804 | | 0.1781 | 11.0 | 8162 | 0.2208 | 11.3954 | | 0.1733 | 12.0 | 8904 | 0.2209 | 11.5012 | | 0.1694 | 13.0 | 9646 | 0.2207 | 11.5774 | | 0.166 | 14.0 | 10388 | 0.2214 | 11.5994 | | 0.1626 | 15.0 | 11130 | 0.2217 | 11.6180 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BARTBana_Translation This model is a fine-tuned version of [IAmSkyDra/BARTBana_Before](https://huggingface.co/IAmSkyDra/BARTBana_Before) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2217 - Sacrebleu: 11.6180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | |:-------------:|:-----:|:-----:|:---------------:|:---------:| | 0.3516 | 1.0 | 742 | 0.3021 | 5.8020 | | 0.3025 | 2.0 | 1484 | 0.2653 | 8.0597 | | 0.2608 | 3.0 | 2226 | 0.2486 | 9.2952 | | 0.2482 | 4.0 | 2968 | 0.2385 | 10.0088 | | 0.2298 | 5.0 | 3710 | 0.2329 | 10.4466 | | 0.2216 | 6.0 | 4452 | 0.2278 | 10.6724 | | 0.2057 | 7.0 | 5194 | 0.2247 | 10.8809 | | 0.1977 | 8.0 | 5936 | 0.2221 | 11.0972 | | 0.1927 | 9.0 | 6678 | 0.2215 | 11.3121 | | 0.1841 | 10.0 | 7420 | 0.2208 | 11.3804 | | 0.1781 | 11.0 | 8162 | 0.2208 | 11.3954 | | 0.1733 | 12.0 | 8904 | 0.2209 | 11.5012 | | 0.1694 | 13.0 | 9646 | 0.2207 | 11.5774 | | 0.166 | 14.0 | 10388 | 0.2214 | 11.5994 | | 0.1626 | 15.0 | 11130 | 0.2217 | 11.6180 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
{"base_model": "IAmSkyDra/BARTBana_Before", "library_name": "transformers", "license": "mit", "metrics": ["sacrebleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BARTBana_Translation", "results": []}]}
task
[ "TRANSLATION" ]
42,673
adriansanz/sitges10242608-4ep-rerankv4-sp
adriansanz
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4173", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:PlanTL-GOB-ES/roberta-base-bne", "base_model:finetune:PlanTL-GOB-ES/roberta-base-bne", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-08-27T10:54:48Z
2024-08-27T10:55:41+00:00
5
0
--- base_model: PlanTL-GOB-ES/roberta-base-bne datasets: [] language: [] library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4173 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: L'Ajuntament de Sitges ofereix a aquelles famílies que acompleixin els requisits establerts, ajuts per al pagament de la quota del servei i de la quota del menjador dels infants matriculats a les Llars d'Infants Municipals ( 0-3 anys). sentences: - Quin és l'objectiu principal de les subvencions per a projectes i activitats de l'àmbit turístic? - Quin és el procediment per a obtenir una llicència per a disposar d'una parada en un mercat setmanal? - Quin és el paper de l'Ajuntament de Sitges en la quota del menjador de les Llars d'Infants Municipals? - source_sentence: Es tracta de la sol·licitud de permís municipal per poder utilitzar de forma privativa una zona de la via pública per instal·lacions d’atraccions i venda en fires, amb independència de les possibles afectacions a la via pública... sentences: - Quin és el tipus de permís que es sol·licita? - Quin és el paper de l'Ajuntament en aquest tràmit? - Quin és el resultat de la llicència per a la constitució d'un règim de propietat horitzontal en relació amb l’escriptura de divisió horitzontal? - source_sentence: Totes les persones que resideixen a Espanya estan obligades a inscriure's en el padró del municipi en el qual resideixen habitualment. sentences: - Quin és el benefici de l'ajut extraordinari per a la família de l'empleat? - Què passa si no es presenta la sol·licitud d'acceptació en el termini establert? - Qui està obligat a inscriure's en el Padró Municipal d'Habitants? - source_sentence: Les persones i entitats beneficiaries hauran de justificar la realització del projecte/activitat subvencionada com a màxim el dia 31 de març de 2023. sentences: - Quin és el termini per presentar la justificació de la realització del projecte/activitat subvencionada? - Quin és el període durant el qual es poden sol·licitar els ajuts? - Quin és el registre on s'inscriuen les entitats d’interès ciutadà de Sitges? - source_sentence: Els establiments locals tenen un paper clau en el projecte de la targeta de fidelització, ja que són els que ofereixen descomptes i ofertes especials als consumidors que utilitzen la targeta. sentences: - Quin és el paper dels establiments locals en el projecte de la targeta de fidelització? - Quin és el paper de la via pública en aquest tràmit? - Quins són els tractaments que beneficien la salut de l'empleat municipal que s'inclouen en l'ajuda? model-index: - name: SentenceTransformer based on PlanTL-GOB-ES/roberta-base-bne results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.05603448275862069 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.125 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.21336206896551724 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.40948275862068967 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.05603448275862069 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.041666666666666664 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.04267241379310346 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.040948275862068964 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05603448275862069 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.125 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.21336206896551724 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.40948275862068967 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19394246727908016 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1301253762999455 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.15541893353957212 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.05172413793103448 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.12284482758620689 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.20043103448275862 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4073275862068966 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.05172413793103448 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.040948275862068964 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.04008620689655173 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04073275862068965 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05172413793103448 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.12284482758620689 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.20043103448275862 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4073275862068966 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19075313852531367 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1267044677066231 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.15217462615525276 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.05818965517241379 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.1206896551724138 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.20689655172413793 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.41594827586206895 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.05818965517241379 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.04022988505747126 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.041379310344827586 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04159482758620689 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05818965517241379 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1206896551724138 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.20689655172413793 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.41594827586206895 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19717072550930018 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.13257902298850593 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.1580145716033785 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.05603448275862069 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.11853448275862069 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.1939655172413793 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4202586206896552 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.05603448275862069 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.039511494252873564 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.03879310344827587 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04202586206896552 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05603448275862069 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.11853448275862069 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.1939655172413793 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4202586206896552 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.19482639723718284 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1286176108374386 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.15326245290189994 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.05172413793103448 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.1336206896551724 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.20905172413793102 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.39439655172413796 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.05172413793103448 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.044540229885057465 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.04181034482758621 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.03943965517241379 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05172413793103448 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1336206896551724 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.20905172413793102 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.39439655172413796 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.188263246156266 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.12684814586754262 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.15277153038949104 name: Cosine Map@100 --- # SentenceTransformer based on PlanTL-GOB-ES/roberta-base-bne This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) <!-- at revision 0e598176534f3cf2e30105f8286cf2503d6e4731 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("adriansanz/sitges10242608-4ep-rerankv4-sp") # Run inference sentences = [ 'Els establiments locals tenen un paper clau en el projecte de la targeta de fidelització, ja que són els que ofereixen descomptes i ofertes especials als consumidors que utilitzen la targeta.', 'Quin és el paper dels establiments locals en el projecte de la targeta de fidelització?', "Quins són els tractaments que beneficien la salut de l'empleat municipal que s'inclouen en l'ajuda?", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.056 | | cosine_accuracy@3 | 0.125 | | cosine_accuracy@5 | 0.2134 | | cosine_accuracy@10 | 0.4095 | | cosine_precision@1 | 0.056 | | cosine_precision@3 | 0.0417 | | cosine_precision@5 | 0.0427 | | cosine_precision@10 | 0.0409 | | cosine_recall@1 | 0.056 | | cosine_recall@3 | 0.125 | | cosine_recall@5 | 0.2134 | | cosine_recall@10 | 0.4095 | | cosine_ndcg@10 | 0.1939 | | cosine_mrr@10 | 0.1301 | | **cosine_map@100** | **0.1554** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0517 | | cosine_accuracy@3 | 0.1228 | | cosine_accuracy@5 | 0.2004 | | cosine_accuracy@10 | 0.4073 | | cosine_precision@1 | 0.0517 | | cosine_precision@3 | 0.0409 | | cosine_precision@5 | 0.0401 | | cosine_precision@10 | 0.0407 | | cosine_recall@1 | 0.0517 | | cosine_recall@3 | 0.1228 | | cosine_recall@5 | 0.2004 | | cosine_recall@10 | 0.4073 | | cosine_ndcg@10 | 0.1908 | | cosine_mrr@10 | 0.1267 | | **cosine_map@100** | **0.1522** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.0582 | | cosine_accuracy@3 | 0.1207 | | cosine_accuracy@5 | 0.2069 | | cosine_accuracy@10 | 0.4159 | | cosine_precision@1 | 0.0582 | | cosine_precision@3 | 0.0402 | | cosine_precision@5 | 0.0414 | | cosine_precision@10 | 0.0416 | | cosine_recall@1 | 0.0582 | | cosine_recall@3 | 0.1207 | | cosine_recall@5 | 0.2069 | | cosine_recall@10 | 0.4159 | | cosine_ndcg@10 | 0.1972 | | cosine_mrr@10 | 0.1326 | | **cosine_map@100** | **0.158** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.056 | | cosine_accuracy@3 | 0.1185 | | cosine_accuracy@5 | 0.194 | | cosine_accuracy@10 | 0.4203 | | cosine_precision@1 | 0.056 | | cosine_precision@3 | 0.0395 | | cosine_precision@5 | 0.0388 | | cosine_precision@10 | 0.042 | | cosine_recall@1 | 0.056 | | cosine_recall@3 | 0.1185 | | cosine_recall@5 | 0.194 | | cosine_recall@10 | 0.4203 | | cosine_ndcg@10 | 0.1948 | | cosine_mrr@10 | 0.1286 | | **cosine_map@100** | **0.1533** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0517 | | cosine_accuracy@3 | 0.1336 | | cosine_accuracy@5 | 0.2091 | | cosine_accuracy@10 | 0.3944 | | cosine_precision@1 | 0.0517 | | cosine_precision@3 | 0.0445 | | cosine_precision@5 | 0.0418 | | cosine_precision@10 | 0.0394 | | cosine_recall@1 | 0.0517 | | cosine_recall@3 | 0.1336 | | cosine_recall@5 | 0.2091 | | cosine_recall@10 | 0.3944 | | cosine_ndcg@10 | 0.1883 | | cosine_mrr@10 | 0.1268 | | **cosine_map@100** | **0.1528** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 4,173 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 60.84 tokens</li><li>max: 206 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 25.34 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | positive | anchor | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------| | <code>L'objectiu principal de la persona coordinadora de colònia felina és garantir el benestar dels animals de la colònia.</code> | <code>Quin és l'objectiu principal de la persona coordinadora de colònia felina?</code> | | <code>Es tracta d'una sala amb capacitat per a 125 persones, equipada amb un petit escenari, sistema de sonorització, pantalla per a projeccions, camerins i serveis higiènics (WC).</code> | <code>Quin és el nombre de persones que pot acollir la sala d'actes del Casal Municipal de la Gent Gran de Sitges?</code> | | <code>Aquest ajut pretén fomentar l’associacionisme empresarial local, per tal de disposar d’agrupacions, gremis o associacions representatives de l’activitat empresarial del municipi.</code> | <code>Quin és el paper de les empreses en aquest ajut?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `num_train_epochs`: 10 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.2 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.6130 | 10 | 10.8464 | - | - | - | - | - | | 0.9808 | 16 | - | 0.1060 | 0.1088 | 0.1067 | 0.0984 | 0.1074 | | 1.2261 | 20 | 3.5261 | - | - | - | - | - | | 1.8391 | 30 | 1.4363 | - | - | - | - | - | | 1.9617 | 32 | - | 0.1406 | 0.1468 | 0.1356 | 0.1395 | 0.1373 | | 2.4521 | 40 | 0.5627 | - | - | - | - | - | | 2.9425 | 48 | - | 0.1377 | 0.1418 | 0.1427 | 0.1322 | 0.1437 | | 3.0651 | 50 | 0.2727 | - | - | - | - | - | | 3.6782 | 60 | 0.1297 | - | - | - | - | - | | 3.9234 | 64 | - | 0.1393 | 0.1457 | 0.1390 | 0.1268 | 0.1462 | | 0.6130 | 10 | 0.096 | - | - | - | - | - | | 0.9808 | 16 | - | 0.1458 | 0.1414 | 0.1443 | 0.1369 | 0.1407 | | 1.2261 | 20 | 0.1118 | - | - | - | - | - | | 1.8391 | 30 | 0.1335 | - | - | - | - | - | | 1.9617 | 32 | - | 0.1486 | 0.1476 | 0.1419 | 0.1489 | 0.1503 | | 2.4521 | 40 | 0.0765 | - | - | - | - | - | | 2.9425 | 48 | - | 0.1501 | 0.1459 | 0.1424 | 0.1413 | 0.1437 | | 3.0651 | 50 | 0.1449 | - | - | - | - | - | | 3.6782 | 60 | 0.0954 | - | - | - | - | - | | 3.9847 | 65 | - | 0.1562 | 0.1559 | 0.1517 | 0.1409 | 0.1553 | | 4.2912 | 70 | 0.0786 | - | - | - | - | - | | 4.9042 | 80 | 0.0973 | - | - | - | - | - | | 4.9655 | 81 | - | 0.1433 | 0.1397 | 0.1459 | 0.1430 | 0.1457 | | 5.5172 | 90 | 0.0334 | - | - | - | - | - | | 5.9464 | 97 | - | 0.1499 | 0.1482 | 0.1478 | 0.1466 | 0.1503 | | 6.1303 | 100 | 0.0278 | - | - | - | - | - | | 6.7433 | 110 | 0.0223 | - | - | - | - | - | | 6.9885 | 114 | - | 0.1561 | 0.1532 | 0.1509 | 0.1519 | 0.1547 | | 7.3563 | 120 | 0.0137 | - | - | - | - | - | | 7.9693 | 130 | 0.0129 | 0.1525 | 0.1557 | 0.1505 | 0.1570 | 0.1570 | | 8.5824 | 140 | 0.0052 | - | - | - | - | - | | **8.9502** | **146** | **-** | **0.1525** | **0.1586** | **0.1493** | **0.1569** | **0.1553** | | 9.1954 | 150 | 0.0044 | - | - | - | - | - | | 9.8084 | 160 | 0.0064 | 0.1533 | 0.1580 | 0.1522 | 0.1528 | 0.1554 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.34.0.dev0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on PlanTL-GOB-ES/roberta-base-bne This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) <!-- at revision 0e598176534f3cf2e30105f8286cf2503d6e4731 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("adriansanz/sitges10242608-4ep-rerankv4-sp") # Run inference sentences = [ 'Els establiments locals tenen un paper clau en el projecte de la targeta de fidelització, ja que són els que ofereixen descomptes i ofertes especials als consumidors que utilitzen la targeta.', 'Quin és el paper dels establiments locals en el projecte de la targeta de fidelització?', "Quins són els tractaments que beneficien la salut de l'empleat municipal que s'inclouen en l'ajuda?", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.056 | | cosine_accuracy@3 | 0.125 | | cosine_accuracy@5 | 0.2134 | | cosine_accuracy@10 | 0.4095 | | cosine_precision@1 | 0.056 | | cosine_precision@3 | 0.0417 | | cosine_precision@5 | 0.0427 | | cosine_precision@10 | 0.0409 | | cosine_recall@1 | 0.056 | | cosine_recall@3 | 0.125 | | cosine_recall@5 | 0.2134 | | cosine_recall@10 | 0.4095 | | cosine_ndcg@10 | 0.1939 | | cosine_mrr@10 | 0.1301 | | **cosine_map@100** | **0.1554** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0517 | | cosine_accuracy@3 | 0.1228 | | cosine_accuracy@5 | 0.2004 | | cosine_accuracy@10 | 0.4073 | | cosine_precision@1 | 0.0517 | | cosine_precision@3 | 0.0409 | | cosine_precision@5 | 0.0401 | | cosine_precision@10 | 0.0407 | | cosine_recall@1 | 0.0517 | | cosine_recall@3 | 0.1228 | | cosine_recall@5 | 0.2004 | | cosine_recall@10 | 0.4073 | | cosine_ndcg@10 | 0.1908 | | cosine_mrr@10 | 0.1267 | | **cosine_map@100** | **0.1522** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.0582 | | cosine_accuracy@3 | 0.1207 | | cosine_accuracy@5 | 0.2069 | | cosine_accuracy@10 | 0.4159 | | cosine_precision@1 | 0.0582 | | cosine_precision@3 | 0.0402 | | cosine_precision@5 | 0.0414 | | cosine_precision@10 | 0.0416 | | cosine_recall@1 | 0.0582 | | cosine_recall@3 | 0.1207 | | cosine_recall@5 | 0.2069 | | cosine_recall@10 | 0.4159 | | cosine_ndcg@10 | 0.1972 | | cosine_mrr@10 | 0.1326 | | **cosine_map@100** | **0.158** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.056 | | cosine_accuracy@3 | 0.1185 | | cosine_accuracy@5 | 0.194 | | cosine_accuracy@10 | 0.4203 | | cosine_precision@1 | 0.056 | | cosine_precision@3 | 0.0395 | | cosine_precision@5 | 0.0388 | | cosine_precision@10 | 0.042 | | cosine_recall@1 | 0.056 | | cosine_recall@3 | 0.1185 | | cosine_recall@5 | 0.194 | | cosine_recall@10 | 0.4203 | | cosine_ndcg@10 | 0.1948 | | cosine_mrr@10 | 0.1286 | | **cosine_map@100** | **0.1533** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.0517 | | cosine_accuracy@3 | 0.1336 | | cosine_accuracy@5 | 0.2091 | | cosine_accuracy@10 | 0.3944 | | cosine_precision@1 | 0.0517 | | cosine_precision@3 | 0.0445 | | cosine_precision@5 | 0.0418 | | cosine_precision@10 | 0.0394 | | cosine_recall@1 | 0.0517 | | cosine_recall@3 | 0.1336 | | cosine_recall@5 | 0.2091 | | cosine_recall@10 | 0.3944 | | cosine_ndcg@10 | 0.1883 | | cosine_mrr@10 | 0.1268 | | **cosine_map@100** | **0.1528** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 4,173 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 60.84 tokens</li><li>max: 206 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 25.34 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | positive | anchor | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------| | <code>L'objectiu principal de la persona coordinadora de colònia felina és garantir el benestar dels animals de la colònia.</code> | <code>Quin és l'objectiu principal de la persona coordinadora de colònia felina?</code> | | <code>Es tracta d'una sala amb capacitat per a 125 persones, equipada amb un petit escenari, sistema de sonorització, pantalla per a projeccions, camerins i serveis higiènics (WC).</code> | <code>Quin és el nombre de persones que pot acollir la sala d'actes del Casal Municipal de la Gent Gran de Sitges?</code> | | <code>Aquest ajut pretén fomentar l’associacionisme empresarial local, per tal de disposar d’agrupacions, gremis o associacions representatives de l’activitat empresarial del municipi.</code> | <code>Quin és el paper de les empreses en aquest ajut?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `num_train_epochs`: 10 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.2 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.6130 | 10 | 10.8464 | - | - | - | - | - | | 0.9808 | 16 | - | 0.1060 | 0.1088 | 0.1067 | 0.0984 | 0.1074 | | 1.2261 | 20 | 3.5261 | - | - | - | - | - | | 1.8391 | 30 | 1.4363 | - | - | - | - | - | | 1.9617 | 32 | - | 0.1406 | 0.1468 | 0.1356 | 0.1395 | 0.1373 | | 2.4521 | 40 | 0.5627 | - | - | - | - | - | | 2.9425 | 48 | - | 0.1377 | 0.1418 | 0.1427 | 0.1322 | 0.1437 | | 3.0651 | 50 | 0.2727 | - | - | - | - | - | | 3.6782 | 60 | 0.1297 | - | - | - | - | - | | 3.9234 | 64 | - | 0.1393 | 0.1457 | 0.1390 | 0.1268 | 0.1462 | | 0.6130 | 10 | 0.096 | - | - | - | - | - | | 0.9808 | 16 | - | 0.1458 | 0.1414 | 0.1443 | 0.1369 | 0.1407 | | 1.2261 | 20 | 0.1118 | - | - | - | - | - | | 1.8391 | 30 | 0.1335 | - | - | - | - | - | | 1.9617 | 32 | - | 0.1486 | 0.1476 | 0.1419 | 0.1489 | 0.1503 | | 2.4521 | 40 | 0.0765 | - | - | - | - | - | | 2.9425 | 48 | - | 0.1501 | 0.1459 | 0.1424 | 0.1413 | 0.1437 | | 3.0651 | 50 | 0.1449 | - | - | - | - | - | | 3.6782 | 60 | 0.0954 | - | - | - | - | - | | 3.9847 | 65 | - | 0.1562 | 0.1559 | 0.1517 | 0.1409 | 0.1553 | | 4.2912 | 70 | 0.0786 | - | - | - | - | - | | 4.9042 | 80 | 0.0973 | - | - | - | - | - | | 4.9655 | 81 | - | 0.1433 | 0.1397 | 0.1459 | 0.1430 | 0.1457 | | 5.5172 | 90 | 0.0334 | - | - | - | - | - | | 5.9464 | 97 | - | 0.1499 | 0.1482 | 0.1478 | 0.1466 | 0.1503 | | 6.1303 | 100 | 0.0278 | - | - | - | - | - | | 6.7433 | 110 | 0.0223 | - | - | - | - | - | | 6.9885 | 114 | - | 0.1561 | 0.1532 | 0.1509 | 0.1519 | 0.1547 | | 7.3563 | 120 | 0.0137 | - | - | - | - | - | | 7.9693 | 130 | 0.0129 | 0.1525 | 0.1557 | 0.1505 | 0.1570 | 0.1570 | | 8.5824 | 140 | 0.0052 | - | - | - | - | - | | **8.9502** | **146** | **-** | **0.1525** | **0.1586** | **0.1493** | **0.1569** | **0.1553** | | 9.1954 | 150 | 0.0044 | - | - | - | - | - | | 9.8084 | 160 | 0.0064 | 0.1533 | 0.1580 | 0.1522 | 0.1528 | 0.1554 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.4.0+cu121 - Accelerate: 0.34.0.dev0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "PlanTL-GOB-ES/roberta-base-bne", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4173", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "L'Ajuntament de Sitges ofereix a aquelles famílies que acompleixin els requisits establerts, ajuts per al pagament de la quota del servei i de la quota del menjador dels infants matriculats a les Llars d'Infants Municipals ( 0-3 anys).", "sentences": ["Quin és l'objectiu principal de les subvencions per a projectes i activitats de l'àmbit turístic?", "Quin és el procediment per a obtenir una llicència per a disposar d'una parada en un mercat setmanal?", "Quin és el paper de l'Ajuntament de Sitges en la quota del menjador de les Llars d'Infants Municipals?"]}, {"source_sentence": "Es tracta de la sol·licitud de permís municipal per poder utilitzar de forma privativa una zona de la via pública per instal·lacions d’atraccions i venda en fires, amb independència de les possibles afectacions a la via pública...", "sentences": ["Quin és el tipus de permís que es sol·licita?", "Quin és el paper de l'Ajuntament en aquest tràmit?", "Quin és el resultat de la llicència per a la constitució d'un règim de propietat horitzontal en relació amb l’escriptura de divisió horitzontal?"]}, {"source_sentence": "Totes les persones que resideixen a Espanya estan obligades a inscriure's en el padró del municipi en el qual resideixen habitualment.", "sentences": ["Quin és el benefici de l'ajut extraordinari per a la família de l'empleat?", "Què passa si no es presenta la sol·licitud d'acceptació en el termini establert?", "Qui està obligat a inscriure's en el Padró Municipal d'Habitants?"]}, {"source_sentence": "Les persones i entitats beneficiaries hauran de justificar la realització del projecte/activitat subvencionada com a màxim el dia 31 de març de 2023.", "sentences": ["Quin és el termini per presentar la justificació de la realització del projecte/activitat subvencionada?", "Quin és el període durant el qual es poden sol·licitar els ajuts?", "Quin és el registre on s'inscriuen les entitats d’interès ciutadà de Sitges?"]}, {"source_sentence": "Els establiments locals tenen un paper clau en el projecte de la targeta de fidelització, ja que són els que ofereixen descomptes i ofertes especials als consumidors que utilitzen la targeta.", "sentences": ["Quin és el paper dels establiments locals en el projecte de la targeta de fidelització?", "Quin és el paper de la via pública en aquest tràmit?", "Quins són els tractaments que beneficien la salut de l'empleat municipal que s'inclouen en l'ajuda?"]}], "model-index": [{"name": "SentenceTransformer based on PlanTL-GOB-ES/roberta-base-bne", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.05603448275862069, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.125, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.21336206896551724, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.40948275862068967, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.05603448275862069, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.041666666666666664, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.04267241379310346, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.040948275862068964, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05603448275862069, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.125, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.21336206896551724, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.40948275862068967, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19394246727908016, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1301253762999455, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.15541893353957212, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.05172413793103448, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.12284482758620689, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.20043103448275862, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4073275862068966, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.05172413793103448, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.040948275862068964, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.04008620689655173, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04073275862068965, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05172413793103448, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.12284482758620689, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.20043103448275862, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4073275862068966, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19075313852531367, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1267044677066231, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.15217462615525276, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.05818965517241379, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.1206896551724138, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.20689655172413793, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.41594827586206895, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.05818965517241379, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.04022988505747126, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.041379310344827586, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04159482758620689, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05818965517241379, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.1206896551724138, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.20689655172413793, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.41594827586206895, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19717072550930018, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.13257902298850593, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.1580145716033785, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.05603448275862069, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.11853448275862069, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.1939655172413793, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4202586206896552, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.05603448275862069, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.039511494252873564, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.03879310344827587, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04202586206896552, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05603448275862069, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.11853448275862069, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.1939655172413793, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4202586206896552, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19482639723718284, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1286176108374386, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.15326245290189994, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.05172413793103448, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.1336206896551724, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.20905172413793102, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.39439655172413796, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.05172413793103448, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.044540229885057465, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.04181034482758621, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.03943965517241379, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.05172413793103448, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.1336206896551724, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.20905172413793102, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.39439655172413796, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.188263246156266, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.12684814586754262, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.15277153038949104, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,674
curtsmith/distilbert-base-uncased-finetuned-cola
curtsmith
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-07-27T18:30:35Z
2025-01-06T15:02:03+00:00
14
0
--- base_model: distilbert-base-uncased datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - type: matthews_correlation value: 0.5363967157085073 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8123 - Matthews Correlation: 0.5364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 | | 0.3466 | 2.0 | 1070 | 0.5048 | 0.4832 | | 0.2335 | 3.0 | 1605 | 0.5641 | 0.5173 | | 0.1812 | 4.0 | 2140 | 0.7638 | 0.5200 | | 0.1334 | 5.0 | 2675 | 0.8123 | 0.5364 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8123 - Matthews Correlation: 0.5364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 | | 0.3466 | 2.0 | 1070 | 0.5048 | 0.4832 | | 0.2335 | 3.0 | 1605 | 0.5641 | 0.5173 | | 0.1812 | 4.0 | 2140 | 0.7638 | 0.5200 | | 0.1334 | 5.0 | 2675 | 0.8123 | 0.5364 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
{"base_model": "distilbert-base-uncased", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "train", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5363967157085073, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,675
Helsinki-NLP/opus-mt-sv-hr
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sv", "hr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:05:16+00:00
48
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-sv-hr * source languages: sv * target languages: hr * OPUS readme: [sv-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-hr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.hr | 25.7 | 0.498 |
null
Non_BioNLP
### opus-mt-sv-hr * source languages: sv * target languages: hr * OPUS readme: [sv-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-hr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.hr | 25.7 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
42,676
omar-al-sharif/AlQalam-finetuned-mmj
omar-al-sharif
text2text-generation
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-07-05T14:11:56Z
2023-07-05T16:21:33+00:00
11
0
--- tags: - generated_from_trainer model-index: - name: AlQalam-finetuned-mmj results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AlQalam-finetuned-mmj This model is a fine-tuned version of [malmarjeh/t5-arabic-text-summarization](https://huggingface.co/malmarjeh/t5-arabic-text-summarization) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3745 | 1.0 | 1678 | 1.1947 | | 1.219 | 2.0 | 3356 | 1.1176 | | 1.065 | 3.0 | 5034 | 1.0895 | | 0.9928 | 4.0 | 6712 | 1.0734 | | 0.9335 | 5.0 | 8390 | 1.0723 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AlQalam-finetuned-mmj This model is a fine-tuned version of [malmarjeh/t5-arabic-text-summarization](https://huggingface.co/malmarjeh/t5-arabic-text-summarization) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3745 | 1.0 | 1678 | 1.1947 | | 1.219 | 2.0 | 3356 | 1.1176 | | 1.065 | 3.0 | 5034 | 1.0895 | | 0.9928 | 4.0 | 6712 | 1.0734 | | 0.9335 | 5.0 | 8390 | 1.0723 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
{"tags": ["generated_from_trainer"], "model-index": [{"name": "AlQalam-finetuned-mmj", "results": []}]}
task
[ "SUMMARIZATION" ]
42,677
Sarthak7777/distilbert-base-uncased-finetuned-emotio
Sarthak7777
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-07T09:12:20Z
2023-10-07T09:17:13+00:00
7
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotio results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.921 name: Accuracy - type: f1 value: 0.9208983762941266 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotio This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2283 - Accuracy: 0.921 - F1: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3345 | 0.904 | 0.9028 | | No log | 2.0 | 500 | 0.2283 | 0.921 | 0.9209 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotio This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2283 - Accuracy: 0.921 - F1: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3345 | 0.904 | 0.9028 | | No log | 2.0 | 500 | 0.2283 | 0.921 | 0.9209 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotio", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.921, "name": "Accuracy"}, {"type": "f1", "value": 0.9208983762941266, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,678
cydxg/Qwen2-VL-2B-Instruct-OpenVINO-INT4
cydxg
image-text-to-text
[ "openvino", "qwen2_vl", "multimodal", "image-text-to-text", "conversational", "en", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:quantized:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "region:us" ]
2024-09-29T12:12:41Z
2024-10-28T03:55:25+00:00
25
2
--- base_model: - Qwen/Qwen2-VL-2B-Instruct language: - en license: apache-2.0 pipeline_tag: image-text-to-text tags: - multimodal base_model_relation: quantized --- Refer to: https://docs.openvino.ai/2024/notebooks/qwen2-vl-with-output.html This is the OpenVINO accelerated version for multimodal Qwen2-VL-2B-Instruct. To use this model, download all files and place the file as follow: ``` . │ gradio_helper.py │ ov_qwen2_vl.py │ qwen2-build.py │ qwen2vl.ipynb ├─Qwen2-VL-2B-Instruct │ added_tokens.json │ chat_template.json │ config.json │ merges.txt │ openvino_language_model.bin │ openvino_language_model.xml │ openvino_text_embeddings_model.bin │ openvino_text_embeddings_model.xml │ openvino_vision_embeddings_merger_model.bin │ openvino_vision_embeddings_merger_model.xml │ openvino_vision_embeddings_model.bin │ openvino_vision_embeddings_model.xml │ preprocessor_config.json │ special_tokens_map.json │ tokenizer.json │ tokenizer_config.json │ vocab.json ``` Then create a virtual environment and install all the requirements use the command: ```python pip install -q "transformers>=4.45" "torch>=2.1" "torchvision" "qwen-vl-utils" "Pillow" "gradio>=4.36" --extra-index-url https://download.pytorch.org/whl/cpu pip install -qU "openvino>=2024.4.0" "nncf>=2.13.0" ``` Finally, run the qwen2vl.ipynb on your jupyter notebook to chat with the model. Below is the original Model Card: # Qwen2-VL-2B-Instruct ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** | | :--- | :---: | :---: | :---: | | MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** | | DocVQA<sub>test</sub> | 86.9 | - | **90.1** | | InfoVQA<sub>test</sub> | 58.9 | - | **65.5** | | ChartQA<sub>test</sub> | **76.2** | - | 73.5 | | TextVQA<sub>val</sub> | 73.4 | - | **79.7** | | OCRBench | 781 | 605 | **794** | | MTVQA | - | - | **20.0** | | VCR<sub>en easy</sub> | - | - | **81.45** | VCR<sub>zh easy</sub> | - | - | **46.16** | RealWorldQA | 57.3 | 55.8 | **62.9** | | MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 | | MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** | | MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** | | MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** | | MMT-Bench<sub>test</sub> | - | - | **54.5** | | MMStar | **49.8** | 39.1 | 48.0 | | MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** | | HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** | | MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 | | MathVision | - | - | **12.4** | ### Video Benchmarks | Benchmark | **Qwen2-VL-2B** | | :--- | :---: | | MVBench | **63.2** | | PerceptionTest<sub>test</sub> | **53.9** | | EgoSchema<sub>test</sub> | **54.9** | | Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** | ## Requirements The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-2B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages1] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
null
Non_BioNLP
Refer to: https://docs.openvino.ai/2024/notebooks/qwen2-vl-with-output.html This is the OpenVINO accelerated version for multimodal Qwen2-VL-2B-Instruct. To use this model, download all files and place the file as follow: ``` . │ gradio_helper.py │ ov_qwen2_vl.py │ qwen2-build.py │ qwen2vl.ipynb ├─Qwen2-VL-2B-Instruct │ added_tokens.json │ chat_template.json │ config.json │ merges.txt │ openvino_language_model.bin │ openvino_language_model.xml │ openvino_text_embeddings_model.bin │ openvino_text_embeddings_model.xml │ openvino_vision_embeddings_merger_model.bin │ openvino_vision_embeddings_merger_model.xml │ openvino_vision_embeddings_model.bin │ openvino_vision_embeddings_model.xml │ preprocessor_config.json │ special_tokens_map.json │ tokenizer.json │ tokenizer_config.json │ vocab.json ``` Then create a virtual environment and install all the requirements use the command: ```python pip install -q "transformers>=4.45" "torch>=2.1" "torchvision" "qwen-vl-utils" "Pillow" "gradio>=4.36" --extra-index-url https://download.pytorch.org/whl/cpu pip install -qU "openvino>=2024.4.0" "nncf>=2.13.0" ``` Finally, run the qwen2vl.ipynb on your jupyter notebook to chat with the model. Below is the original Model Card: # Qwen2-VL-2B-Instruct ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 2B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-2B | MiniCPM-V 2.0 | **Qwen2-VL-2B** | | :--- | :---: | :---: | :---: | | MMMU<sub>val</sub> | 36.3 | 38.2 | **41.1** | | DocVQA<sub>test</sub> | 86.9 | - | **90.1** | | InfoVQA<sub>test</sub> | 58.9 | - | **65.5** | | ChartQA<sub>test</sub> | **76.2** | - | 73.5 | | TextVQA<sub>val</sub> | 73.4 | - | **79.7** | | OCRBench | 781 | 605 | **794** | | MTVQA | - | - | **20.0** | | VCR<sub>en easy</sub> | - | - | **81.45** | VCR<sub>zh easy</sub> | - | - | **46.16** | RealWorldQA | 57.3 | 55.8 | **62.9** | | MME<sub>sum</sub> | **1876.8** | 1808.6 | 1872.0 | | MMBench-EN<sub>test</sub> | 73.2 | 69.1 | **74.9** | | MMBench-CN<sub>test</sub> | 70.9 | 66.5 | **73.5** | | MMBench-V1.1<sub>test</sub> | 69.6 | 65.8 | **72.2** | | MMT-Bench<sub>test</sub> | - | - | **54.5** | | MMStar | **49.8** | 39.1 | 48.0 | | MMVet<sub>GPT-4-Turbo</sub> | 39.7 | 41.0 | **49.5** | | HallBench<sub>avg</sub> | 38.0 | 36.1 | **41.7** | | MathVista<sub>testmini</sub> | **46.0** | 39.8 | 43.0 | | MathVision | - | - | **12.4** | ### Video Benchmarks | Benchmark | **Qwen2-VL-2B** | | :--- | :---: | | MVBench | **63.2** | | PerceptionTest<sub>test</sub> | **53.9** | | EgoSchema<sub>test</sub> | **54.9** | | Video-MME<sub>wo/w subs</sub> | **55.6**/**60.4** | ## Requirements The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-2B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages1] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
{"base_model": ["Qwen/Qwen2-VL-2B-Instruct"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["multimodal"], "base_model_relation": "quantized"}
task
[ "QUESTION_ANSWERING" ]
42,679
gokuls/hBERTv2_qnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-01T08:58:31Z
2023-03-02T07:42:16+00:00
8
0
--- datasets: - glue language: - en metrics: - accuracy tags: - generated_from_trainer model-index: - name: hBERTv2_qnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE QNLI type: glue config: qnli split: validation args: qnli metrics: - type: accuracy value: 0.5053999633900788 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_qnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2](https://huggingface.co/gokuls/bert_12_layer_model_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6930 - Accuracy: 0.5054 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6968 | 1.0 | 410 | 0.6952 | 0.5054 | | 0.6943 | 2.0 | 820 | 0.6932 | 0.4946 | | 0.6937 | 3.0 | 1230 | 0.6933 | 0.5054 | | 0.6934 | 4.0 | 1640 | 0.6931 | 0.5054 | | 0.6934 | 5.0 | 2050 | 0.6931 | 0.5054 | | 0.6933 | 6.0 | 2460 | 0.6930 | 0.5054 | | 0.6933 | 7.0 | 2870 | 0.6931 | 0.5054 | | 0.6932 | 8.0 | 3280 | 0.6930 | 0.5054 | | 0.6932 | 9.0 | 3690 | 0.6934 | 0.4946 | | 0.6932 | 10.0 | 4100 | 0.6930 | 0.5054 | | 0.6932 | 11.0 | 4510 | 0.6931 | 0.4946 | | 0.6933 | 12.0 | 4920 | 0.6934 | 0.4946 | | 0.6932 | 13.0 | 5330 | 0.6931 | 0.4946 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv2_qnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2](https://huggingface.co/gokuls/bert_12_layer_model_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6930 - Accuracy: 0.5054 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6968 | 1.0 | 410 | 0.6952 | 0.5054 | | 0.6943 | 2.0 | 820 | 0.6932 | 0.4946 | | 0.6937 | 3.0 | 1230 | 0.6933 | 0.5054 | | 0.6934 | 4.0 | 1640 | 0.6931 | 0.5054 | | 0.6934 | 5.0 | 2050 | 0.6931 | 0.5054 | | 0.6933 | 6.0 | 2460 | 0.6930 | 0.5054 | | 0.6933 | 7.0 | 2870 | 0.6931 | 0.5054 | | 0.6932 | 8.0 | 3280 | 0.6930 | 0.5054 | | 0.6932 | 9.0 | 3690 | 0.6934 | 0.4946 | | 0.6932 | 10.0 | 4100 | 0.6930 | 0.5054 | | 0.6932 | 11.0 | 4510 | 0.6931 | 0.4946 | | 0.6933 | 12.0 | 4920 | 0.6934 | 0.4946 | | 0.6932 | 13.0 | 5330 | 0.6931 | 0.4946 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv2_qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "config": "qnli", "split": "validation", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.5053999633900788, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,680
ITESM/st_demo_2
ITESM
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2022-06-05T04:37:57Z
2022-06-05T04:38:02+00:00
15
0
--- language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
null
Non_BioNLP
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
{"language": "en", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"]}
task
[ "QUESTION_ANSWERING" ]
42,681
mansoorhamidzadeh/bert_classification
mansoorhamidzadeh
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "english", "en", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-14T09:34:45Z
2024-06-28T13:18:44+00:00
17
0
--- base_model: bert-base-uncased language: - en license: apache-2.0 metrics: - accuracy pipeline_tag: text-classification tags: - text-classification - bert - english model-index: - name: BERT Classification results: [] --- # BERT Classification ## Model Overview - **Model Name**: BERT Classification - **Model Type**: Text Classification - **Developer**: Mansoor Hamidzadeh - **Framework**: Transformers - **Language**: English - **License**: Apache-2.0 ## Model Description This model is a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) designed for text classification tasks. It categorizes text into four labels: - **Label 1**: Household - **Label 2**: Books - **Label 3**: Clothing & Accessories - **Label 4**: Electronics ## Technical Details - **Model Size**: 109M parameters - **Tensor Type**: F32 - **File Format**: Safetensors ## How To Use ```python # Use a pipeline as a high-level helper from transformers import pipeline text='' pipe = pipeline("text-classification", model="mansoorhamidzadeh/bert_classification") pipe(text) ``` ## Usage The model is useful for categorizing product descriptions or similar text data into predefined labels. ## Citation If you use this model in your research or applications, please cite it as follows: ```bibtex @misc{mansoorhamidzadeh/bert_classification, author = {mansoorhamidzadeh}, title = {English to Persian Translation using MT5-Small}, year = {2024}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/mansoorhamidzadeh/bert_classification}}, }
null
Non_BioNLP
# BERT Classification ## Model Overview - **Model Name**: BERT Classification - **Model Type**: Text Classification - **Developer**: Mansoor Hamidzadeh - **Framework**: Transformers - **Language**: English - **License**: Apache-2.0 ## Model Description This model is a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) designed for text classification tasks. It categorizes text into four labels: - **Label 1**: Household - **Label 2**: Books - **Label 3**: Clothing & Accessories - **Label 4**: Electronics ## Technical Details - **Model Size**: 109M parameters - **Tensor Type**: F32 - **File Format**: Safetensors ## How To Use ```python # Use a pipeline as a high-level helper from transformers import pipeline text='' pipe = pipeline("text-classification", model="mansoorhamidzadeh/bert_classification") pipe(text) ``` ## Usage The model is useful for categorizing product descriptions or similar text data into predefined labels. ## Citation If you use this model in your research or applications, please cite it as follows: ```bibtex @misc{mansoorhamidzadeh/bert_classification, author = {mansoorhamidzadeh}, title = {English to Persian Translation using MT5-Small}, year = {2024}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/mansoorhamidzadeh/bert_classification}}, }
{"base_model": "bert-base-uncased", "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["text-classification", "bert", "english"], "model-index": [{"name": "BERT Classification", "results": []}]}
task
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
42,682
Adeptschneider/joeynmt-dyu-fr-v11.0
Adeptschneider
translation
[ "translation", "fr", "dataset:uvci/Koumankan_mt_dyu_fr", "license:mit", "region:us" ]
2024-07-30T09:07:41Z
2024-07-30T09:10:19+00:00
4
0
--- datasets: - uvci/Koumankan_mt_dyu_fr language: - fr license: mit metrics: - chrf - bleu pipeline_tag: translation --- # Adeptschneider/dyu-fr-joeynmt A machine translation model that translates Dyula to French using the [JoeyNMT framework](https://github.com/joeynmt/joeynmt). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Usage ### Load and use for inference ```python import torch from joeynmt.config import load_config, parse_global_args from joeynmt.prediction import predict, prepare from huggingface_hub import snapshot_download # Download model snapshot_download( repo_id="Adeptschneider/joeynmt-dyu-fr-v11.0", local_dir="/path/to/save/locally" ) # Define model interface class JoeyNMTModel: ''' JoeyNMTModel which load JoeyNMT model for inference. :param config_path: Path to YAML config file :param n_best: return this many hypotheses, <= beam (currently only 1) ''' def __init__(self, config_path: str, n_best: int = 1): seed = 42 torch.manual_seed(seed) cfg = load_config(config_path) args = parse_global_args(cfg, rank=0, mode="translate") self.args = args._replace(test=args.test._replace(n_best=n_best)) # build model self.model, _, _, self.test_data = prepare(self.args, rank=0, mode="translate") def _translate_data(self): _, _, hypotheses, trg_tokens, trg_scores, _ = predict( model=self.model, data=self.test_data, compute_loss=False, device=self.args.device, rank=0, n_gpu=self.args.n_gpu, normalization="none", num_workers=self.args.num_workers, args=self.args.test, autocast=self.args.autocast, ) return hypotheses, trg_tokens, trg_scores def translate(self, sentence) -> list: ''' Translate the given sentence. :param sentence: Sentence to be translated :return: - translations: (list of str) possible translations of the sentence. ''' self.test_data.set_item(sentence.strip()) translations, _, _ = self._translate_data() assert len(translations) == len(self.test_data) * self.args.test.n_best self.test_data.reset_cache() return translations # Load model config_path = "/path/to/lean_model/config_local.yaml" # Change this to the path to your model congig file model = JoeyNMTModel(config_path=config_path, n_best=1) # Translate model.translate(sentence="i tɔgɔ bi cogodɔ") ``` ## Training procedure ### Training hyperparameters More information needed ### Training results BLEU Score - 7.10 ### Framework versions - JoeyNMT 2.3.0 - Torch 2.0.1+cu118
null
Non_BioNLP
# Adeptschneider/dyu-fr-joeynmt A machine translation model that translates Dyula to French using the [JoeyNMT framework](https://github.com/joeynmt/joeynmt). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Usage ### Load and use for inference ```python import torch from joeynmt.config import load_config, parse_global_args from joeynmt.prediction import predict, prepare from huggingface_hub import snapshot_download # Download model snapshot_download( repo_id="Adeptschneider/joeynmt-dyu-fr-v11.0", local_dir="/path/to/save/locally" ) # Define model interface class JoeyNMTModel: ''' JoeyNMTModel which load JoeyNMT model for inference. :param config_path: Path to YAML config file :param n_best: return this many hypotheses, <= beam (currently only 1) ''' def __init__(self, config_path: str, n_best: int = 1): seed = 42 torch.manual_seed(seed) cfg = load_config(config_path) args = parse_global_args(cfg, rank=0, mode="translate") self.args = args._replace(test=args.test._replace(n_best=n_best)) # build model self.model, _, _, self.test_data = prepare(self.args, rank=0, mode="translate") def _translate_data(self): _, _, hypotheses, trg_tokens, trg_scores, _ = predict( model=self.model, data=self.test_data, compute_loss=False, device=self.args.device, rank=0, n_gpu=self.args.n_gpu, normalization="none", num_workers=self.args.num_workers, args=self.args.test, autocast=self.args.autocast, ) return hypotheses, trg_tokens, trg_scores def translate(self, sentence) -> list: ''' Translate the given sentence. :param sentence: Sentence to be translated :return: - translations: (list of str) possible translations of the sentence. ''' self.test_data.set_item(sentence.strip()) translations, _, _ = self._translate_data() assert len(translations) == len(self.test_data) * self.args.test.n_best self.test_data.reset_cache() return translations # Load model config_path = "/path/to/lean_model/config_local.yaml" # Change this to the path to your model congig file model = JoeyNMTModel(config_path=config_path, n_best=1) # Translate model.translate(sentence="i tɔgɔ bi cogodɔ") ``` ## Training procedure ### Training hyperparameters More information needed ### Training results BLEU Score - 7.10 ### Framework versions - JoeyNMT 2.3.0 - Torch 2.0.1+cu118
{"datasets": ["uvci/Koumankan_mt_dyu_fr"], "language": ["fr"], "license": "mit", "metrics": ["chrf", "bleu"], "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
42,683
gokulsrinivasagan/distilbert_lda_5_v1_book_wnli
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/distilbert_lda_5_v1_book", "base_model:finetune:gokulsrinivasagan/distilbert_lda_5_v1_book", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-09T16:41:07Z
2024-12-09T16:41:39+00:00
11
0
--- base_model: gokulsrinivasagan/distilbert_lda_5_v1_book datasets: - glue language: - en library_name: transformers metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert_lda_5_v1_book_wnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - type: accuracy value: 0.49295774647887325 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_lda_5_v1_book_wnli This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1_book) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7009 - Accuracy: 0.4930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7039 | 1.0 | 3 | 0.7009 | 0.4930 | | 0.6948 | 2.0 | 6 | 0.7106 | 0.3662 | | 0.6961 | 3.0 | 9 | 0.7077 | 0.3662 | | 0.6913 | 4.0 | 12 | 0.7090 | 0.5211 | | 0.6912 | 5.0 | 15 | 0.7190 | 0.2535 | | 0.6938 | 6.0 | 18 | 0.7342 | 0.3099 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.2.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_lda_5_v1_book_wnli This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1_book) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7009 - Accuracy: 0.4930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7039 | 1.0 | 3 | 0.7009 | 0.4930 | | 0.6948 | 2.0 | 6 | 0.7106 | 0.3662 | | 0.6961 | 3.0 | 9 | 0.7077 | 0.3662 | | 0.6913 | 4.0 | 12 | 0.7090 | 0.5211 | | 0.6912 | 5.0 | 15 | 0.7190 | 0.2535 | | 0.6938 | 6.0 | 18 | 0.7342 | 0.3099 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.2.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
{"base_model": "gokulsrinivasagan/distilbert_lda_5_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_5_v1_book_wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.49295774647887325, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,684
davidkim205/komt-llama2-7b-v1-ggml
davidkim205
text-generation
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-2-chat", "text-generation", "en", "ko", "arxiv:2308.06502", "arxiv:2308.06259", "license:apache-2.0", "region:us" ]
2023-09-18T06:39:39Z
2023-09-27T05:39:16+00:00
296
9
--- language: - en - ko license: apache-2.0 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 - llama-2-chat inference: false --- # komt : korean multi task instruction tuning model ![multi task instruction tuning.jpg](https://github.com/davidkim205/komt/assets/16680469/c7f6ade7-247e-4b62-a94f-47e19abea68e) Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities. However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively. This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs). ## Model Details * **Model Developers** : davidkim(changyeon kim) * **Repository** : https://github.com/davidkim205/komt * **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0 ## Training Refer https://github.com/davidkim205/komt ## Evaluation For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) . | model | score | average(0~5) | percentage | | --------------------------------------- | ------- | ------------ | ---------- | | gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% | | naver Cue(close) | 140 | 3.78 | 75.67% | | clova X(close) | 136 | 3.67 | 73.51% | | WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% | | Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% | | Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% | | nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% | | kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% | | beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% | | **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** | | **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
null
Non_BioNLP
# komt : korean multi task instruction tuning model ![multi task instruction tuning.jpg](https://github.com/davidkim205/komt/assets/16680469/c7f6ade7-247e-4b62-a94f-47e19abea68e) Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities. However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively. This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs). ## Model Details * **Model Developers** : davidkim(changyeon kim) * **Repository** : https://github.com/davidkim205/komt * **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0 ## Training Refer https://github.com/davidkim205/komt ## Evaluation For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) . | model | score | average(0~5) | percentage | | --------------------------------------- | ------- | ------------ | ---------- | | gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% | | naver Cue(close) | 140 | 3.78 | 75.67% | | clova X(close) | 136 | 3.67 | 73.51% | | WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% | | Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% | | Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% | | nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% | | kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% | | beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% | | **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** | | **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
{"language": ["en", "ko"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-2", "llama-2-chat"], "inference": false}
task
[ "TRANSLATION" ]
42,685
TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "en", "base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:quantized:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
2024-01-02T09:04:20Z
2024-01-02T09:46:01+00:00
27
15
--- base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B language: - en license: apache-2.0 tags: - SOLAR - instruct - finetune - chatml - gpt4 - synthetic data - distillation inference: false model_creator: NousResearch model_type: solar prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke model-index: - name: Nous-Hermes-2-SOLAR-10.7B results: [] --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous Hermes 2 SOLAR 10.7B - GPTQ - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Nous Hermes 2 SOLAR 10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) <!-- description start --> # Description This repo contains GPTQ model files for [NousResearch's Nous Hermes 2 SOLAR 10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF) * [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Nous-Hermes-2-SOLAR-10.7B-GPTQ`: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-2-SOLAR-10.7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: NousResearch's Nous Hermes 2 SOLAR 10.7B # Nous Hermes 2 - Solar 10.7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dhbOMEW0rOFDp6dH7q7Jp.png) ## Model description Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model on the SOLAR 10.7B base model.. Nous Hermes 2 SOLAR 10.7B was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape. # Table of Contents 1. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Averages Compared 2. [Prompt Format](#prompt-format) 3. [Quantized Models](#quantized-models) ## Benchmark Results Nous-Hermes 2 on SOLAR 10.7B is a major improvement across the board on the benchmarks below compared to the base SOLAR 10.7B model, and comes close to approaching our Yi-34B model! # Benchmarks Compared GPT4All: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/cT-KA0hiV3_IpgOMUTvvt.png) AGIEval: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dwker9iO9F9GDwUoUscHz.png) BigBench: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QGxqfQ8hTPh6bs54TsPGK.png) TruthfulQA: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/60wzJSrAAI4vxAKSywEjy.png) ## GPT4All GPT-4All Benchmark Set ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5768|_ |0.0144| | | |acc_norm|0.6067|_ |0.0143| |arc_easy | 0|acc |0.8375|_ |0.0076| | | |acc_norm|0.8316|_ |0.0077| |boolq | 1|acc |0.8875|_ |0.0055| |hellaswag | 0|acc |0.6467|_ |0.0048| | | |acc_norm|0.8321|_ |0.0037| |openbookqa | 0|acc |0.3420|_ |0.0212| | | |acc_norm|0.4580|_ |0.0223| |piqa | 0|acc |0.8161|_ |0.0090| | | |acc_norm|0.8313|_ |0.0087| |winogrande | 0|acc |0.7814|_ |0.0116| ``` Average: 74.69% AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.3189|_ |0.0293| | | |acc_norm|0.2953|_ |0.0287| |agieval_logiqa_en | 0|acc |0.5438|_ |0.0195| | | |acc_norm|0.4977|_ |0.0196| |agieval_lsat_ar | 0|acc |0.2696|_ |0.0293| | | |acc_norm|0.2087|_ |0.0269| |agieval_lsat_lr | 0|acc |0.7078|_ |0.0202| | | |acc_norm|0.6255|_ |0.0215| |agieval_lsat_rc | 0|acc |0.7807|_ |0.0253| | | |acc_norm|0.7063|_ |0.0278| |agieval_sat_en | 0|acc |0.8689|_ |0.0236| | | |acc_norm|0.8447|_ |0.0253| |agieval_sat_en_without_passage| 0|acc |0.5194|_ |0.0349| | | |acc_norm|0.4612|_ |0.0348| |agieval_sat_math | 0|acc |0.4409|_ |0.0336| | | |acc_norm|0.3818|_ |0.0328| ``` Average: 47.79% BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|_ |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.7263|_ |0.0232| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3953|_ |0.0305| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4457|_ |0.0263| | | |exact_str_match |0.0000|_ |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2820|_ |0.0201| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2186|_ |0.0156| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4733|_ |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.5200|_ |0.0224| |bigbench_navigate | 0|multiple_choice_grade|0.4910|_ |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7495|_ |0.0097| |bigbench_ruin_names | 0|multiple_choice_grade|0.5938|_ |0.0232| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.3808|_ |0.0154| |bigbench_snarks | 0|multiple_choice_grade|0.8066|_ |0.0294| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5101|_ |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3850|_ |0.0154| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2160|_ |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1634|_ |0.0088| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4733|_ |0.0289| Average: 44.84% ``` TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.3917|_ |0.0171| | | |mc2 |0.5592|_ |0.0154| ``` Average Score Comparison between OpenHermes-1 Llama-2 13B and OpenHermes-2 Mistral 7B against OpenHermes-2.5 on Mistral-7B: ``` | Bench | OpenHermes-2.5 Mistral 7B | Nous-Hermes-2-SOLAR-10B | Change/OpenHermes2.5 | |---------------|---------------------------|------------------------|-----------------------| |GPT4All | 73.12| 74.69| +1.57| |--------------------------------------------------------------------------------------------| |BigBench | 40.96| 44.84| +3.88| |--------------------------------------------------------------------------------------------| |AGI Eval | 43.07| 47.79| +4.72| |--------------------------------------------------------------------------------------------| |TruthfulQA | 53.04| 55.92| +2.88| |--------------------------------------------------------------------------------------------| |Total Score | 210.19| 223.24| +23.11| |--------------------------------------------------------------------------------------------| |Average Total | 52.38| 55.81| +3.43| ``` # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Quantized Models: [todo] [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
null
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous Hermes 2 SOLAR 10.7B - GPTQ - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Nous Hermes 2 SOLAR 10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) <!-- description start --> # Description This repo contains GPTQ model files for [NousResearch's Nous Hermes 2 SOLAR 10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF) * [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.98 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.59 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.25 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 11.99 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 6.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Nous-Hermes-2-SOLAR-10.7B-GPTQ`: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Nous-Hermes-2-SOLAR-10.7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir Nous-Hermes-2-SOLAR-10.7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-2-SOLAR-10.7B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Nous-Hermes-2-SOLAR-10.7B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: NousResearch's Nous Hermes 2 SOLAR 10.7B # Nous Hermes 2 - Solar 10.7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dhbOMEW0rOFDp6dH7q7Jp.png) ## Model description Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model on the SOLAR 10.7B base model.. Nous Hermes 2 SOLAR 10.7B was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape. # Table of Contents 1. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Averages Compared 2. [Prompt Format](#prompt-format) 3. [Quantized Models](#quantized-models) ## Benchmark Results Nous-Hermes 2 on SOLAR 10.7B is a major improvement across the board on the benchmarks below compared to the base SOLAR 10.7B model, and comes close to approaching our Yi-34B model! # Benchmarks Compared GPT4All: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/cT-KA0hiV3_IpgOMUTvvt.png) AGIEval: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dwker9iO9F9GDwUoUscHz.png) BigBench: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QGxqfQ8hTPh6bs54TsPGK.png) TruthfulQA: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/60wzJSrAAI4vxAKSywEjy.png) ## GPT4All GPT-4All Benchmark Set ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5768|_ |0.0144| | | |acc_norm|0.6067|_ |0.0143| |arc_easy | 0|acc |0.8375|_ |0.0076| | | |acc_norm|0.8316|_ |0.0077| |boolq | 1|acc |0.8875|_ |0.0055| |hellaswag | 0|acc |0.6467|_ |0.0048| | | |acc_norm|0.8321|_ |0.0037| |openbookqa | 0|acc |0.3420|_ |0.0212| | | |acc_norm|0.4580|_ |0.0223| |piqa | 0|acc |0.8161|_ |0.0090| | | |acc_norm|0.8313|_ |0.0087| |winogrande | 0|acc |0.7814|_ |0.0116| ``` Average: 74.69% AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.3189|_ |0.0293| | | |acc_norm|0.2953|_ |0.0287| |agieval_logiqa_en | 0|acc |0.5438|_ |0.0195| | | |acc_norm|0.4977|_ |0.0196| |agieval_lsat_ar | 0|acc |0.2696|_ |0.0293| | | |acc_norm|0.2087|_ |0.0269| |agieval_lsat_lr | 0|acc |0.7078|_ |0.0202| | | |acc_norm|0.6255|_ |0.0215| |agieval_lsat_rc | 0|acc |0.7807|_ |0.0253| | | |acc_norm|0.7063|_ |0.0278| |agieval_sat_en | 0|acc |0.8689|_ |0.0236| | | |acc_norm|0.8447|_ |0.0253| |agieval_sat_en_without_passage| 0|acc |0.5194|_ |0.0349| | | |acc_norm|0.4612|_ |0.0348| |agieval_sat_math | 0|acc |0.4409|_ |0.0336| | | |acc_norm|0.3818|_ |0.0328| ``` Average: 47.79% BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|_ |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.7263|_ |0.0232| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3953|_ |0.0305| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4457|_ |0.0263| | | |exact_str_match |0.0000|_ |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2820|_ |0.0201| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2186|_ |0.0156| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4733|_ |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.5200|_ |0.0224| |bigbench_navigate | 0|multiple_choice_grade|0.4910|_ |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7495|_ |0.0097| |bigbench_ruin_names | 0|multiple_choice_grade|0.5938|_ |0.0232| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.3808|_ |0.0154| |bigbench_snarks | 0|multiple_choice_grade|0.8066|_ |0.0294| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5101|_ |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3850|_ |0.0154| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2160|_ |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1634|_ |0.0088| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4733|_ |0.0289| Average: 44.84% ``` TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.3917|_ |0.0171| | | |mc2 |0.5592|_ |0.0154| ``` Average Score Comparison between OpenHermes-1 Llama-2 13B and OpenHermes-2 Mistral 7B against OpenHermes-2.5 on Mistral-7B: ``` | Bench | OpenHermes-2.5 Mistral 7B | Nous-Hermes-2-SOLAR-10B | Change/OpenHermes2.5 | |---------------|---------------------------|------------------------|-----------------------| |GPT4All | 73.12| 74.69| +1.57| |--------------------------------------------------------------------------------------------| |BigBench | 40.96| 44.84| +3.88| |--------------------------------------------------------------------------------------------| |AGI Eval | 43.07| 47.79| +4.72| |--------------------------------------------------------------------------------------------| |TruthfulQA | 53.04| 55.92| +2.88| |--------------------------------------------------------------------------------------------| |Total Score | 210.19| 223.24| +23.11| |--------------------------------------------------------------------------------------------| |Average Total | 52.38| 55.81| +3.43| ``` # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Quantized Models: [todo] [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
{"base_model": "NousResearch/Nous-Hermes-2-SOLAR-10.7B", "language": ["en"], "license": "apache-2.0", "tags": ["SOLAR", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation"], "inference": false, "model_creator": "NousResearch", "model_type": "solar", "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "TheBloke", "model-index": [{"name": "Nous-Hermes-2-SOLAR-10.7B", "results": []}]}
task
[ "TRANSLATION" ]
42,686
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_mrpc
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-28T23:39:45Z
2023-01-28T23:45:57+00:00
141
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_mrpc results: - task: type: text-classification name: Text Classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - type: accuracy value: 0.6838235294117647 name: Accuracy - type: f1 value: 0.8122270742358079 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5529 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6239 | 1.0 | 29 | 0.5558 | 0.6838 | 0.8122 | 0.7480 | | 0.6109 | 2.0 | 58 | 0.5553 | 0.6838 | 0.8122 | 0.7480 | | 0.6095 | 3.0 | 87 | 0.5543 | 0.6838 | 0.8122 | 0.7480 | | 0.6072 | 4.0 | 116 | 0.5529 | 0.6838 | 0.8122 | 0.7480 | | 0.5996 | 5.0 | 145 | 0.5556 | 0.6838 | 0.8122 | 0.7480 | | 0.5884 | 6.0 | 174 | 0.5737 | 0.6838 | 0.8122 | 0.7480 | | 0.59 | 7.0 | 203 | 0.5704 | 0.6838 | 0.8122 | 0.7480 | | 0.5715 | 8.0 | 232 | 0.5706 | 0.6814 | 0.8105 | 0.7459 | | 0.5595 | 9.0 | 261 | 0.5660 | 0.6838 | 0.8122 | 0.7480 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5529 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6239 | 1.0 | 29 | 0.5558 | 0.6838 | 0.8122 | 0.7480 | | 0.6109 | 2.0 | 58 | 0.5553 | 0.6838 | 0.8122 | 0.7480 | | 0.6095 | 3.0 | 87 | 0.5543 | 0.6838 | 0.8122 | 0.7480 | | 0.6072 | 4.0 | 116 | 0.5529 | 0.6838 | 0.8122 | 0.7480 | | 0.5996 | 5.0 | 145 | 0.5556 | 0.6838 | 0.8122 | 0.7480 | | 0.5884 | 6.0 | 174 | 0.5737 | 0.6838 | 0.8122 | 0.7480 | | 0.59 | 7.0 | 203 | 0.5704 | 0.6838 | 0.8122 | 0.7480 | | 0.5715 | 8.0 | 232 | 0.5706 | 0.6814 | 0.8105 | 0.7459 | | 0.5595 | 9.0 | 261 | 0.5660 | 0.6838 | 0.8122 | 0.7480 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_add_GLUE_Experiment_logit_kd_mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.6838235294117647, "name": "Accuracy"}, {"type": "f1", "value": 0.8122270742358079, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,687
lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k
lamm-mit
image-text-to-text
[ "transformers", "safetensors", "mllama", "image-text-to-text", "nlp", "code", "vision", "chemistry", "engineering", "biology", "bio-inspired", "text-generation-inference", "materials science", "conversational", "multilingual", "arxiv:2405.19076", "base_model:meta-llama/Llama-3.2-11B-Vision-Instruct", "base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-09-29T09:03:05Z
2024-09-30T17:12:40+00:00
54
6
--- base_model: - meta-llama/Llama-3.2-11B-Vision-Instruct language: - multilingual library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text tags: - nlp - code - vision - chemistry - engineering - biology - bio-inspired - text-generation-inference - materials science inference: parameters: temperature: 0.3 widget: - messages: - role: user content: <|image_1|>Can you describe what you see in the image? --- ## Model Summary Cephalo is a series of multimodal materials science focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks. A novel aspect of Cephalo's development is the innovative dataset generation method. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. Cephalo can interpret complex visual scenes and generating contextually accurate language descriptions and answer queries. The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder model and an autoregressive transformer to process complex natural language understanding. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png) Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods. This version of Cephalo, ```lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k```, is based on the ```meta-llama/Llama-3.2-11B-Vision-Instruct``` model. The model was trained on a combination of scientific text-image data extracted from Wikipedia and scientific papers. For further details on the base model, see: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct. More details about technical aspects of the model, training and example applications to materials science problems are provided in the paper (reference at the bottom). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k/blob/main/Cephalo_Llama_3_2_11b_inference.ipynb) ### Chat Format The ```lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k``` is suiteable for one or more image inputs, wih prompts using the chat format as follows: ```raw messages=[{'role': 'user', 'content': [{'type': 'image'}, {'type': 'text', 'text': 'Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?'}]}] ``` After application of the chat template: ```python input_text = processor.apply_chat_template(messages, add_generation_prompt=True) ``` The raw input text is: ```raw <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n<|image|>Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n ``` ### Sample inference code Update your transformers installation if necessary: ```python pip install -U transformers ``` This code snippets show how to get quickly started on a GPU: ```python from transformers import MllamaForConditionalGeneration, AutoProcessor DEVICE='cuda:0' model_id='lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k' model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, #_attn_implementation="flash_attention_2", trust_remote_code=True, ).to (DEVICE ) processor = AutoProcessor.from_pretrained( model_id, trust_remote_code=True, ) ``` Simple inference example: We are asking a question about this image, showing a material microstructure and associated stress-strain responses. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/4JwIGSfl82hMEyHasOSU4.png) ```python import requests import torch from PIL import Image url = "https://huggingface.co/lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k/resolve/main/architected_stress_strain.png" image = Image.open(requests.get(url, stream=True ).raw) images = [image] messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?"} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(images, input_text, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=512) print(processor.decode(output[0])) ``` Raw output: ``` <|begin_of_text|><|start_header_id|>user<|end_header_id|> <|image|>Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?<|eot_id|><|start_header_id|>assistant<|end_header_id|> The image shows three curves representing the stress-strain response under compression. The x-axis represents strain, which is the deformation experienced by the material relative to its original length, while the y-axis represents stress, which is the force applied per unit area. - The blue curve is labeled "Predicted," indicating a predicted model or simulation result. - The orange curve is labeled "Ground truth," indicating actual experimental data or true values. - The green curve is labeled "Simulation result," likely representing another simulation result for comparison. The curves show an increasing trend of stress with strain, indicating that the material becomes more stressed as it deforms. The predicted and simulation results (blue and green curves) closely follow the ground truth (orange curve), suggesting good agreement among the predicted and simulated models and the actual experimental data. This implies that the models used are accurate in predicting the material's response under compression. The curves do not show significant deviations, indicating reliable modeling and simulation techniques.<|eot_id|> ``` Next we provide a convenience function for inference. This function takes the model, processor, question, and images, along with messages and images objects for repeated chat-like interactions with the model. ```python from tqdm.notebook import tqdm from transformers.image_utils import load_image def ensure_list(obj): if not isinstance(obj, list): return [obj] return obj def is_url_or_filename(val) -> bool: # Check if it's a URL if isinstance(val, str): return True def ask_about_image (model, processor, question, images_input=[], verbatim=False,temperature=0.1,show_image=False, system="You are a materials scientist. ", max_new_tokens=256, messages=[], images=[], ): images_input=ensure_list(images_input) if len (images)==0: if len (images_input)>0: for image in tqdm (images_input) : if is_url(image): is_url_or_filename= load_image(image) images.append (image) if show_image: display ( image ) if len (messages)==0: messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": question} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(image, input_text, return_tensors="pt").to(model.device) else: messages.append ( {"role": "user", "content": [ {"type": "text", "text": question} ]} ) if verbatim: print (messages) text = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = processor(text=text, images=images, return_tensors="pt", ).to(DEVICE) generation_args = { "max_new_tokens": max_new_tokens, "temperature": temperature, "do_sample": True, } generate_ids = model.generate(**inputs,# eos_token_id=processor.tokenizer.eos_token_id, **generation_args) generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:-1] generated_texts = processor.decode(generate_ids[0], clean_up_tokenization_spaces=False) messages.append ( {"role": "assistant", "content": [ {"type": "text", "text": generated_texts}]} ) return generated_texts, messages, images question = """What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI. First brainstorm, then organize your thoughts, then respond.""" url1 = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg" response, messages,images= ask_about_image ( model, processor, question, images_input=[url1,], temperature=0.1, system= '', init_instr='You carefully study the image, and respond accurately, but succinctly. Think step-by-step.\n\n', show_conversation=True, max_new_tokens=512, messages=[], images=[]) print (response) ``` Sample output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/5n6oRNHrfwHkBX0QertZp.png) <small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small> <pre style="white-space: pre-wrap;"> The image shows a group of ants working together to move a large object. This scene illustrates the concept of swarm intelligence, where individual agents (ants) collectively achieve a complex task through decentralized, self-organized behavior. In materials design, this concept can be applied to develop new materials and structures by mimicking the behavior of swarms. For instance, researchers have used swarm intelligence algorithms to optimize the design of composite materials, such as fiber-reinforced polymers, by simulating the behavior of ants or other swarming organisms. These algorithms can help identify the optimal arrangement of fibers to maximize strength and minimize weight. Multi-agent AI, which involves the coordination of multiple autonomous agents to achieve a common goal, can also be used in materials design. This approach can be applied to simulate the behavior of complex systems, such as biological tissues or nanomaterials, and optimize their properties through machine learning algorithms. By analyzing the behavior of individual agents and their interactions, researchers can develop new materials with improved performance and functionality. In summary, the image of ants working together to move a large object serves as a metaphor for the potential of swarm intelligence and multi-agent AI in materials design. By mimicking the behavior of swarms, researchers can develop new materials and structures with improved properties and functionality. </pre> ## Dataset generation The schematic below shows a visualization of the approach to generate datasets for training the vision model. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. The image below shows reproductions of two representative pages of the scientific article (here, Spivak, Buehler, et al., 2011), and how they are used to extract visual scientific data for training the Cephalo model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/qHURSBRWEDgHy4o56escN.png) ## Citation Please cite as: ```bibtex @article{Buehler_Cephalo_2024, title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design}, author={Markus J. Buehler}, journal={arXiv preprint arXiv:2405.19076}, year={2024} } ```
null
Non_BioNLP
## Model Summary Cephalo is a series of multimodal materials science focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks. A novel aspect of Cephalo's development is the innovative dataset generation method. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. Cephalo can interpret complex visual scenes and generating contextually accurate language descriptions and answer queries. The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder model and an autoregressive transformer to process complex natural language understanding. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png) Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods. This version of Cephalo, ```lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k```, is based on the ```meta-llama/Llama-3.2-11B-Vision-Instruct``` model. The model was trained on a combination of scientific text-image data extracted from Wikipedia and scientific papers. For further details on the base model, see: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct. More details about technical aspects of the model, training and example applications to materials science problems are provided in the paper (reference at the bottom). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k/blob/main/Cephalo_Llama_3_2_11b_inference.ipynb) ### Chat Format The ```lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k``` is suiteable for one or more image inputs, wih prompts using the chat format as follows: ```raw messages=[{'role': 'user', 'content': [{'type': 'image'}, {'type': 'text', 'text': 'Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?'}]}] ``` After application of the chat template: ```python input_text = processor.apply_chat_template(messages, add_generation_prompt=True) ``` The raw input text is: ```raw <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n<|image|>Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n ``` ### Sample inference code Update your transformers installation if necessary: ```python pip install -U transformers ``` This code snippets show how to get quickly started on a GPU: ```python from transformers import MllamaForConditionalGeneration, AutoProcessor DEVICE='cuda:0' model_id='lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k' model = MllamaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, #_attn_implementation="flash_attention_2", trust_remote_code=True, ).to (DEVICE ) processor = AutoProcessor.from_pretrained( model_id, trust_remote_code=True, ) ``` Simple inference example: We are asking a question about this image, showing a material microstructure and associated stress-strain responses. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/4JwIGSfl82hMEyHasOSU4.png) ```python import requests import torch from PIL import Image url = "https://huggingface.co/lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k/resolve/main/architected_stress_strain.png" image = Image.open(requests.get(url, stream=True ).raw) images = [image] messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?"} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(images, input_text, return_tensors="pt").to(model.device) output = model.generate(**inputs, max_new_tokens=512) print(processor.decode(output[0])) ``` Raw output: ``` <|begin_of_text|><|start_header_id|>user<|end_header_id|> <|image|>Consider the stress-strain response under compression. What are the three curves shown. Based on an inspection of the plot, do they show good agreement or are they very different?<|eot_id|><|start_header_id|>assistant<|end_header_id|> The image shows three curves representing the stress-strain response under compression. The x-axis represents strain, which is the deformation experienced by the material relative to its original length, while the y-axis represents stress, which is the force applied per unit area. - The blue curve is labeled "Predicted," indicating a predicted model or simulation result. - The orange curve is labeled "Ground truth," indicating actual experimental data or true values. - The green curve is labeled "Simulation result," likely representing another simulation result for comparison. The curves show an increasing trend of stress with strain, indicating that the material becomes more stressed as it deforms. The predicted and simulation results (blue and green curves) closely follow the ground truth (orange curve), suggesting good agreement among the predicted and simulated models and the actual experimental data. This implies that the models used are accurate in predicting the material's response under compression. The curves do not show significant deviations, indicating reliable modeling and simulation techniques.<|eot_id|> ``` Next we provide a convenience function for inference. This function takes the model, processor, question, and images, along with messages and images objects for repeated chat-like interactions with the model. ```python from tqdm.notebook import tqdm from transformers.image_utils import load_image def ensure_list(obj): if not isinstance(obj, list): return [obj] return obj def is_url_or_filename(val) -> bool: # Check if it's a URL if isinstance(val, str): return True def ask_about_image (model, processor, question, images_input=[], verbatim=False,temperature=0.1,show_image=False, system="You are a materials scientist. ", max_new_tokens=256, messages=[], images=[], ): images_input=ensure_list(images_input) if len (images)==0: if len (images_input)>0: for image in tqdm (images_input) : if is_url(image): is_url_or_filename= load_image(image) images.append (image) if show_image: display ( image ) if len (messages)==0: messages = [ {"role": "user", "content": [ {"type": "image"}, {"type": "text", "text": question} ]} ] input_text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(image, input_text, return_tensors="pt").to(model.device) else: messages.append ( {"role": "user", "content": [ {"type": "text", "text": question} ]} ) if verbatim: print (messages) text = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = processor(text=text, images=images, return_tensors="pt", ).to(DEVICE) generation_args = { "max_new_tokens": max_new_tokens, "temperature": temperature, "do_sample": True, } generate_ids = model.generate(**inputs,# eos_token_id=processor.tokenizer.eos_token_id, **generation_args) generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:-1] generated_texts = processor.decode(generate_ids[0], clean_up_tokenization_spaces=False) messages.append ( {"role": "assistant", "content": [ {"type": "text", "text": generated_texts}]} ) return generated_texts, messages, images question = """What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI. First brainstorm, then organize your thoughts, then respond.""" url1 = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg" response, messages,images= ask_about_image ( model, processor, question, images_input=[url1,], temperature=0.1, system= '', init_instr='You carefully study the image, and respond accurately, but succinctly. Think step-by-step.\n\n', show_conversation=True, max_new_tokens=512, messages=[], images=[]) print (response) ``` Sample output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/5n6oRNHrfwHkBX0QertZp.png) <small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small> <pre style="white-space: pre-wrap;"> The image shows a group of ants working together to move a large object. This scene illustrates the concept of swarm intelligence, where individual agents (ants) collectively achieve a complex task through decentralized, self-organized behavior. In materials design, this concept can be applied to develop new materials and structures by mimicking the behavior of swarms. For instance, researchers have used swarm intelligence algorithms to optimize the design of composite materials, such as fiber-reinforced polymers, by simulating the behavior of ants or other swarming organisms. These algorithms can help identify the optimal arrangement of fibers to maximize strength and minimize weight. Multi-agent AI, which involves the coordination of multiple autonomous agents to achieve a common goal, can also be used in materials design. This approach can be applied to simulate the behavior of complex systems, such as biological tissues or nanomaterials, and optimize their properties through machine learning algorithms. By analyzing the behavior of individual agents and their interactions, researchers can develop new materials with improved performance and functionality. In summary, the image of ants working together to move a large object serves as a metaphor for the potential of swarm intelligence and multi-agent AI in materials design. By mimicking the behavior of swarms, researchers can develop new materials and structures with improved properties and functionality. </pre> ## Dataset generation The schematic below shows a visualization of the approach to generate datasets for training the vision model. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. The image below shows reproductions of two representative pages of the scientific article (here, Spivak, Buehler, et al., 2011), and how they are used to extract visual scientific data for training the Cephalo model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/qHURSBRWEDgHy4o56escN.png) ## Citation Please cite as: ```bibtex @article{Buehler_Cephalo_2024, title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design}, author={Markus J. Buehler}, journal={arXiv preprint arXiv:2405.19076}, year={2024} } ```
{"base_model": ["meta-llama/Llama-3.2-11B-Vision-Instruct"], "language": ["multilingual"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["nlp", "code", "vision", "chemistry", "engineering", "biology", "bio-inspired", "text-generation-inference", "materials science"], "inference": {"parameters": {"temperature": 0.3}}, "widget": [{"messages": [{"role": "user", "content": "<|image_1|>Can you describe what you see in the image?"}]}]}
task
[ "QUESTION_ANSWERING" ]
42,688
gaudi/opus-mt-fi-toi-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-22T15:55:00Z
2024-10-19T04:02:28+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-fi-toi --output_dir ./ctranslate2/opus-mt-fi-toi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-fi-toi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-fi-toi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-fi-toi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-fi-toi --output_dir ./ctranslate2/opus-mt-fi-toi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-fi-toi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-fi-toi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-fi-toi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fi-toi) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
42,689
jovi848/autotrain-eng-ta-json-73876139369
jovi848
translation
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain", "translation", "unk", "dataset:jovi848/autotrain-data-eng-ta-json", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-07-11T22:16:50Z
2023-07-11T23:05:03+00:00
9
0
--- datasets: - jovi848/autotrain-data-eng-ta-json language: - unk - unk tags: - autotrain - translation co2_eq_emissions: emissions: 33.5213011411702 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 73876139369 - CO2 Emissions (in grams): 33.5213 ## Validation Metrics - Loss: 0.000 - SacreBLEU: 0.001 - Gen len: 19.000
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Translation - Model ID: 73876139369 - CO2 Emissions (in grams): 33.5213 ## Validation Metrics - Loss: 0.000 - SacreBLEU: 0.001 - Gen len: 19.000
{"datasets": ["jovi848/autotrain-data-eng-ta-json"], "language": ["unk", "unk"], "tags": ["autotrain", "translation"], "co2_eq_emissions": {"emissions": 33.5213011411702}}
task
[ "TRANSLATION" ]
42,690
lrei/roberta-large-emolit
lrei
text-classification
[ "transformers", "pytorch", "roberta", "text-classification", "en", "doi:10.57967/hf/0849", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-25T12:43:42Z
2023-06-27T12:28:44+00:00
13
0
--- language: - en library_name: transformers license: mit pipeline_tag: text-classification widget: - text: Thank you for your help, brave traveler. - text: There is no creature loves me; And if I die no soul will pity me. - text: We men are wretched things. --- ## Description Literature sentences from [Project Gutenberg](https://www.gutenberg.org/). 38 emotion labels (+neutral examples). Semi-Supervised dataset. ## Article [Detecting Fine-Grained Emotions in Literature](https://www.mdpi.com/2076-3417/13/13/7502) Please cite: ```plain text @Article{app13137502, AUTHOR = {Rei, Luis and Mladenić, Dunja}, TITLE = {Detecting Fine-Grained Emotions in Literature}, JOURNAL = {Applied Sciences}, VOLUME = {13}, YEAR = {2023}, NUMBER = {13}, ARTICLE-NUMBER = {7502}, URL = {https://www.mdpi.com/2076-3417/13/13/7502}, ISSN = {2076-3417}, DOI = {10.3390/app13137502} } ``` ## Abstract Emotion detection in text is a fundamental aspect of affective computing and is closely linked to natural language processing. Its applications span various domains, from interactive chatbots to marketing and customer service. This research specifically focuses on its significance in literature analysis and understanding. To facilitate this, we present a novel approach that involves creating a multi-label fine-grained emotion detection dataset, derived from literary sources. Our methodology employs a simple yet effective semi-supervised technique. We leverage textual entailment classification to perform emotion-specific weak-labeling, selecting examples with the highest and lowest scores from a large corpus. Utilizing these emotion-specific datasets, we train binary pseudo-labeling classifiers for each individual emotion. By applying this process to the selected examples, we construct a multi-label dataset. Using this dataset, we train models and evaluate their performance within a traditional supervised setting. Our model achieves an F1 score of 0.59 on our labeled gold set, showcasing its ability to effectively detect fine-grained emotions. Furthermore, we conduct evaluations of the model's performance in zero- and few-shot transfer scenarios using benchmark datasets. Notably, our results indicate that the knowledge learned from our dataset exhibits transferability across diverse data domains, demonstrating its potential for broader applications beyond emotion detection in literature. Our contribution thus includes a multi-label fine-grained emotion detection dataset built from literature, the semi-supervised approach used to create it, as well as the models trained on it. This work provides a solid foundation for advancing emotion detection techniques and their utilization in various scenarios, especially within the cultural heritage analysis. ## Labels - admiration: finds something admirable, impressive or worthy of respect - amusement: finds something funny, entertaining or amusing - anger: is angry, furious, or strongly displeased; displays ire, rage, or wrath - annoyance: is annoyed or irritated - approval: expresses a favorable opinion, approves, endorses or agrees with something or someone - boredom: feels bored, uninterested, monotony, tedium - calmness: is calm, serene, free from agitation or disturbance, experiences emotional tranquility - caring: cares about the well-being of someone else, feels sympathy, compassion, affectionate concern towards someone, displays kindness or generosity - courage: feels courage or the ability to do something that frightens one, displays fearlessness or bravery - curiosity: is interested, curious, or has strong desire to learn something - desire: has a desire or ambition, wants something, wishes for something to happen - despair: feels despair, helpless, powerless, loss or absence of hope, desperation, despondency - disappointment: feels sadness or displeasure caused by the non-fulfillment of hopes or expectations, being or let down, expresses regret due to the unfavorable outcome of a decision - disapproval: expresses an unfavorable opinion, disagrees or disapproves of something or someone - disgust: feels disgust, revulsion, finds something or someone unpleasant, offensive or hateful - doubt: has doubt or is uncertain about something, bewildered, confused, or shows lack of understanding - embarrassment: feels embarrassed, awkward, self-conscious, shame, or humiliation - envy: is covetous, feels envy or jealousy; begrudges or resents someone for their achievements, possessions, or qualities - excitement: feels excitement or great enthusiasm and eagerness - faith: expresses religious faith, has a strong belief in the doctrines of a religion, or trust in god - fear: is afraid or scared due to a threat, danger, or harm - frustration: feels frustrated: upset or annoyed because of inability to change or achieve something - gratitude: is thankful or grateful for something - greed: is greedy, rapacious, avaricious, or has selfish desire to acquire or possess more than what one needs - grief: feels grief or intense sorrow, or grieves for someone who has died - guilt: feels guilt, remorse, or regret to have committed wrong or failed in an obligation - indifference: is uncaring, unsympathetic, uncharitable, or callous, shows indifference, lack of concern, coldness towards someone - joy: is happy, feels joy, great pleasure, elation, satisfaction, contentment, or delight - love: feels love, strong affection, passion, or deep romantic attachment for someone - nervousness: feels nervous, anxious, worried, uneasy, apprehensive, stressed, troubled or tense - nostalgia: feels nostalgia, longing or wistful affection for the past, something lost, or for a period in one’s life, feels homesickness, a longing for one’s home, city, or country while being away; longing for a familiar place - optimism: feels optimism or hope, is hopeful or confident about the future, that something good may happen, or the success of something - pain: feels physical pain or is experiences physical suffering - pride: is proud, feels pride from one’s own achievements, self-fulfillment, or from the achievements of those with whom one is closely associated, or from qualities or possessions that are widely admired - relief: feels relaxed, relief from tension or anxiety - sadness: feels sadness, sorrow, unhappiness, depression, dejection - surprise: is surprised, astonished or shocked by something unexpected - trust: trusts or has confidence in someone, or believes that someone is good, honest, or reliable ## Dataset [EmoLit (Zenodo)](https://zenodo.org/record/7883954) ## Code [EmoLit Train (Github)](https://github.com/lrei/emolit_train) ## Models - [LARGE](https://huggingface.co/lrei/roberta-large-emolit) - [BASE](https://huggingface.co/lrei/roberta-base-emolit) - [DISTILL](https://huggingface.co/lrei/distilroberta-base-emolit)
null
Non_BioNLP
## Description Literature sentences from [Project Gutenberg](https://www.gutenberg.org/). 38 emotion labels (+neutral examples). Semi-Supervised dataset. ## Article [Detecting Fine-Grained Emotions in Literature](https://www.mdpi.com/2076-3417/13/13/7502) Please cite: ```plain text @Article{app13137502, AUTHOR = {Rei, Luis and Mladenić, Dunja}, TITLE = {Detecting Fine-Grained Emotions in Literature}, JOURNAL = {Applied Sciences}, VOLUME = {13}, YEAR = {2023}, NUMBER = {13}, ARTICLE-NUMBER = {7502}, URL = {https://www.mdpi.com/2076-3417/13/13/7502}, ISSN = {2076-3417}, DOI = {10.3390/app13137502} } ``` ## Abstract Emotion detection in text is a fundamental aspect of affective computing and is closely linked to natural language processing. Its applications span various domains, from interactive chatbots to marketing and customer service. This research specifically focuses on its significance in literature analysis and understanding. To facilitate this, we present a novel approach that involves creating a multi-label fine-grained emotion detection dataset, derived from literary sources. Our methodology employs a simple yet effective semi-supervised technique. We leverage textual entailment classification to perform emotion-specific weak-labeling, selecting examples with the highest and lowest scores from a large corpus. Utilizing these emotion-specific datasets, we train binary pseudo-labeling classifiers for each individual emotion. By applying this process to the selected examples, we construct a multi-label dataset. Using this dataset, we train models and evaluate their performance within a traditional supervised setting. Our model achieves an F1 score of 0.59 on our labeled gold set, showcasing its ability to effectively detect fine-grained emotions. Furthermore, we conduct evaluations of the model's performance in zero- and few-shot transfer scenarios using benchmark datasets. Notably, our results indicate that the knowledge learned from our dataset exhibits transferability across diverse data domains, demonstrating its potential for broader applications beyond emotion detection in literature. Our contribution thus includes a multi-label fine-grained emotion detection dataset built from literature, the semi-supervised approach used to create it, as well as the models trained on it. This work provides a solid foundation for advancing emotion detection techniques and their utilization in various scenarios, especially within the cultural heritage analysis. ## Labels - admiration: finds something admirable, impressive or worthy of respect - amusement: finds something funny, entertaining or amusing - anger: is angry, furious, or strongly displeased; displays ire, rage, or wrath - annoyance: is annoyed or irritated - approval: expresses a favorable opinion, approves, endorses or agrees with something or someone - boredom: feels bored, uninterested, monotony, tedium - calmness: is calm, serene, free from agitation or disturbance, experiences emotional tranquility - caring: cares about the well-being of someone else, feels sympathy, compassion, affectionate concern towards someone, displays kindness or generosity - courage: feels courage or the ability to do something that frightens one, displays fearlessness or bravery - curiosity: is interested, curious, or has strong desire to learn something - desire: has a desire or ambition, wants something, wishes for something to happen - despair: feels despair, helpless, powerless, loss or absence of hope, desperation, despondency - disappointment: feels sadness or displeasure caused by the non-fulfillment of hopes or expectations, being or let down, expresses regret due to the unfavorable outcome of a decision - disapproval: expresses an unfavorable opinion, disagrees or disapproves of something or someone - disgust: feels disgust, revulsion, finds something or someone unpleasant, offensive or hateful - doubt: has doubt or is uncertain about something, bewildered, confused, or shows lack of understanding - embarrassment: feels embarrassed, awkward, self-conscious, shame, or humiliation - envy: is covetous, feels envy or jealousy; begrudges or resents someone for their achievements, possessions, or qualities - excitement: feels excitement or great enthusiasm and eagerness - faith: expresses religious faith, has a strong belief in the doctrines of a religion, or trust in god - fear: is afraid or scared due to a threat, danger, or harm - frustration: feels frustrated: upset or annoyed because of inability to change or achieve something - gratitude: is thankful or grateful for something - greed: is greedy, rapacious, avaricious, or has selfish desire to acquire or possess more than what one needs - grief: feels grief or intense sorrow, or grieves for someone who has died - guilt: feels guilt, remorse, or regret to have committed wrong or failed in an obligation - indifference: is uncaring, unsympathetic, uncharitable, or callous, shows indifference, lack of concern, coldness towards someone - joy: is happy, feels joy, great pleasure, elation, satisfaction, contentment, or delight - love: feels love, strong affection, passion, or deep romantic attachment for someone - nervousness: feels nervous, anxious, worried, uneasy, apprehensive, stressed, troubled or tense - nostalgia: feels nostalgia, longing or wistful affection for the past, something lost, or for a period in one’s life, feels homesickness, a longing for one’s home, city, or country while being away; longing for a familiar place - optimism: feels optimism or hope, is hopeful or confident about the future, that something good may happen, or the success of something - pain: feels physical pain or is experiences physical suffering - pride: is proud, feels pride from one’s own achievements, self-fulfillment, or from the achievements of those with whom one is closely associated, or from qualities or possessions that are widely admired - relief: feels relaxed, relief from tension or anxiety - sadness: feels sadness, sorrow, unhappiness, depression, dejection - surprise: is surprised, astonished or shocked by something unexpected - trust: trusts or has confidence in someone, or believes that someone is good, honest, or reliable ## Dataset [EmoLit (Zenodo)](https://zenodo.org/record/7883954) ## Code [EmoLit Train (Github)](https://github.com/lrei/emolit_train) ## Models - [LARGE](https://huggingface.co/lrei/roberta-large-emolit) - [BASE](https://huggingface.co/lrei/roberta-base-emolit) - [DISTILL](https://huggingface.co/lrei/distilroberta-base-emolit)
{"language": ["en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "text-classification", "widget": [{"text": "Thank you for your help, brave traveler."}, {"text": "There is no creature loves me; And if I die no soul will pity me."}, {"text": "We men are wretched things."}]}
task
[ "TEXTUAL_ENTAILMENT" ]
42,691
tmnam20/mdeberta-v3-base-vnrte-100
tmnam20
text-classification
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "en", "dataset:tmnam20/VieGLUE", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-16T08:41:59Z
2024-01-16T08:44:53+00:00
6
0
--- base_model: microsoft/mdeberta-v3-base datasets: - tmnam20/VieGLUE language: - en license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: mdeberta-v3-base-vnrte-100 results: - task: type: text-classification name: Text Classification dataset: name: tmnam20/VieGLUE/VNRTE type: tmnam20/VieGLUE config: vnrte split: validation args: vnrte metrics: - type: accuracy value: 0.9987248963978324 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-vnrte-100 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VNRTE dataset. It achieves the following results on the evaluation set: - Loss: 0.0063 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 100 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0031 | 1.28 | 500 | 0.0002 | 1.0 | | 0.0002 | 2.55 | 1000 | 0.0011 | 0.9997 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-vnrte-100 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VNRTE dataset. It achieves the following results on the evaluation set: - Loss: 0.0063 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 100 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0031 | 1.28 | 500 | 0.0002 | 1.0 | | 0.0002 | 2.55 | 1000 | 0.0011 | 0.9997 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"base_model": "microsoft/mdeberta-v3-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mdeberta-v3-base-vnrte-100", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/VNRTE", "type": "tmnam20/VieGLUE", "config": "vnrte", "split": "validation", "args": "vnrte"}, "metrics": [{"type": "accuracy", "value": 0.9987248963978324, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,692
Trelis/all-MiniLM-L12-v2-ft-pairs-cosine
Trelis
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1382", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L12-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-06-14T21:54:13Z
2024-06-14T21:54:21+00:00
7
0
--- base_model: sentence-transformers/all-MiniLM-L12-v2 datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1382 - loss:CosineSimilarityLoss widget: - source_sentence: What is the definition of 'Advantage' in Touch Rugby International Rules? sentences: - "1\twhen a Change of Possession takes place due to a player in Possession \nmaking\ \ contact with the Sideline or any ground outside the Field of Play, \nprior to\ \ a Touch being made; or\n13.6.2\twhen the ball not in Possession of a player\ \ makes contact with the \nSideline or any ground outside the Field of Play.\n\ 13.7\tA player may not perform a Tap in replacement of a Rollball.\nRuling = The\ \ offending Team must return to the Mark and perform the Rollball.\n13.8\tAn attacking\ \ player, other than the player performing the Rollball, may receive \nthe ball\ \ at the Rollball and shall do so without delay. That player is referred to as\ \ \nthe Half.\n13.9\tThe Half may control the ball with a foot prior to picking\ \ up the ball. \n13.10\tA player ceases to be the Half once the ball is passed\ \ to another player.\n13.11\tDefending players are not to interfere with the performance\ \ of the Rollball or the \nHalf. \nRuling = A Penalty to the Attacking Team at\ \ a point ten (10) metres directly Forward of the \nInfringement.\n13.12\tPlayers\ \ of the Defending Team must not move Forward of the Onside position \nuntil the\ \ Half has made contact with the ball, unless directed to do so by the \nReferee\ \ or in accordance with 13.12.1.\n13.12.1\tWhen the Half is not within one (1)\ \ metre of the Rollball, Onside players \nof the Defending Team may move Forward\ \ as soon as the player \nperforming the Rollball releases the ball. If the Half\ \ is not in position and \na defending player moves Forward and makes contact\ \ with the ball, a \nChange of Possession results.\n13.13\tIf in the act of performing\ \ the Rollball, the Attacking player makes contact with \nthe Sideline or any\ \ ground outside the Field of Play a Change of Possession will \noccur with the\ \ Rollball to be taken seven (7) metres in field.\n13.14\tAfter a Touch is made\ \ between the Dead Ball Line and the seven (7) metre line, \nan Attacking Team\ \ is permitted to Rollball on the seven (7) metre line at a point \ndirectly in\ \ line with where the Touch was made.\nFIT Playing Rules - 5th Edition\n12\nCOPYRIGHT\ \ © Touch Football Australia" - " distance beyond the Defending Try Line.\nRuling = A Penalty to the Attacking\ \ Team at the point of the Infringement or on the seven (7) \nmetre line directly\ \ Forward of the Infringement.\n15.4\tWhen a Rollball occurs within Defending\ \ Team’s Seven Metre Zone or a Penalty \nTap within ten (10) metres of the Defending\ \ Team’s Try Line, all players from the \nDefending Team must have both feet on\ \ or behind their Try Line and no other \npart of the body in contact with the\ \ ground Forward of their Try Line.\nRuling = A Penalty to the Attacking Team\ \ at the seven (7) metre line directly Forward of the \npoint of the Infringement.\n\ 15.5\tAfter effecting the Touch, the defending player must retire the required\ \ seven \n(7) metres or to the Defending Try Line as indicated by the Referee\ \ without \ninterfering with the Attacking Team.\nRuling = A Penalty to the Attacking\ \ Team ten (10) metres Forward of the Infringement or if \non the Defensive Try\ \ Line, on the seven (7) metre line.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT\ \ © Touch Football Australia 2020\n13\n16  Obstruction \n16.1\tA player in Possession\ \ must not run or otherwise move behind other attacking \nplayers or the Referee\ \ in an attempt to avoid an Imminent Touch.\nRuling = A Penalty to the non-offending\ \ Team at the point of the Infringement. \n16.2\tThe player in Possession is not\ \ to hold or otherwise impede a defending player \nin any way.\nRuling = A Penalty\ \ to the non-offending Team at the point of the Infringement. \n16.3\tAn attacking\ \ player in support of the player in Possession may move as \nnecessary to achieve\ \ a supporting position but must not grab, hold, push or \notherwise deliberately\ \ interfere with a defending player attempting to make a \nTouch\nRuling = A Penalty\ \ to the non-offending Team at the point of the Infringement or on the \nseven\ \ (7) metre line if the Infringement occurs in the In-Goal Area. \n16.4\tPlayers\ \ in the Defending Team may not obstruct or interfere with an attacking \nplayer.\n\ Ruling = A Penalty to the non-offending Team at the point of the In" - " without delay. While in a position at the end of the Field of Play, \nthe Team\ \ coach(s) or Team official must remain no closer than five (5) metres \nfrom\ \ the Dead Ball Line and must not coach or communicate (verbal or non-\nverbal)\ \ with either Team or the Referees.\n7  Commencement and Recommencement of Play\ \ \n7.1\tTeam captains are to toss a coin in the presence of the Referee(s) with\ \ the \nwinning captain’s Team having the choice of the direction the Team wishes\ \ \nto run in the first half; the choice of Interchange Areas for the duration\ \ of the \nmatch, including any extra time; and the choice of which team will\ \ commence \nthe match in Possession.\n7.2\tA player of the Attacking Team is\ \ to commence the match with a Tap at the \ncentre of the Halfway Line following\ \ the indication to commence play from the \nReferee.\n7.3\tFor the recommencement\ \ of play following a Half Time break, Teams shall \nchange directions and the\ \ Team not commencing the match with Possession is \nto re-start the match with\ \ a Tap.\n7.4\tFor the recommencement of play following the scoring of a Try,\ \ the Team \nagainst which the Try was conceded will recommence play with a Tap.\n\ 7.5\tA Tap may not be taken until all attacking Players are in an Onside position.\n\ Ruling = The Player will be directed to return to the Mark and to take the Tap\ \ again.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia\ \ 2020\n7\n7.6\tA Tap may not be taken until at least four (4) defending players\ \ are in an Onside \nposition or unless directed to so by the Referee. Where the\ \ number of players \non the field from the Defending Team falls below four (4),\ \ all players must be in \nan Onside position for a Tap to be taken unless directed\ \ to do so by the Referee.\nRuling = The Player will be directed to return to\ \ the Mark and to take the Tap again.\n7.7\tThe Tap to commence or recommence\ \ play must be performed without delay.\nRuling = A Penalty to the non-offending\ \ team at the centre of the Halfway line.\n8  Match Duration \n \n8.1\tA match\ \ is 40 minutes in duration, consisting of two (2) x 20 minute" - source_sentence: What is the term for the area in the Field of Play bounded by the Sidelines, the Try Lines and the Dead Ball Lines? sentences: - " related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction\ \ over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect\ \ the Field of Play, Line Markings and Markers prior to the \ncommencement of\ \ the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the\ \ Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n\ 25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count\ \ of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements\ \ against the Rules; and\n25.1.7\tReport to the relevant competition administration\ \ any Sin Bins, \nDismissals or injuries to any participant sustained during a\ \ Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision\ \ directly \nfrom the Referee. An approach may only be made during a break in\ \ play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n\ 18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL\ \ AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n\ 10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix\ \ 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football\ \ Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n" - " Player\nThe player who replaces another player during Interchange. There is\ \ \na maximum of eight (8) substitute players in any Team and except \nwhen interchanging,\ \ in the Sin Bin, dismissed or on the Field of Play, \nthey must remain in the\ \ Substitution Box.\nTap and Tap Penalty\nThe method of commencing the match,\ \ recommencing the match \nafter Half Time and after a Try has been scored. The\ \ Tap is also the \nmethod of recommencing play when a Penalty is awarded. The\ \ Tap \nis taken by placing the ball on the ground at or behind the Mark, \nreleasing\ \ both hands from the ball, tapping the ball gently with either \nfoot or touching\ \ the foot on the ball. The ball must not roll or move \nmore than one (1) metre\ \ in any direction and must be retrieved \ncleanly, without touching the ground\ \ again. The player may face any \ndirection and use either foot. Provided it\ \ is at the Mark, the ball does \nnot have to be lifted from the ground prior\ \ to a Tap being taken.\nTeam\nA group of players constituting one (1) side in\ \ a competition match.\nTFA\nTouch Football Australia Limited\nTouch\nAny contact\ \ between the player in Possession and a defending \nplayer. A Touch includes\ \ contact on the ball, hair or clothing and may \nbe made by a defending player\ \ or by the player in Possession.\nTouch Count\nThe progressive number of Touches\ \ that each Team has before a \nChange of Possession, from zero (0) to six (6).\n\ Try\nThe result of any attacking player, except the Half, placing the ball on\ \ \nor over the Team’s Attacking Try Line before being Touched.\nTry Lines\nThe\ \ lines separating the In-Goal Areas from the Field of Play. See \nAppendix 1.\n\ Voluntary Rollball\nThe player in Possession performs a Rollball before a Touch\ \ is made \nwith a defending player.\nWing\nThe player outside the Link player.\n\ Winner\nThe Team that scores the most Tries during the match.\nFIT Playing Rules\ \ - 5th Edition\n4\nCOPYRIGHT © Touch Football Australia 2020\n Rules of Play\ \ \n Mode of Play \nThe object of the game of Touch is for each Team to score\ \ Tries and to prevent the \nopposition from scoring. The ball may be passed,\ \ knocked or handed between players \nof the Attacking Team who may in turn run" - " item that may prove dangerous. \nAny jewellery or other items that cannot be\ \ removed are to be taped to the \nsatisfaction of the Referee.\n4.8\tLong (extend\ \ beyond the finger flesh when viewed from the palm) or sharp \nfingernails are\ \ not allowed.\n4.9\tReferees and players may wear spectacles or sunglasses provided\ \ they are safe \nand securely attached.\n4.10\tReferees and players may wear\ \ sport monitoring equipment and medical \nsupports such as knee or ankle braces\ \ provided, at the sole discretion of \ncompetition’s controlling body, the items\ \ are not dangerous.\n5  Team Composition \n5.1\tA Team consists of a maximum\ \ of 14 players, no more than six (6) of whom are \nallowed on the field at any\ \ time. \nFIT Playing Rules - 5th Edition\n6\nCOPYRIGHT © Touch Football Australia\ \ 2020\nRuling = A Penalty awarded to the non-offending Team at the time the offence\ \ is identified \nseven (7) metres infield on the Halfway Line or the position\ \ of the ball, whichever is the \ngreater Advantage.\n5.2\tA Team must have a\ \ minimum of four (4) players on the field for a match to \ncommence or continue,\ \ except during a Drop-Off.\n5.3\tWhere the number of players on the field from\ \ one Team falls below four (4), \nthe match is to be abandoned and the non-offending\ \ Team is to be declared the \nWinner.\n5.3.1\tThis does not apply for players\ \ sent to the Sin Bin Area.\n5.4\tIn mixed gender competitions, the maximum number\ \ of males allowed on the \nField of Play is three (3), the minimum male requirement\ \ is one (1) and the \nminimum female requirement is one (1).\n6  Team Coach and\ \ Team Officials \n6.1\tThe Team coach(s) and Team officials may be permitted\ \ inside the Perimeter \nbut shall be required to be positioned either in the\ \ Interchange Area or at the \nend of the Field of Play for the duration of the\ \ match.\n6.2\tThe Team coach(s) and Team officials may move from one position\ \ to the other \nbut shall do so without delay. While in a position at the end\ \ of the Field of Play, \nthe Team coach(s) or Team official must remain no closer\ \ than five (5) metres \nfrom the Dead Ball Line and must not coach or communicate\ \ (verbal" - source_sentence: What is the consequence for a player or official who is dismissed for misconduct in a match? sentences: - " to the Sin Bin must return to the Interchange Area prior to re-\nentering the\ \ Field of Play.\n22.4\tAny action that causes the Touch Count to restart will\ \ result in a continuation of \nthat Possession. For the avoidance of doubt, should\ \ a defender knock the ball \ndown or give away a Penalty, this does not mean\ \ that the Possession has been \ncompleted, but rather the Possession continues.\ \ \nFIT Playing Rules - 5th Edition\n16\nCOPYRIGHT © Touch Football Australia\ \ 2020\n23  Dismissal \n23.1\tA player or official dismissed for misconduct is\ \ to take no further part in that \nmatch and is to move to and remain outside\ \ the Perimeter for the remainder of \nthe match.\n23.2\tThe dismissed player\ \ or official cannot be replaced and, in accordance with NTA \nDisciplinary Regulations,\ \ that player shall receive an automatic two (2) match \nsuspension. \n24  Drop-Off\ \ \n24.1\tShould a Winner be required in drawn matches, the following Drop-Off\ \ \nprocedure is used to determine a Winner.\n24.1.1\tEach Team will reduce their\ \ on-field Team to four (4) players and within \n60 seconds take up a position\ \ to restart play from the Halfway Line, \ndefending the same end of the field\ \ as at the End of Play.\n24.1.2\tThe Drop-Off commences with a Tap from the centre\ \ of the Halfway Line \nby the Team that did not commence the match with Possession.\n\ 24.1.3\tThe Drop-Off will commence with a two (2) minute period of extra time.\n\ 24.1.4\tShould a Team be leading at the expiration of the two (2) minute period\ \ \nof extra time then that Team will be declared the Winner and Match \ncomplete.\n\ 24.1.5\tShould neither Team be leading at the expiration of two (2) minutes, a\ \ \nsignal is given and the match will pause at the next Touch or Dead Ball. \n\ Each Team will then remove another player from the Field of Play.\n24.1.6\tThe\ \ Match will recommence immediately after the players have left the \nfield at\ \ the same place where it paused (i.e. the Team retains Possession \nat the designated\ \ number of Touches, or at Change of Possession due to \nsome Infringement or\ \ the sixth Touch) and the Match will continue until" - " without delay. While in a position at the end of the Field of Play, \nthe Team\ \ coach(s) or Team official must remain no closer than five (5) metres \nfrom\ \ the Dead Ball Line and must not coach or communicate (verbal or non-\nverbal)\ \ with either Team or the Referees.\n7  Commencement and Recommencement of Play\ \ \n7.1\tTeam captains are to toss a coin in the presence of the Referee(s) with\ \ the \nwinning captain’s Team having the choice of the direction the Team wishes\ \ \nto run in the first half; the choice of Interchange Areas for the duration\ \ of the \nmatch, including any extra time; and the choice of which team will\ \ commence \nthe match in Possession.\n7.2\tA player of the Attacking Team is\ \ to commence the match with a Tap at the \ncentre of the Halfway Line following\ \ the indication to commence play from the \nReferee.\n7.3\tFor the recommencement\ \ of play following a Half Time break, Teams shall \nchange directions and the\ \ Team not commencing the match with Possession is \nto re-start the match with\ \ a Tap.\n7.4\tFor the recommencement of play following the scoring of a Try,\ \ the Team \nagainst which the Try was conceded will recommence play with a Tap.\n\ 7.5\tA Tap may not be taken until all attacking Players are in an Onside position.\n\ Ruling = The Player will be directed to return to the Mark and to take the Tap\ \ again.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia\ \ 2020\n7\n7.6\tA Tap may not be taken until at least four (4) defending players\ \ are in an Onside \nposition or unless directed to so by the Referee. Where the\ \ number of players \non the field from the Defending Team falls below four (4),\ \ all players must be in \nan Onside position for a Tap to be taken unless directed\ \ to do so by the Referee.\nRuling = The Player will be directed to return to\ \ the Mark and to take the Tap again.\n7.7\tThe Tap to commence or recommence\ \ play must be performed without delay.\nRuling = A Penalty to the non-offending\ \ team at the centre of the Halfway line.\n8  Match Duration \n \n8.1\tA match\ \ is 40 minutes in duration, consisting of two (2) x 20 minute" - " Attacking Team from scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT\ \ © Touch Football Australia 2020\n15\n19  Advantage \n19.1\tWhere a Defending\ \ Team player is Offside at a Tap or Rollball and attempts \nto interfere with\ \ play, the Referee will allow Advantage or award a Penalty, \nwhichever is of\ \ greater Advantage to the Attacking Team.\n19.2\tShould the Attacking Team in\ \ the act of taking the Advantage subsequently \ninfringe, the Ruling on the initial\ \ Infringement will apply.\n20  Misconduct \n20.1\tMisconduct warranting Penalty,\ \ Forced Interchange, Sin Bin or Dismissal \nincludes:\n20.1.1\tContinuous or\ \ regular breaches of the Rules;\n20.1.2\tSwearing towards another player, Referee,\ \ spectator or other match \t\nofficial;\n20.1.3\tDisputing decisions of Referees\ \ or other match official(s);\n20.1.4\tUsing more than the necessary physical\ \ force to make a Touch;\n20.1.5\tPoor sportsmanship;\n20.1.6\tTripping, striking,\ \ or otherwise assaulting another player, Referee, \nspectator or other match\ \ official; or\n20.1.7\tAny other action that is contrary to the spirit of the\ \ game.\n21  Forced Interchange \n21.1\tWhere the Referee deems it necessary\ \ to implement a Forced Interchange \nfollowing an Infringement, the Referee is\ \ to stop the match, direct the ball to \nbe placed on the Mark, advise the offending\ \ player of the reason for the Forced \nInterchange, direct that player to return\ \ to the Interchange Area, display the \nrelevant signal and award a Penalty to\ \ the non-offending Team.\n22  Sin Bin \n22.1\tThe on-field Referee is required\ \ to indicate the commencement and the end of \nthe Sin Bin time.\n22.2\tAny player\ \ sent to the Sin Bin must stand in the Sin Bin Area at the opposition’s \nend\ \ of the Field of Play and on the same side as their Interchange Area. \n22.3\t\ Any player sent to the Sin Bin must return to the Interchange Area prior to re-\n\ entering the Field of Play.\n22.4\tAny action that causes the Touch Count to restart\ \ will result in a continuation of \nthat Possession. For the avoidance of" - source_sentence: Can National Touch Associations or their authorized competition providers adapt or alter the FIT Playing Rules for local competitions? sentences: - " related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction\ \ over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect\ \ the Field of Play, Line Markings and Markers prior to the \ncommencement of\ \ the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the\ \ Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n\ 25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count\ \ of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements\ \ against the Rules; and\n25.1.7\tReport to the relevant competition administration\ \ any Sin Bins, \nDismissals or injuries to any participant sustained during a\ \ Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision\ \ directly \nfrom the Referee. An approach may only be made during a break in\ \ play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n\ 18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL\ \ AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n\ 10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix\ \ 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football\ \ Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n" - "/or outside the boundaries of the Field of Play prior to the \nsubsequent Rollball.\n\ Dead Ball Line\nThe end boundaries of the Field of Play. There is one at each\ \ end of \nthe Field of Play. See Appendix 1.\nDefending Try Line\nThe line which\ \ a Team has to defend to prevent a Try.\nDefending Team\nThe Team without or\ \ which is losing Possession.\nDismissal\nWhen a player is sent from the Field\ \ of Play for the rest of the match.\nDrop-Off\nA procedure used to determine\ \ a Winner following equal scores at \nthe expiration of normal Duration.\nDuration\n\ The length of time a competition match lasts, which is normally \nforty-five minutes,\ \ inclusive of a five (5) minute Half Time.\nEnd of Play\nWhen the Referee indicates\ \ completion of the match.\nExclusion\nWhen a player is sent to the nearest Sin\ \ Bin Area following three (3) \npenalties by the Defending Team upon entering\ \ their Seven Metre \nZone. The player is counted as a player on the Field of\ \ Play and \ncannot be replaced or interchanged.\nFIT Playing Rules - 5th Edition\n\ COPYRIGHT © Touch Football Australia 2020\n1\nFIT \nFederation of International\ \ Touch\nField of Play\nThe playing area bounded by the Sidelines and Dead Ball\ \ Lines, both \nof which are out of bounds. See Appendix 1.\nForced Interchange\n\ When a player is required to undertake a compulsory Interchange for \nan Infringement\ \ ruled more serious than a Penalty but less serious \nthan a Permanent Interchange,\ \ Sin Bin or Dismissal.\nForward\nA position or direction towards the Dead Ball\ \ Line beyond the Team’s \nAttacking Try Line.\nFull Time\nThe expiration of the\ \ second period of time allowed for play.\nHalf\nThe player who takes Possession\ \ following a Rollball.\nHalf Time\nThe break in play between the two halves of\ \ a match.\nImminent\nAbout to occur, it is almost certain to occur.\nInfringement\n\ The action of a player contrary to the Rules of the game.\nIn-Goal Area\nThe area\ \ in the Field of Play bounded by the Sidelines, the Try Lines \nand the Dead\ \ Ball Lines. There are two (2), one (1) at each end of the \nField of Play. See\ \ Appendix 1.\nInterchange\nThe act of an on-field player leaving the Field of\ \ Play to be replaced \nby" - " of Play \nThe object of the game of Touch is for each Team to score Tries\ \ and to prevent the \nopposition from scoring. The ball may be passed, knocked\ \ or handed between players \nof the Attacking Team who may in turn run or otherwise\ \ move with the ball in an \nattempt to gain territorial Advantage and to score\ \ Tries. Defending players prevent \nthe Attacking Team from gaining a territorial\ \ Advantage by touching the ball carrier. \n1  The Field of Play \n \n1.1\t\ The Field of Play is rectangular in shape measuring 70 metres in length from \n\ Try Line to Try Line, excluding the In-Goal Areas and 50 metres in width from\ \ \nSideline to Sideline excluding the Interchange Areas.\n1.1.1\tVariations to\ \ the dimensions of the Field of Play may be made but must be \nincluded in relevant\ \ competition, event or tournament conditions\n1.2\tLine Markings should be 4cm\ \ in width but must be no less than 2.5cm. Line \nMarkings are to be laid out\ \ as shown in Appendix 1 - The Field of Play. Sidelines \nextend seven (7) metres\ \ beyond the Try Lines to join the Dead Ball Lines and \ndefine the In-Goal Areas\ \ which measure fifty (50) metres wide by seven (7) \nmetres in length.\n1.3\t\ The Interchange Areas are located no closer than one (1) metre from each \nSideline.\n\ 1.4\tSuitably sized markers, cones or corner posts of a distinguishing colour\ \ and \nmade from safe and pliable material should be positioned at the intersections\ \ of \nthe Sideline and Halfway line and the Sideline and the Try Line.\n1.4.1\t\ Markers, cones or corner posts placed on the junction of the Sideline and \nTry\ \ Line are deemed to be in the Field of Play.\n1.4.2\tAll other markers or cones\ \ are deemed to be out of the Field of Play.\n1.5\tThe standard playing surface\ \ is grass. Other surfaces including synthetic grass \nmay be used but shall be\ \ subject to NTA approved standards.\n1.6\tThe Field of Play boundary lines are\ \ deemed to be out of play. Play becomes \nDead when the ball or a player in Possession\ \ touches the ground on or over a \nSideline or Dead Ball Line.\n2  Player Registration\ \ \n2.1\tParticipating players are to be registered with a NTA or with" - source_sentence: Can an attacking player pass the ball into a defending player intentionally seeking a rebound or a restart of the Touch count? sentences: - " without delay.\nRuling = A Penalty to the non-offending team at the centre of\ \ the Halfway line.\n8  Match Duration \n \n8.1\tA match is 40 minutes in duration,\ \ consisting of two (2) x 20 minute halves with \na Half Time break.\n8.1.1\t\ There is no time off for injury during a match.\n8.2\tLocal competition and tournament\ \ conditions may vary the duration of a match.\n8.3\tWhen time expires, play is\ \ to continue until the next Touch or Dead Ball and End \nof Play is signaled\ \ by the Referee. \n8.3.1\tShould a Penalty be awarded during this period, the\ \ Penalty is to be taken.\n8.4\tIf a match is abandoned in any circumstances other\ \ than those referred to in \nclause 24.1.6 the NTA or NTA competition provider\ \ in its sole discretion shall \ndetermine the result of the match.\n9  Possession\ \ \n \n9.1\tThe Team with the ball is entitled to six (6) Touches prior to a\ \ Change of \nPossession. \n9.2\tOn the Change of Possession due to an intercept,\ \ the first Touch will be zero (0) \nTouch. \n9.3\tFollowing the sixth Touch or\ \ a loss of Possession due to any other means, the \nball must be returned to\ \ the Mark without delay.\nRuling = A deliberate delay in the changeover procedure\ \ will result in a Penalty awarded to \nthe non-offending Team ten (10) metres\ \ Forward of the Mark for the Change of Possession.\n9.4\tIf the ball is dropped\ \ or passed and goes to ground during play, a Change of \nPossession results.\n\ Ruling = The Mark for the Change of Possession is where the ball makes initial\ \ contact with \nthe ground.\n9.5\tIf the ball, while still under the control\ \ of the Half, contacts the ground in the In-\nGoal Area, Possession is lost.\n\ Ruling = Play will restart with a Rollball at the nearest point on the seven (7)\ \ metre line.\nFIT Playing Rules - 5th Edition\n8\nCOPYRIGHT © Touch Football\ \ Australia 2020\n9.6\tIf a player mishandles the ball and even if in an effort\ \ to gain control, the ball \nis accidentally knocked Forward into any other Player,\ \ a Change of Possession \nresults.\n10  The Touch \n10.1\tA Touch may be made" - " a player enters the Field of Play but does not impede the scoring of a Try the\ \ \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players\ \ may Interchange at will, without having to wait for the \nplayer to enter the\ \ Interchange Area, but must do so prior to the Tap being taken \nto recommence\ \ play.\n18  Penalty \n18.1\tThe Tap must be performed in accordance with the\ \ Definitions.\nRuling = The Referee will instruct the offending Team to return\ \ to the Mark and perform the \nTap again.\n18.2\tFor Infringements that occur\ \ between seven (7) metre lines, the Mark for the \nPenalty Tap is at the point\ \ of Infringement unless otherwise indicated by the \nReferee. \n18.3\tFor Infringements\ \ that occur within the Seven Metre Zone the Tap must be \ntaken at the nearest\ \ seven (7) metre line.\n18.4\tFor Infringements that occur beyond the Field of\ \ Play or in the In-Goal Area \nthe Mark is seven (7) metres infield from the\ \ Sideline, or directly Forward of \nthe Infringement on the seven (7) metre line\ \ nearest the Infringement or at a \nposition indicated by the Referee.\n18.5\t\ The Mark must be indicated by the Referee before a Penalty Tap is taken.\n18.6\t\ The Penalty Tap must be performed without delay after the Referee indicates \n\ the Mark.\nRuling = A Penalty to the non-offending team at the point of Infringement.\n\ 18.7\tA player may perform a Rollball instead of a Penalty Tap and the player\ \ who \nreceives the ball does not become the Half.\n18.8\tIf the Defending Team\ \ is penalised three (3) times upon entering their Seven \nMetre Zone during a\ \ single Possession, the last offending player will be given an \nExclusion until\ \ the end of that Possession.\n18.9\tA Penalty Try is awarded if any action by\ \ a player, Team official or spectator, \ndeemed by the Referee to be contrary\ \ to the Rules or spirit of the game clearly \nprevents the Attacking Team from\ \ scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football\ \ Australia 2020\n15\n19  Advantage \n19.1\tWhere a Defending Team player is\ \ Offside at a Tap or" - " related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction\ \ over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect\ \ the Field of Play, Line Markings and Markers prior to the \ncommencement of\ \ the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the\ \ Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n\ 25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count\ \ of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements\ \ against the Rules; and\n25.1.7\tReport to the relevant competition administration\ \ any Sin Bins, \nDismissals or injuries to any participant sustained during a\ \ Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision\ \ directly \nfrom the Referee. An approach may only be made during a break in\ \ play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n\ 18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL\ \ AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n\ 10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix\ \ 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football\ \ Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n" --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision a05860a77cef7b37e0048a7864658139bc18a854 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-pairs-cosine") # Run inference sentences = [ 'Can an attacking player pass the ball into a defending player intentionally seeking a rebound or a restart of the Touch count?', ' a player enters the Field of Play but does not impede the scoring of a Try the \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players may Interchange at will, without having to wait for the \nplayer to enter the Interchange Area, but must do so prior to the Tap being taken \nto recommence play.\n18\u2002 Penalty \n18.1\tThe Tap must be performed in accordance with the Definitions.\nRuling = The Referee will instruct the offending Team to return to the Mark and perform the \nTap again.\n18.2\tFor Infringements that occur between seven (7) metre lines, the Mark for the \nPenalty Tap is at the point of Infringement unless otherwise indicated by the \nReferee. \n18.3\tFor Infringements that occur within the Seven Metre Zone the Tap must be \ntaken at the nearest seven (7) metre line.\n18.4\tFor Infringements that occur beyond the Field of Play or in the In-Goal Area \nthe Mark is seven (7) metres infield from the Sideline, or directly Forward of \nthe Infringement on the seven (7) metre line nearest the Infringement or at a \nposition indicated by the Referee.\n18.5\tThe Mark must be indicated by the Referee before a Penalty Tap is taken.\n18.6\tThe Penalty Tap must be performed without delay after the Referee indicates \nthe Mark.\nRuling = A Penalty to the non-offending team at the point of Infringement.\n18.7\tA player may perform a Rollball instead of a Penalty Tap and the player who \nreceives the ball does not become the Half.\n18.8\tIf the Defending Team is penalised three (3) times upon entering their Seven \nMetre Zone during a single Possession, the last offending player will be given an \nExclusion until the end of that Possession.\n18.9\tA Penalty Try is awarded if any action by a player, Team official or spectator, \ndeemed by the Referee to be contrary to the Rules or spirit of the game clearly \nprevents the Attacking Team from scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n15\n19\u2002 Advantage \n19.1\tWhere a Defending Team player is Offside at a Tap or', ' without delay.\nRuling = A Penalty to the non-offending team at the centre of the Halfway line.\n8\u2002 Match Duration \n \n8.1\tA match is 40 minutes in duration, consisting of two (2) x 20 minute halves with \na Half Time break.\n8.1.1\tThere is no time off for injury during a match.\n8.2\tLocal competition and tournament conditions may vary the duration of a match.\n8.3\tWhen time expires, play is to continue until the next Touch or Dead Ball and End \nof Play is signaled by the Referee. \n8.3.1\tShould a Penalty be awarded during this period, the Penalty is to be taken.\n8.4\tIf a match is abandoned in any circumstances other than those referred to in \nclause 24.1.6 the NTA or NTA competition provider in its sole discretion shall \ndetermine the result of the match.\n9\u2002 Possession \n \n9.1\tThe Team with the ball is entitled to six (6) Touches prior to a Change of \nPossession. \n9.2\tOn the Change of Possession due to an intercept, the first Touch will be zero (0) \nTouch. \n9.3\tFollowing the sixth Touch or a loss of Possession due to any other means, the \nball must be returned to the Mark without delay.\nRuling = A deliberate delay in the changeover procedure will result in a Penalty awarded to \nthe non-offending Team ten (10) metres Forward of the Mark for the Change of Possession.\n9.4\tIf the ball is dropped or passed and goes to ground during play, a Change of \nPossession results.\nRuling = The Mark for the Change of Possession is where the ball makes initial contact with \nthe ground.\n9.5\tIf the ball, while still under the control of the Half, contacts the ground in the In-\nGoal Area, Possession is lost.\nRuling = Play will restart with a Rollball at the nearest point on the seven (7) metre line.\nFIT Playing Rules - 5th Edition\n8\nCOPYRIGHT © Touch Football Australia 2020\n9.6\tIf a player mishandles the ball and even if in an effort to gain control, the ball \nis accidentally knocked Forward into any other Player, a Change of Possession \nresults.\n10\u2002 The Touch \n10.1\tA Touch may be made', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 1e-05 - `num_train_epochs`: 1 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.3 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.3 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | |:------:|:----:|:-------------:|:------:| | 0.1364 | 6 | 0.1722 | 0.1476 | | 0.2727 | 12 | 0.1233 | 0.0820 | | 0.4091 | 18 | 0.0702 | 0.0579 | | 0.5455 | 24 | 0.0668 | 0.0593 | | 0.6818 | 30 | 0.0527 | 0.0575 | | 0.8182 | 36 | 0.0534 | 0.0561 | | 0.9545 | 42 | 0.0659 | 0.0556 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.17.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision a05860a77cef7b37e0048a7864658139bc18a854 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-pairs-cosine") # Run inference sentences = [ 'Can an attacking player pass the ball into a defending player intentionally seeking a rebound or a restart of the Touch count?', ' a player enters the Field of Play but does not impede the scoring of a Try the \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players may Interchange at will, without having to wait for the \nplayer to enter the Interchange Area, but must do so prior to the Tap being taken \nto recommence play.\n18\u2002 Penalty \n18.1\tThe Tap must be performed in accordance with the Definitions.\nRuling = The Referee will instruct the offending Team to return to the Mark and perform the \nTap again.\n18.2\tFor Infringements that occur between seven (7) metre lines, the Mark for the \nPenalty Tap is at the point of Infringement unless otherwise indicated by the \nReferee. \n18.3\tFor Infringements that occur within the Seven Metre Zone the Tap must be \ntaken at the nearest seven (7) metre line.\n18.4\tFor Infringements that occur beyond the Field of Play or in the In-Goal Area \nthe Mark is seven (7) metres infield from the Sideline, or directly Forward of \nthe Infringement on the seven (7) metre line nearest the Infringement or at a \nposition indicated by the Referee.\n18.5\tThe Mark must be indicated by the Referee before a Penalty Tap is taken.\n18.6\tThe Penalty Tap must be performed without delay after the Referee indicates \nthe Mark.\nRuling = A Penalty to the non-offending team at the point of Infringement.\n18.7\tA player may perform a Rollball instead of a Penalty Tap and the player who \nreceives the ball does not become the Half.\n18.8\tIf the Defending Team is penalised three (3) times upon entering their Seven \nMetre Zone during a single Possession, the last offending player will be given an \nExclusion until the end of that Possession.\n18.9\tA Penalty Try is awarded if any action by a player, Team official or spectator, \ndeemed by the Referee to be contrary to the Rules or spirit of the game clearly \nprevents the Attacking Team from scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n15\n19\u2002 Advantage \n19.1\tWhere a Defending Team player is Offside at a Tap or', ' without delay.\nRuling = A Penalty to the non-offending team at the centre of the Halfway line.\n8\u2002 Match Duration \n \n8.1\tA match is 40 minutes in duration, consisting of two (2) x 20 minute halves with \na Half Time break.\n8.1.1\tThere is no time off for injury during a match.\n8.2\tLocal competition and tournament conditions may vary the duration of a match.\n8.3\tWhen time expires, play is to continue until the next Touch or Dead Ball and End \nof Play is signaled by the Referee. \n8.3.1\tShould a Penalty be awarded during this period, the Penalty is to be taken.\n8.4\tIf a match is abandoned in any circumstances other than those referred to in \nclause 24.1.6 the NTA or NTA competition provider in its sole discretion shall \ndetermine the result of the match.\n9\u2002 Possession \n \n9.1\tThe Team with the ball is entitled to six (6) Touches prior to a Change of \nPossession. \n9.2\tOn the Change of Possession due to an intercept, the first Touch will be zero (0) \nTouch. \n9.3\tFollowing the sixth Touch or a loss of Possession due to any other means, the \nball must be returned to the Mark without delay.\nRuling = A deliberate delay in the changeover procedure will result in a Penalty awarded to \nthe non-offending Team ten (10) metres Forward of the Mark for the Change of Possession.\n9.4\tIf the ball is dropped or passed and goes to ground during play, a Change of \nPossession results.\nRuling = The Mark for the Change of Possession is where the ball makes initial contact with \nthe ground.\n9.5\tIf the ball, while still under the control of the Half, contacts the ground in the In-\nGoal Area, Possession is lost.\nRuling = Play will restart with a Rollball at the nearest point on the seven (7) metre line.\nFIT Playing Rules - 5th Edition\n8\nCOPYRIGHT © Touch Football Australia 2020\n9.6\tIf a player mishandles the ball and even if in an effort to gain control, the ball \nis accidentally knocked Forward into any other Player, a Change of Possession \nresults.\n10\u2002 The Touch \n10.1\tA Touch may be made', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 1e-05 - `num_train_epochs`: 1 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.3 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.3 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | |:------:|:----:|:-------------:|:------:| | 0.1364 | 6 | 0.1722 | 0.1476 | | 0.2727 | 12 | 0.1233 | 0.0820 | | 0.4091 | 18 | 0.0702 | 0.0579 | | 0.5455 | 24 | 0.0668 | 0.0593 | | 0.6818 | 30 | 0.0527 | 0.0575 | | 0.8182 | 36 | 0.0534 | 0.0561 | | 0.9545 | 42 | 0.0659 | 0.0556 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.17.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-MiniLM-L12-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1382", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "What is the definition of 'Advantage' in Touch Rugby International Rules?", "sentences": ["1\twhen a Change of Possession takes place due to a player in Possession \nmaking contact with the Sideline or any ground outside the Field of Play, \nprior to a Touch being made; or\n13.6.2\twhen the ball not in Possession of a player makes contact with the \nSideline or any ground outside the Field of Play.\n13.7\tA player may not perform a Tap in replacement of a Rollball.\nRuling = The offending Team must return to the Mark and perform the Rollball.\n13.8\tAn attacking player, other than the player performing the Rollball, may receive \nthe ball at the Rollball and shall do so without delay. That player is referred to as \nthe Half.\n13.9\tThe Half may control the ball with a foot prior to picking up the ball. \n13.10\tA player ceases to be the Half once the ball is passed to another player.\n13.11\tDefending players are not to interfere with the performance of the Rollball or the \nHalf. \nRuling = A Penalty to the Attacking Team at a point ten (10) metres directly Forward of the \nInfringement.\n13.12\tPlayers of the Defending Team must not move Forward of the Onside position \nuntil the Half has made contact with the ball, unless directed to do so by the \nReferee or in accordance with 13.12.1.\n13.12.1\tWhen the Half is not within one (1) metre of the Rollball, Onside players \nof the Defending Team may move Forward as soon as the player \nperforming the Rollball releases the ball. If the Half is not in position and \na defending player moves Forward and makes contact with the ball, a \nChange of Possession results.\n13.13\tIf in the act of performing the Rollball, the Attacking player makes contact with \nthe Sideline or any ground outside the Field of Play a Change of Possession will \noccur with the Rollball to be taken seven (7) metres in field.\n13.14\tAfter a Touch is made between the Dead Ball Line and the seven (7) metre line, \nan Attacking Team is permitted to Rollball on the seven (7) metre line at a point \ndirectly in line with where the Touch was made.\nFIT Playing Rules - 5th Edition\n12\nCOPYRIGHT © Touch Football Australia", " distance beyond the Defending Try Line.\nRuling = A Penalty to the Attacking Team at the point of the Infringement or on the seven (7) \nmetre line directly Forward of the Infringement.\n15.4\tWhen a Rollball occurs within Defending Team’s Seven Metre Zone or a Penalty \nTap within ten (10) metres of the Defending Team’s Try Line, all players from the \nDefending Team must have both feet on or behind their Try Line and no other \npart of the body in contact with the ground Forward of their Try Line.\nRuling = A Penalty to the Attacking Team at the seven (7) metre line directly Forward of the \npoint of the Infringement.\n15.5\tAfter effecting the Touch, the defending player must retire the required seven \n(7) metres or to the Defending Try Line as indicated by the Referee without \ninterfering with the Attacking Team.\nRuling = A Penalty to the Attacking Team ten (10) metres Forward of the Infringement or if \non the Defensive Try Line, on the seven (7) metre line.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n13\n16  Obstruction \n16.1\tA player in Possession must not run or otherwise move behind other attacking \nplayers or the Referee in an attempt to avoid an Imminent Touch.\nRuling = A Penalty to the non-offending Team at the point of the Infringement. \n16.2\tThe player in Possession is not to hold or otherwise impede a defending player \nin any way.\nRuling = A Penalty to the non-offending Team at the point of the Infringement. \n16.3\tAn attacking player in support of the player in Possession may move as \nnecessary to achieve a supporting position but must not grab, hold, push or \notherwise deliberately interfere with a defending player attempting to make a \nTouch\nRuling = A Penalty to the non-offending Team at the point of the Infringement or on the \nseven (7) metre line if the Infringement occurs in the In-Goal Area. \n16.4\tPlayers in the Defending Team may not obstruct or interfere with an attacking \nplayer.\nRuling = A Penalty to the non-offending Team at the point of the In", " without delay. While in a position at the end of the Field of Play, \nthe Team coach(s) or Team official must remain no closer than five (5) metres \nfrom the Dead Ball Line and must not coach or communicate (verbal or non-\nverbal) with either Team or the Referees.\n7  Commencement and Recommencement of Play \n7.1\tTeam captains are to toss a coin in the presence of the Referee(s) with the \nwinning captain’s Team having the choice of the direction the Team wishes \nto run in the first half; the choice of Interchange Areas for the duration of the \nmatch, including any extra time; and the choice of which team will commence \nthe match in Possession.\n7.2\tA player of the Attacking Team is to commence the match with a Tap at the \ncentre of the Halfway Line following the indication to commence play from the \nReferee.\n7.3\tFor the recommencement of play following a Half Time break, Teams shall \nchange directions and the Team not commencing the match with Possession is \nto re-start the match with a Tap.\n7.4\tFor the recommencement of play following the scoring of a Try, the Team \nagainst which the Try was conceded will recommence play with a Tap.\n7.5\tA Tap may not be taken until all attacking Players are in an Onside position.\nRuling = The Player will be directed to return to the Mark and to take the Tap again.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n7\n7.6\tA Tap may not be taken until at least four (4) defending players are in an Onside \nposition or unless directed to so by the Referee. Where the number of players \non the field from the Defending Team falls below four (4), all players must be in \nan Onside position for a Tap to be taken unless directed to do so by the Referee.\nRuling = The Player will be directed to return to the Mark and to take the Tap again.\n7.7\tThe Tap to commence or recommence play must be performed without delay.\nRuling = A Penalty to the non-offending team at the centre of the Halfway line.\n8  Match Duration \n \n8.1\tA match is 40 minutes in duration, consisting of two (2) x 20 minute"]}, {"source_sentence": "What is the term for the area in the Field of Play bounded by the Sidelines, the Try Lines and the Dead Ball Lines?", "sentences": [" related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect the Field of Play, Line Markings and Markers prior to the \ncommencement of the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements against the Rules; and\n25.1.7\tReport to the relevant competition administration any Sin Bins, \nDismissals or injuries to any participant sustained during a Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision directly \nfrom the Referee. An approach may only be made during a break in play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n", " Player\nThe player who replaces another player during Interchange. There is \na maximum of eight (8) substitute players in any Team and except \nwhen interchanging, in the Sin Bin, dismissed or on the Field of Play, \nthey must remain in the Substitution Box.\nTap and Tap Penalty\nThe method of commencing the match, recommencing the match \nafter Half Time and after a Try has been scored. The Tap is also the \nmethod of recommencing play when a Penalty is awarded. The Tap \nis taken by placing the ball on the ground at or behind the Mark, \nreleasing both hands from the ball, tapping the ball gently with either \nfoot or touching the foot on the ball. The ball must not roll or move \nmore than one (1) metre in any direction and must be retrieved \ncleanly, without touching the ground again. The player may face any \ndirection and use either foot. Provided it is at the Mark, the ball does \nnot have to be lifted from the ground prior to a Tap being taken.\nTeam\nA group of players constituting one (1) side in a competition match.\nTFA\nTouch Football Australia Limited\nTouch\nAny contact between the player in Possession and a defending \nplayer. A Touch includes contact on the ball, hair or clothing and may \nbe made by a defending player or by the player in Possession.\nTouch Count\nThe progressive number of Touches that each Team has before a \nChange of Possession, from zero (0) to six (6).\nTry\nThe result of any attacking player, except the Half, placing the ball on \nor over the Team’s Attacking Try Line before being Touched.\nTry Lines\nThe lines separating the In-Goal Areas from the Field of Play. See \nAppendix 1.\nVoluntary Rollball\nThe player in Possession performs a Rollball before a Touch is made \nwith a defending player.\nWing\nThe player outside the Link player.\nWinner\nThe Team that scores the most Tries during the match.\nFIT Playing Rules - 5th Edition\n4\nCOPYRIGHT © Touch Football Australia 2020\n Rules of Play \n Mode of Play \nThe object of the game of Touch is for each Team to score Tries and to prevent the \nopposition from scoring. The ball may be passed, knocked or handed between players \nof the Attacking Team who may in turn run", " item that may prove dangerous. \nAny jewellery or other items that cannot be removed are to be taped to the \nsatisfaction of the Referee.\n4.8\tLong (extend beyond the finger flesh when viewed from the palm) or sharp \nfingernails are not allowed.\n4.9\tReferees and players may wear spectacles or sunglasses provided they are safe \nand securely attached.\n4.10\tReferees and players may wear sport monitoring equipment and medical \nsupports such as knee or ankle braces provided, at the sole discretion of \ncompetition’s controlling body, the items are not dangerous.\n5  Team Composition \n5.1\tA Team consists of a maximum of 14 players, no more than six (6) of whom are \nallowed on the field at any time. \nFIT Playing Rules - 5th Edition\n6\nCOPYRIGHT © Touch Football Australia 2020\nRuling = A Penalty awarded to the non-offending Team at the time the offence is identified \nseven (7) metres infield on the Halfway Line or the position of the ball, whichever is the \ngreater Advantage.\n5.2\tA Team must have a minimum of four (4) players on the field for a match to \ncommence or continue, except during a Drop-Off.\n5.3\tWhere the number of players on the field from one Team falls below four (4), \nthe match is to be abandoned and the non-offending Team is to be declared the \nWinner.\n5.3.1\tThis does not apply for players sent to the Sin Bin Area.\n5.4\tIn mixed gender competitions, the maximum number of males allowed on the \nField of Play is three (3), the minimum male requirement is one (1) and the \nminimum female requirement is one (1).\n6  Team Coach and Team Officials \n6.1\tThe Team coach(s) and Team officials may be permitted inside the Perimeter \nbut shall be required to be positioned either in the Interchange Area or at the \nend of the Field of Play for the duration of the match.\n6.2\tThe Team coach(s) and Team officials may move from one position to the other \nbut shall do so without delay. While in a position at the end of the Field of Play, \nthe Team coach(s) or Team official must remain no closer than five (5) metres \nfrom the Dead Ball Line and must not coach or communicate (verbal"]}, {"source_sentence": "What is the consequence for a player or official who is dismissed for misconduct in a match?", "sentences": [" to the Sin Bin must return to the Interchange Area prior to re-\nentering the Field of Play.\n22.4\tAny action that causes the Touch Count to restart will result in a continuation of \nthat Possession. For the avoidance of doubt, should a defender knock the ball \ndown or give away a Penalty, this does not mean that the Possession has been \ncompleted, but rather the Possession continues. \nFIT Playing Rules - 5th Edition\n16\nCOPYRIGHT © Touch Football Australia 2020\n23  Dismissal \n23.1\tA player or official dismissed for misconduct is to take no further part in that \nmatch and is to move to and remain outside the Perimeter for the remainder of \nthe match.\n23.2\tThe dismissed player or official cannot be replaced and, in accordance with NTA \nDisciplinary Regulations, that player shall receive an automatic two (2) match \nsuspension. \n24  Drop-Off \n24.1\tShould a Winner be required in drawn matches, the following Drop-Off \nprocedure is used to determine a Winner.\n24.1.1\tEach Team will reduce their on-field Team to four (4) players and within \n60 seconds take up a position to restart play from the Halfway Line, \ndefending the same end of the field as at the End of Play.\n24.1.2\tThe Drop-Off commences with a Tap from the centre of the Halfway Line \nby the Team that did not commence the match with Possession.\n24.1.3\tThe Drop-Off will commence with a two (2) minute period of extra time.\n24.1.4\tShould a Team be leading at the expiration of the two (2) minute period \nof extra time then that Team will be declared the Winner and Match \ncomplete.\n24.1.5\tShould neither Team be leading at the expiration of two (2) minutes, a \nsignal is given and the match will pause at the next Touch or Dead Ball. \nEach Team will then remove another player from the Field of Play.\n24.1.6\tThe Match will recommence immediately after the players have left the \nfield at the same place where it paused (i.e. the Team retains Possession \nat the designated number of Touches, or at Change of Possession due to \nsome Infringement or the sixth Touch) and the Match will continue until", " without delay. While in a position at the end of the Field of Play, \nthe Team coach(s) or Team official must remain no closer than five (5) metres \nfrom the Dead Ball Line and must not coach or communicate (verbal or non-\nverbal) with either Team or the Referees.\n7  Commencement and Recommencement of Play \n7.1\tTeam captains are to toss a coin in the presence of the Referee(s) with the \nwinning captain’s Team having the choice of the direction the Team wishes \nto run in the first half; the choice of Interchange Areas for the duration of the \nmatch, including any extra time; and the choice of which team will commence \nthe match in Possession.\n7.2\tA player of the Attacking Team is to commence the match with a Tap at the \ncentre of the Halfway Line following the indication to commence play from the \nReferee.\n7.3\tFor the recommencement of play following a Half Time break, Teams shall \nchange directions and the Team not commencing the match with Possession is \nto re-start the match with a Tap.\n7.4\tFor the recommencement of play following the scoring of a Try, the Team \nagainst which the Try was conceded will recommence play with a Tap.\n7.5\tA Tap may not be taken until all attacking Players are in an Onside position.\nRuling = The Player will be directed to return to the Mark and to take the Tap again.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n7\n7.6\tA Tap may not be taken until at least four (4) defending players are in an Onside \nposition or unless directed to so by the Referee. Where the number of players \non the field from the Defending Team falls below four (4), all players must be in \nan Onside position for a Tap to be taken unless directed to do so by the Referee.\nRuling = The Player will be directed to return to the Mark and to take the Tap again.\n7.7\tThe Tap to commence or recommence play must be performed without delay.\nRuling = A Penalty to the non-offending team at the centre of the Halfway line.\n8  Match Duration \n \n8.1\tA match is 40 minutes in duration, consisting of two (2) x 20 minute", " Attacking Team from scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n15\n19  Advantage \n19.1\tWhere a Defending Team player is Offside at a Tap or Rollball and attempts \nto interfere with play, the Referee will allow Advantage or award a Penalty, \nwhichever is of greater Advantage to the Attacking Team.\n19.2\tShould the Attacking Team in the act of taking the Advantage subsequently \ninfringe, the Ruling on the initial Infringement will apply.\n20  Misconduct \n20.1\tMisconduct warranting Penalty, Forced Interchange, Sin Bin or Dismissal \nincludes:\n20.1.1\tContinuous or regular breaches of the Rules;\n20.1.2\tSwearing towards another player, Referee, spectator or other match \t\nofficial;\n20.1.3\tDisputing decisions of Referees or other match official(s);\n20.1.4\tUsing more than the necessary physical force to make a Touch;\n20.1.5\tPoor sportsmanship;\n20.1.6\tTripping, striking, or otherwise assaulting another player, Referee, \nspectator or other match official; or\n20.1.7\tAny other action that is contrary to the spirit of the game.\n21  Forced Interchange \n21.1\tWhere the Referee deems it necessary to implement a Forced Interchange \nfollowing an Infringement, the Referee is to stop the match, direct the ball to \nbe placed on the Mark, advise the offending player of the reason for the Forced \nInterchange, direct that player to return to the Interchange Area, display the \nrelevant signal and award a Penalty to the non-offending Team.\n22  Sin Bin \n22.1\tThe on-field Referee is required to indicate the commencement and the end of \nthe Sin Bin time.\n22.2\tAny player sent to the Sin Bin must stand in the Sin Bin Area at the opposition’s \nend of the Field of Play and on the same side as their Interchange Area. \n22.3\tAny player sent to the Sin Bin must return to the Interchange Area prior to re-\nentering the Field of Play.\n22.4\tAny action that causes the Touch Count to restart will result in a continuation of \nthat Possession. For the avoidance of"]}, {"source_sentence": "Can National Touch Associations or their authorized competition providers adapt or alter the FIT Playing Rules for local competitions?", "sentences": [" related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect the Field of Play, Line Markings and Markers prior to the \ncommencement of the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements against the Rules; and\n25.1.7\tReport to the relevant competition administration any Sin Bins, \nDismissals or injuries to any participant sustained during a Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision directly \nfrom the Referee. An approach may only be made during a break in play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n", "/or outside the boundaries of the Field of Play prior to the \nsubsequent Rollball.\nDead Ball Line\nThe end boundaries of the Field of Play. There is one at each end of \nthe Field of Play. See Appendix 1.\nDefending Try Line\nThe line which a Team has to defend to prevent a Try.\nDefending Team\nThe Team without or which is losing Possession.\nDismissal\nWhen a player is sent from the Field of Play for the rest of the match.\nDrop-Off\nA procedure used to determine a Winner following equal scores at \nthe expiration of normal Duration.\nDuration\nThe length of time a competition match lasts, which is normally \nforty-five minutes, inclusive of a five (5) minute Half Time.\nEnd of Play\nWhen the Referee indicates completion of the match.\nExclusion\nWhen a player is sent to the nearest Sin Bin Area following three (3) \npenalties by the Defending Team upon entering their Seven Metre \nZone. The player is counted as a player on the Field of Play and \ncannot be replaced or interchanged.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n1\nFIT \nFederation of International Touch\nField of Play\nThe playing area bounded by the Sidelines and Dead Ball Lines, both \nof which are out of bounds. See Appendix 1.\nForced Interchange\nWhen a player is required to undertake a compulsory Interchange for \nan Infringement ruled more serious than a Penalty but less serious \nthan a Permanent Interchange, Sin Bin or Dismissal.\nForward\nA position or direction towards the Dead Ball Line beyond the Team’s \nAttacking Try Line.\nFull Time\nThe expiration of the second period of time allowed for play.\nHalf\nThe player who takes Possession following a Rollball.\nHalf Time\nThe break in play between the two halves of a match.\nImminent\nAbout to occur, it is almost certain to occur.\nInfringement\nThe action of a player contrary to the Rules of the game.\nIn-Goal Area\nThe area in the Field of Play bounded by the Sidelines, the Try Lines \nand the Dead Ball Lines. There are two (2), one (1) at each end of the \nField of Play. See Appendix 1.\nInterchange\nThe act of an on-field player leaving the Field of Play to be replaced \nby", " of Play \nThe object of the game of Touch is for each Team to score Tries and to prevent the \nopposition from scoring. The ball may be passed, knocked or handed between players \nof the Attacking Team who may in turn run or otherwise move with the ball in an \nattempt to gain territorial Advantage and to score Tries. Defending players prevent \nthe Attacking Team from gaining a territorial Advantage by touching the ball carrier. \n1  The Field of Play \n \n1.1\tThe Field of Play is rectangular in shape measuring 70 metres in length from \nTry Line to Try Line, excluding the In-Goal Areas and 50 metres in width from \nSideline to Sideline excluding the Interchange Areas.\n1.1.1\tVariations to the dimensions of the Field of Play may be made but must be \nincluded in relevant competition, event or tournament conditions\n1.2\tLine Markings should be 4cm in width but must be no less than 2.5cm. Line \nMarkings are to be laid out as shown in Appendix 1 - The Field of Play. Sidelines \nextend seven (7) metres beyond the Try Lines to join the Dead Ball Lines and \ndefine the In-Goal Areas which measure fifty (50) metres wide by seven (7) \nmetres in length.\n1.3\tThe Interchange Areas are located no closer than one (1) metre from each \nSideline.\n1.4\tSuitably sized markers, cones or corner posts of a distinguishing colour and \nmade from safe and pliable material should be positioned at the intersections of \nthe Sideline and Halfway line and the Sideline and the Try Line.\n1.4.1\tMarkers, cones or corner posts placed on the junction of the Sideline and \nTry Line are deemed to be in the Field of Play.\n1.4.2\tAll other markers or cones are deemed to be out of the Field of Play.\n1.5\tThe standard playing surface is grass. Other surfaces including synthetic grass \nmay be used but shall be subject to NTA approved standards.\n1.6\tThe Field of Play boundary lines are deemed to be out of play. Play becomes \nDead when the ball or a player in Possession touches the ground on or over a \nSideline or Dead Ball Line.\n2  Player Registration \n2.1\tParticipating players are to be registered with a NTA or with"]}, {"source_sentence": "Can an attacking player pass the ball into a defending player intentionally seeking a rebound or a restart of the Touch count?", "sentences": [" without delay.\nRuling = A Penalty to the non-offending team at the centre of the Halfway line.\n8  Match Duration \n \n8.1\tA match is 40 minutes in duration, consisting of two (2) x 20 minute halves with \na Half Time break.\n8.1.1\tThere is no time off for injury during a match.\n8.2\tLocal competition and tournament conditions may vary the duration of a match.\n8.3\tWhen time expires, play is to continue until the next Touch or Dead Ball and End \nof Play is signaled by the Referee. \n8.3.1\tShould a Penalty be awarded during this period, the Penalty is to be taken.\n8.4\tIf a match is abandoned in any circumstances other than those referred to in \nclause 24.1.6 the NTA or NTA competition provider in its sole discretion shall \ndetermine the result of the match.\n9  Possession \n \n9.1\tThe Team with the ball is entitled to six (6) Touches prior to a Change of \nPossession. \n9.2\tOn the Change of Possession due to an intercept, the first Touch will be zero (0) \nTouch. \n9.3\tFollowing the sixth Touch or a loss of Possession due to any other means, the \nball must be returned to the Mark without delay.\nRuling = A deliberate delay in the changeover procedure will result in a Penalty awarded to \nthe non-offending Team ten (10) metres Forward of the Mark for the Change of Possession.\n9.4\tIf the ball is dropped or passed and goes to ground during play, a Change of \nPossession results.\nRuling = The Mark for the Change of Possession is where the ball makes initial contact with \nthe ground.\n9.5\tIf the ball, while still under the control of the Half, contacts the ground in the In-\nGoal Area, Possession is lost.\nRuling = Play will restart with a Rollball at the nearest point on the seven (7) metre line.\nFIT Playing Rules - 5th Edition\n8\nCOPYRIGHT © Touch Football Australia 2020\n9.6\tIf a player mishandles the ball and even if in an effort to gain control, the ball \nis accidentally knocked Forward into any other Player, a Change of Possession \nresults.\n10  The Touch \n10.1\tA Touch may be made", " a player enters the Field of Play but does not impede the scoring of a Try the \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players may Interchange at will, without having to wait for the \nplayer to enter the Interchange Area, but must do so prior to the Tap being taken \nto recommence play.\n18  Penalty \n18.1\tThe Tap must be performed in accordance with the Definitions.\nRuling = The Referee will instruct the offending Team to return to the Mark and perform the \nTap again.\n18.2\tFor Infringements that occur between seven (7) metre lines, the Mark for the \nPenalty Tap is at the point of Infringement unless otherwise indicated by the \nReferee. \n18.3\tFor Infringements that occur within the Seven Metre Zone the Tap must be \ntaken at the nearest seven (7) metre line.\n18.4\tFor Infringements that occur beyond the Field of Play or in the In-Goal Area \nthe Mark is seven (7) metres infield from the Sideline, or directly Forward of \nthe Infringement on the seven (7) metre line nearest the Infringement or at a \nposition indicated by the Referee.\n18.5\tThe Mark must be indicated by the Referee before a Penalty Tap is taken.\n18.6\tThe Penalty Tap must be performed without delay after the Referee indicates \nthe Mark.\nRuling = A Penalty to the non-offending team at the point of Infringement.\n18.7\tA player may perform a Rollball instead of a Penalty Tap and the player who \nreceives the ball does not become the Half.\n18.8\tIf the Defending Team is penalised three (3) times upon entering their Seven \nMetre Zone during a single Possession, the last offending player will be given an \nExclusion until the end of that Possession.\n18.9\tA Penalty Try is awarded if any action by a player, Team official or spectator, \ndeemed by the Referee to be contrary to the Rules or spirit of the game clearly \nprevents the Attacking Team from scoring a Try.\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n15\n19  Advantage \n19.1\tWhere a Defending Team player is Offside at a Tap or", " related matters inside the Perimeter \nfor the Duration of a match, has jurisdiction over all players, coaches and \nofficials and is required to:\n25.1.1\tInspect the Field of Play, Line Markings and Markers prior to the \ncommencement of the Match to ensure the safety of all participants.\n25.1.2\tAdjudicate on the Rules of the game;\n25.1.3\tImpose any sanction necessary to control the match;\n25.1.4\tAward Tries and record the progressive score;\n25.1.5\tMaintain a count of Touches during each Possession;\n25.1.6\tAward Penalties for Infringements against the Rules; and\n25.1.7\tReport to the relevant competition administration any Sin Bins, \nDismissals or injuries to any participant sustained during a Match.\n25.2\tOnly Team captains are permitted to seek clarification of a decision directly \nfrom the Referee. An approach may only be made during a break in play or at \nthe discretion of the Referee.\nFIT Playing Rules - 5th Edition\n18\nCOPYRIGHT © Touch Football Australia 2020\nHALFWAY LINE\nSIN BIN AREAS\nIN-GOAL AREA\nTRY LINE\n7 M ZONE\nDEAD BALL LINE\nPERIMETER\nINTERCHANGE\nAREA\n20M\n10M\n10M\n1M\n5M\n7 M\n7 M\n7 M\n7 M\n50M\n3M\n70M\nINTERCHANGE\nAREA\n Appendix 1 – Field of Play\nFIT Playing Rules - 5th Edition\nCOPYRIGHT © Touch Football Australia 2020\n19\nFEDERATION OF INTERNATIONAL TOUCH\n"]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,693
RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-10-08T11:41:42Z
2024-10-08T14:36:31+00:00
25
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-it-translation-sharegpt-en-ko - GGUF - Model creator: https://huggingface.co/nayohan/ - Original model: https://huggingface.co/nayohan/llama3-8b-it-translation-sharegpt-en-ko/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-it-translation-sharegpt-en-ko.Q2_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_1.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_1.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q6_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q8_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en - ko license: llama3 library_name: transformers tags: - translation - enko - ko base_model: - meta-llama/Meta-Llama-3-8B-Instruct datasets: - squarelike/sharegpt_deepl_ko_translation pipeline_tag: text-generation --- # **Introduction** This model was trained to translate a sentence from English to Korean using the 486k dataset from [squarelike/sharegpt_deepl_ko_translation](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-1.2m). ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "nayohan/llama3-8b-it-translation-sharegpt-en-ko" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.bfloat16 ) ``` ### **Generating Text** This model supports translation from English to Korean. To generate text, use the following Python code: ```python system_prompt="당신은 번역기 입니다. 영어를 한국어로 번역하세요." sentence = "The aerospace industry is a flower in the field of technology and science." conversation = [{'role': 'system', 'content': system_prompt}, {'role': 'user', 'content': sentence}] inputs = tokenizer.apply_chat_template( conversation, tokenize=True, add_generation_prompt=True, return_tensors='pt' ).to("cuda") outputs = model.generate(inputs, max_new_tokens=256) print(tokenizer.decode(outputs[0][len(inputs[0]):])) ``` ``` # Result # INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nActs as a translator. Translate en sentences into ko sentences in colloquial style.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThe aerospace industry is a flower in the field of technology and science.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n # OUTPUT: 항공우주 산업은 기술과 과학 분야의 꽃입니다.<|eot_id|> # INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n당신은 번역기 입니다. 영어를 한국어로 번역하세요.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n Technical and basic sciences are very important in terms of research. It has a significant impact on the industrial development of a country. Government policies control the research budget.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n # OUTPUT: 기술 및 기초 과학은 연구 측면에서 매우 중요합니다. 이는 한 국가의 산업 발전에 큰 영향을 미칩니다. 정부 정책에 따라 연구 예산이 결정됩니다.<|eot_id|> ``` ### **Citation** ```bibtex @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` Our trainig code can be found here: [TBD]
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3-8b-it-translation-sharegpt-en-ko - GGUF - Model creator: https://huggingface.co/nayohan/ - Original model: https://huggingface.co/nayohan/llama3-8b-it-translation-sharegpt-en-ko/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3-8b-it-translation-sharegpt-en-ko.Q2_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q4_1.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q5_1.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q6_K.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3-8b-it-translation-sharegpt-en-ko.Q8_0.gguf](https://huggingface.co/RichardErkhov/nayohan_-_llama3-8b-it-translation-sharegpt-en-ko-gguf/blob/main/llama3-8b-it-translation-sharegpt-en-ko.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- language: - en - ko license: llama3 library_name: transformers tags: - translation - enko - ko base_model: - meta-llama/Meta-Llama-3-8B-Instruct datasets: - squarelike/sharegpt_deepl_ko_translation pipeline_tag: text-generation --- # **Introduction** This model was trained to translate a sentence from English to Korean using the 486k dataset from [squarelike/sharegpt_deepl_ko_translation](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-1.2m). ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "nayohan/llama3-8b-it-translation-sharegpt-en-ko" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.bfloat16 ) ``` ### **Generating Text** This model supports translation from English to Korean. To generate text, use the following Python code: ```python system_prompt="당신은 번역기 입니다. 영어를 한국어로 번역하세요." sentence = "The aerospace industry is a flower in the field of technology and science." conversation = [{'role': 'system', 'content': system_prompt}, {'role': 'user', 'content': sentence}] inputs = tokenizer.apply_chat_template( conversation, tokenize=True, add_generation_prompt=True, return_tensors='pt' ).to("cuda") outputs = model.generate(inputs, max_new_tokens=256) print(tokenizer.decode(outputs[0][len(inputs[0]):])) ``` ``` # Result # INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nActs as a translator. Translate en sentences into ko sentences in colloquial style.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThe aerospace industry is a flower in the field of technology and science.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n # OUTPUT: 항공우주 산업은 기술과 과학 분야의 꽃입니다.<|eot_id|> # INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n당신은 번역기 입니다. 영어를 한국어로 번역하세요.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n Technical and basic sciences are very important in terms of research. It has a significant impact on the industrial development of a country. Government policies control the research budget.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n # OUTPUT: 기술 및 기초 과학은 연구 측면에서 매우 중요합니다. 이는 한 국가의 산업 발전에 큰 영향을 미칩니다. 정부 정책에 따라 연구 예산이 결정됩니다.<|eot_id|> ``` ### **Citation** ```bibtex @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` Our trainig code can be found here: [TBD]
{}
task
[ "TRANSLATION" ]
42,694
google/paligemma2-10b-pt-896
google
image-text-to-text
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:2407.07726", "arxiv:2408.00118", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2412.03555", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:1908.04913", "arxiv:1906.02467", "arxiv:2203.10244", "arxiv:2205.12522", "arxiv:2104.12756", "arxiv:1608.00272", "arxiv:1511.02283", "arxiv:1905.13648", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:1810.12440", "arxiv:1904.03493", "arxiv:2010.04295", "arxiv:1511.09207", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-21T20:18:21Z
2024-12-05T12:43:03+00:00
955
31
--- library_name: transformers license: gemma pipeline_tag: image-text-to-text extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma 2 model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 2 10B weights, pre-trained with 896*896 input images and 512 token input/output text sequences. The model is available in the `bfloat16` format for fine-tuning. **Resources and technical documentation:** * [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) **Terms of Use:** [Terms](https://ai.google.dev/gemma/terms) **Authors:** Google ## Model information ### Model summary PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726) vision-language model (VLM) which incorporates the capabilities of the [Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma 2 is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929). The text decoder is initialized from [Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). Similar to the original PaliGemma model, PaliGemma 2 is trained following the [PaLI-3](https://arxiv.org/abs/2310.09199) recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. #### Citation ```none @article{ title={PaliGemma 2: A Family of Versatile VLMs for Transfer}, author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai}, year={2024}, journal={arXiv preprint arXiv:2412.03555} } ``` ### Model data #### Pre-train datasets PaliGemma 2 is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, and multilinguality. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the [Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2). #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using the [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Use in Transformers The following snippet uses model `google/paligemma2-10b-pt-896` for reference purposes. It is a base model and is recommended to use after fine tuning it on a downstream task. Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb) that showcases fine-tuning PaliGemma 2. ```python from transformers import ( PaliGemmaProcessor, PaliGemmaForConditionalGeneration, ) from transformers.image_utils import load_image import torch model_id = "google/paligemma2-10b-pt-896" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" image = load_image(url) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval() processor = PaliGemmaProcessor.from_pretrained(model_id) # Leaving the prompt blank for pre-trained models prompt = "" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was completed using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma 2 fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### PaliGemma 2 results by model resolution and size | Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B | |-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:| | [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 | | [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 | | [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 | | [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - | | [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - | | [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 | | [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 | | [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 | | [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 | | [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 | | [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 | | [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 | | [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 | | [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 | | [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 | | [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - | | [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - | | [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - | | [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 | | [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 | | [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 | | [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 | | [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 | | [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 | | [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 | | [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 | | [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 | | [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 | | [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 | | [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 | | [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 | | [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 | | [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 | | [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 | | [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 | | [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 | | [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 | | [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 | | [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 | | [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 | | [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 | | [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - | | [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 | | [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 | | [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 | | [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 | | [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 | | [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 | #### Additional Benchmarks **[ICDAR 2015 Incidental][icdar2015-inc]** | Model | Precision | Recall | F1 | |-----------------|-----------|:------:|:-----:| | PaliGemma 2 3B | 81.88 | 70.73 | 75.9 | **[Total-Text][total-text]** | Model | Precision | Recall | F1 | |-----------------|-----------|:------:|:-----:| | PaliGemma 2 3B | 73.8. | 74.54 | 74.17 | **[FinTabNet][fintabnet]** | Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con | |-----------------|--------|-------|-----------|-----------| | PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 | **[PubTabNet][pubtabnet]** | Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con | |-----------------|--------|-------|-----------|-----------| | PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 | **[GrandStaff][grandstaff]** | Model | CER | LER | SER | |-----------------|-----|-----|-----| | PaliGemma 2 3B | 1.6 | 6.7 | 2.3 | **[PubChem][pubchem]** * PaliGemma 2 3B, Full Match: 94.8 **[DOCCI][docci]** | Model | avg#char | avg#sent | NES % | |-----------------|----------|----------|---------| | PaliGemma 2 3B | 529 | 7.74 | 28.42 | | PaliGemma 2 10B | 521 | 7.45 | 20.27 | - *avg#char*: Average number of characters - *avg#sent*: Average number of sentences - *NES*: Non entailment sentences **[MIMIC-CXR][mimic-cxr]** | Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 | |-----------------|-------|-------|---------|-------------| | PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% | | PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% | **[Visual Spatial Reasoning][vsr]** | Model | VSR zeroshot split (test) | VSR random split (test) | |-----------------|---------------------------|--------------------------| | PaliGemma 2 3B | 0.75 | 0.82 | | PaliGemma 2 10B | 0.80 | 0.87 | ## Ethics and safety ### Evaluation approach Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tr> <col> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <th>Metric</th> <th colspan="3" scope="colgroup">Perceived gender</th> <th colspan="3" scope="colgroup">Ethnicity</th> <th colspan="3" scope="colgroup">Age group</th> </tr> <tr> <th>Model size</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Maximum</th> </tr> <tr> <td>Toxicity</td> <td>0.14%</td> <td>0.15%</td> <td>0.19%</td> <td>0.29%</td> <td>0.39%</td> <td>0.39%</td> <td>0.26%</td> <td>0.18%</td> <td>0.32%</td> </tr> <tr> <td>Identity Attack</td> <td>0.04%</td> <td>0.02%</td> <td>0.02%</td> <td>0.13%</td> <td>0.06%</td> <td>0.06%</td> <td>0.06%</td> <td>0.03%</td> <td>0.06%</td> </tr> <tr> <td>Insult</td> <td>0.17%</td> <td>0.25%</td> <td>0.17%</td> <td>0.37%</td> <td>0.52%</td> <td>0.52%</td> <td>0.27%</td> <td>0.39%</td> <td>0.24%</td> </tr> <tr> <td>Threat</td> <td>0.55%</td> <td>0.43%</td> <td>0.57%</td> <td>0.83%</td> <td>0.48%</td> <td>0.48%</td> <td>0.64%</td> <td>0.43%</td> <td>0.64%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Median</th> </tr> <tr> <td>Toxicity</td> <td>0.13%</td> <td>0.10%</td> <td>0.18%</td> <td>0.07%</td> <td>0.07%</td> <td>0.14%</td> <td>0.12%</td> <td>0.08%</td> <td>0.12%</td> </tr> <tr> <td>Identity Attack</td> <td>0.02%</td> <td>0.01%</td> <td>0.02%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.15%</td> <td>0.23%</td> <td>0.14%</td> <td>0.14%</td> <td>0.17%</td> <td>0.13%</td> <td>0.09%</td> <td>0.18%</td> <td>0.16%</td> </tr> <tr> <td>Threat</td> <td>0.35%</td> <td>0.27%</td> <td>0.41%</td> <td>0.28%</td> <td>0.19%</td> <td>0.42%</td> <td>0.27%</td> <td>0.31%</td> <td>0.40%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma 2 models still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma 2 was designed first and foremost to serve as a general pre-trained model for fine-tuning to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for general purpose use. * PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round of image and text input. [ai2d]: https://allenai.org/data/diagrams [aokvqa-da]: https://allenai.org/project/a-okvqa/home [aokvqa-mc]: https://allenai.org/project/a-okvqa/home [anet-cap]: https://paperswithcode.com/dataset/activitynet-captions [anet-qa]: https://arxiv.org/abs/1906.02467 [chartqa]: https://arxiv.org/abs/2203.10244 [coco-35l]: https://arxiv.org/pdf/2205.12522 [coco-cap]: https://cocodataset.org/#home [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ [docvqa]: https://www.docvqa.org/ [gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html [info-vqa]: https://arxiv.org/abs/2104.12756 [marvl]: https://marvl-challenge.github.io/ [msrvtt]: https://paperswithcode.com/dataset/msr-vtt [msvd-qa]: https://paperswithcode.com/dataset/msvd-qa [nlvr2]: https://lil.nlp.cornell.edu/nlvr/ [nocaps]: https://nocaps.org/ [ocr-vqa]: https://ocr-vqa.github.io/ [okvqa]: https://okvqa.allenai.org/ [refcoco]: https://arxiv.org/abs/1608.00272 [refcoco+]: https://aclanthology.org/D14-1086 [refcocog]: https://arxiv.org/abs/1511.02283 [rsvqa-hr]: https://zenodo.org/records/6344367 [rsvqa-lr]: https://zenodo.org/records/6344334 [st-vqa]: https://arxiv.org/abs/1905.13648 [scicap]: https://arxiv.org/abs/2110.11624 [scienceqa]: https://scienceqa.github.io/ [screen2words]: https://arxiv.org/abs/2108.03353 [tallyqa]: https://arxiv.org/abs/1810.12440 [textcaps]: https://textvqa.org/textcaps/ [textvqa]: https://textvqa.org/ [vatex]: https://arxiv.org/abs/1904.03493 [vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/ [widgetcap]: https://arxiv.org/abs/2010.04295 [vqav2]: https://visualqa.org/index.html [xgqa]: https://aclanthology.org/2022.findings-acl.196/ [xm3600]: https://arxiv.org/pdf/2205.12522 [icdar2015-inc]: https://arxiv.org/abs/1511.09207 [total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene [fintabnet]: https://developer.ibm.com/data/fintabnet/ [pubtabnet]: https://paperswithcode.com/dataset/pubtabnet [grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z [pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/ [docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/ [mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr [vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
null
Non_BioNLP
# PaliGemma 2 model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 2 10B weights, pre-trained with 896*896 input images and 512 token input/output text sequences. The model is available in the `bfloat16` format for fine-tuning. **Resources and technical documentation:** * [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) **Terms of Use:** [Terms](https://ai.google.dev/gemma/terms) **Authors:** Google ## Model information ### Model summary PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726) vision-language model (VLM) which incorporates the capabilities of the [Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma 2 is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929). The text decoder is initialized from [Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). Similar to the original PaliGemma model, PaliGemma 2 is trained following the [PaLI-3](https://arxiv.org/abs/2310.09199) recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. #### Citation ```none @article{ title={PaliGemma 2: A Family of Versatile VLMs for Transfer}, author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai}, year={2024}, journal={arXiv preprint arXiv:2412.03555} } ``` ### Model data #### Pre-train datasets PaliGemma 2 is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, and multilinguality. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the [Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2). #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using the [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## Use in Transformers The following snippet uses model `google/paligemma2-10b-pt-896` for reference purposes. It is a base model and is recommended to use after fine tuning it on a downstream task. Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb) that showcases fine-tuning PaliGemma 2. ```python from transformers import ( PaliGemmaProcessor, PaliGemmaForConditionalGeneration, ) from transformers.image_utils import load_image import torch model_id = "google/paligemma2-10b-pt-896" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" image = load_image(url) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval() processor = PaliGemmaProcessor.from_pretrained(model_id) # Leaving the prompt blank for pre-trained models prompt = "" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was completed using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma 2 fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### PaliGemma 2 results by model resolution and size | Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B | |-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:| | [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 | | [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 | | [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 | | [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - | | [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - | | [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 | | [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 | | [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 | | [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 | | [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 | | [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 | | [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 | | [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 | | [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 | | [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 | | [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - | | [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - | | [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - | | [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 | | [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 | | [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 | | [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 | | [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 | | [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 | | [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 | | [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 | | [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 | | [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 | | [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 | | [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 | | [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 | | [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 | | [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 | | [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 | | [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 | | [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 | | [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 | | [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 | | [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 | | [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 | | [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 | | [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - | | [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 | | [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 | | [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 | | [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 | | [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 | | [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 | #### Additional Benchmarks **[ICDAR 2015 Incidental][icdar2015-inc]** | Model | Precision | Recall | F1 | |-----------------|-----------|:------:|:-----:| | PaliGemma 2 3B | 81.88 | 70.73 | 75.9 | **[Total-Text][total-text]** | Model | Precision | Recall | F1 | |-----------------|-----------|:------:|:-----:| | PaliGemma 2 3B | 73.8. | 74.54 | 74.17 | **[FinTabNet][fintabnet]** | Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con | |-----------------|--------|-------|-----------|-----------| | PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 | **[PubTabNet][pubtabnet]** | Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con | |-----------------|--------|-------|-----------|-----------| | PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 | **[GrandStaff][grandstaff]** | Model | CER | LER | SER | |-----------------|-----|-----|-----| | PaliGemma 2 3B | 1.6 | 6.7 | 2.3 | **[PubChem][pubchem]** * PaliGemma 2 3B, Full Match: 94.8 **[DOCCI][docci]** | Model | avg#char | avg#sent | NES % | |-----------------|----------|----------|---------| | PaliGemma 2 3B | 529 | 7.74 | 28.42 | | PaliGemma 2 10B | 521 | 7.45 | 20.27 | - *avg#char*: Average number of characters - *avg#sent*: Average number of sentences - *NES*: Non entailment sentences **[MIMIC-CXR][mimic-cxr]** | Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 | |-----------------|-------|-------|---------|-------------| | PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% | | PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% | **[Visual Spatial Reasoning][vsr]** | Model | VSR zeroshot split (test) | VSR random split (test) | |-----------------|---------------------------|--------------------------| | PaliGemma 2 3B | 0.75 | 0.82 | | PaliGemma 2 10B | 0.80 | 0.87 | ## Ethics and safety ### Evaluation approach Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tr> <col> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <th>Metric</th> <th colspan="3" scope="colgroup">Perceived gender</th> <th colspan="3" scope="colgroup">Ethnicity</th> <th colspan="3" scope="colgroup">Age group</th> </tr> <tr> <th>Model size</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Maximum</th> </tr> <tr> <td>Toxicity</td> <td>0.14%</td> <td>0.15%</td> <td>0.19%</td> <td>0.29%</td> <td>0.39%</td> <td>0.39%</td> <td>0.26%</td> <td>0.18%</td> <td>0.32%</td> </tr> <tr> <td>Identity Attack</td> <td>0.04%</td> <td>0.02%</td> <td>0.02%</td> <td>0.13%</td> <td>0.06%</td> <td>0.06%</td> <td>0.06%</td> <td>0.03%</td> <td>0.06%</td> </tr> <tr> <td>Insult</td> <td>0.17%</td> <td>0.25%</td> <td>0.17%</td> <td>0.37%</td> <td>0.52%</td> <td>0.52%</td> <td>0.27%</td> <td>0.39%</td> <td>0.24%</td> </tr> <tr> <td>Threat</td> <td>0.55%</td> <td>0.43%</td> <td>0.57%</td> <td>0.83%</td> <td>0.48%</td> <td>0.48%</td> <td>0.64%</td> <td>0.43%</td> <td>0.64%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Median</th> </tr> <tr> <td>Toxicity</td> <td>0.13%</td> <td>0.10%</td> <td>0.18%</td> <td>0.07%</td> <td>0.07%</td> <td>0.14%</td> <td>0.12%</td> <td>0.08%</td> <td>0.12%</td> </tr> <tr> <td>Identity Attack</td> <td>0.02%</td> <td>0.01%</td> <td>0.02%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.15%</td> <td>0.23%</td> <td>0.14%</td> <td>0.14%</td> <td>0.17%</td> <td>0.13%</td> <td>0.09%</td> <td>0.18%</td> <td>0.16%</td> </tr> <tr> <td>Threat</td> <td>0.35%</td> <td>0.27%</td> <td>0.41%</td> <td>0.28%</td> <td>0.19%</td> <td>0.42%</td> <td>0.27%</td> <td>0.31%</td> <td>0.40%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma 2 models still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma 2 was designed first and foremost to serve as a general pre-trained model for fine-tuning to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for general purpose use. * PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round of image and text input. [ai2d]: https://allenai.org/data/diagrams [aokvqa-da]: https://allenai.org/project/a-okvqa/home [aokvqa-mc]: https://allenai.org/project/a-okvqa/home [anet-cap]: https://paperswithcode.com/dataset/activitynet-captions [anet-qa]: https://arxiv.org/abs/1906.02467 [chartqa]: https://arxiv.org/abs/2203.10244 [coco-35l]: https://arxiv.org/pdf/2205.12522 [coco-cap]: https://cocodataset.org/#home [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ [docvqa]: https://www.docvqa.org/ [gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html [info-vqa]: https://arxiv.org/abs/2104.12756 [marvl]: https://marvl-challenge.github.io/ [msrvtt]: https://paperswithcode.com/dataset/msr-vtt [msvd-qa]: https://paperswithcode.com/dataset/msvd-qa [nlvr2]: https://lil.nlp.cornell.edu/nlvr/ [nocaps]: https://nocaps.org/ [ocr-vqa]: https://ocr-vqa.github.io/ [okvqa]: https://okvqa.allenai.org/ [refcoco]: https://arxiv.org/abs/1608.00272 [refcoco+]: https://aclanthology.org/D14-1086 [refcocog]: https://arxiv.org/abs/1511.02283 [rsvqa-hr]: https://zenodo.org/records/6344367 [rsvqa-lr]: https://zenodo.org/records/6344334 [st-vqa]: https://arxiv.org/abs/1905.13648 [scicap]: https://arxiv.org/abs/2110.11624 [scienceqa]: https://scienceqa.github.io/ [screen2words]: https://arxiv.org/abs/2108.03353 [tallyqa]: https://arxiv.org/abs/1810.12440 [textcaps]: https://textvqa.org/textcaps/ [textvqa]: https://textvqa.org/ [vatex]: https://arxiv.org/abs/1904.03493 [vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/ [widgetcap]: https://arxiv.org/abs/2010.04295 [vqav2]: https://visualqa.org/index.html [xgqa]: https://aclanthology.org/2022.findings-acl.196/ [xm3600]: https://arxiv.org/pdf/2205.12522 [icdar2015-inc]: https://arxiv.org/abs/1511.09207 [total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene [fintabnet]: https://developer.ibm.com/data/fintabnet/ [pubtabnet]: https://paperswithcode.com/dataset/pubtabnet [grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z [pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/ [docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/ [mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr [vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "image-text-to-text", "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
42,695
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_mnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-03T14:40:39Z
2023-02-11T11:09:27+00:00
14
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: mobilebert_sa_GLUE_Experiment_data_aug_mnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - type: accuracy value: 0.609947111472742 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_mnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9046 - Accuracy: 0.6099 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.8429 | 1.0 | 62880 | 0.8755 | 0.6185 | | 0.6713 | 2.0 | 125760 | 0.9512 | 0.6039 | | 0.5387 | 3.0 | 188640 | 1.0796 | 0.5978 | | 0.4297 | 4.0 | 251520 | 1.1877 | 0.5961 | | 0.3405 | 5.0 | 314400 | 1.3154 | 0.5895 | | 0.2693 | 6.0 | 377280 | 1.4320 | 0.5798 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_mnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9046 - Accuracy: 0.6099 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.8429 | 1.0 | 62880 | 0.8755 | 0.6185 | | 0.6713 | 2.0 | 125760 | 0.9512 | 0.6039 | | 0.5387 | 3.0 | 188640 | 1.0796 | 0.5978 | | 0.4297 | 4.0 | 251520 | 1.1877 | 0.5961 | | 0.3405 | 5.0 | 314400 | 1.3154 | 0.5895 | | 0.2693 | 6.0 | 377280 | 1.4320 | 0.5798 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_data_aug_mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.609947111472742, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,696
ptpedroVortal/nomic_vortal_v3.3
ptpedroVortal
sentence-similarity
[ "sentence-transformers", "safetensors", "nomic_bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:222", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:nomic-ai/nomic-embed-text-v1", "base_model:finetune:nomic-ai/nomic-embed-text-v1", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-01-07T18:29:57Z
2025-01-07T18:30:58+00:00
7
0
--- base_model: nomic-ai/nomic-embed-text-v1 library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:222 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Collect the details that are associated with Lot 1 product '' 'IGE', with quantity 7280, unit UND sentences: - Lote 12\nquatro euros e noventa e cinco cêntimos\n3\n3Teks/Turquia\nRef. 101.001.SE - ' ImmunoCAP IgE Total Anti-IgE Immunocap Caneta 7280 455 18,70 dezoito euros e setenta cêntimos 1,168750 16 14-4509-01 8.508,50\n2\n2 122060282Conforme \nalergénioImmunoCAP Alergénio Caneta 20800 1300 30,00 trinta euros 1,875000 16 .... 39.000,00\n2\n3 122060282Conforme \nalergénioImmunoCAP Alergénio Componente\nMolecularCaneta 2000 200 73,49setenta e três euros e quarenta e \nnove cêntimos 7,349000 10 .... 14.698,00\n2\n4 122060282Conforme \nalergénioImmunoCAP Alergénio Caneta 8100 810 22,05 vinte e dois euros e cinco cêntimos 2,205000 10 .... 17.860,50' - Solução Proposta 5 565,00 €\nhw 5 565,00 €\nTotal sem IVA 5 565,00 €\nValor do IVA (23 %) 1 279,95 €\nTotal com IVA (23 %) 6 844,95 €MIL DUZENTOS E SETENTA E NOVE EUROS E NOVENTA E CINCO CENTIMOS\nSEIS MIL OITOCENTOS E QUARENTA E QUATRO EUROS E NOVENTA E CINCO\nCENTIMOSRESUMO ECONÓMICO\nDescrição Valor\nCINCO MIL QUINHENTOS E SESSENTA E CINCO EUROS\nProposta FP-5-171.xlsxPágina 4 de 5\n30/01/2023\nLenovo - Monitores 5 565,00 €\nT27i-10(A18270FT0)-27inch Monitor(HDMI)\nStock a data de hoje 20 Unds + Previsão 4 \nsemanas30 185,50 € 5 565,00 €\n - source_sentence: Collect the details that are associated with Lot 4 product 'Os diversos equipamentos e eventos a realizar e de gestão do Município de Albufeira' 'Vigilância e segurança humana contínua em equipamentos e eventos - VIG/ARD/ARE/COORD - período de 3 meses, todos os dias de forma permanente e ininterrupta, total estimado de 2250H.', with quantity 1, unit UN sentences: - 'Lote 3:\nPreço Unitário: 10,00€ (dez euros) /hora\nPreço Total: 27.540,00€ (vinte sete mil quinhentos e quarenta euros)' - 'DESCRIÇÃO DE SERVIÇOS LOTE 4 Local, Dias e Horários: ➢ Locais, Dias e horários a indicar (prazo mínimo de 24 horas) pelos serviços da Câmara Municipal de Albufeira ; Efetivo e Categoria Profissional : ➢ O número necessário de efetivo será definido por requisição a emitir pelos competentes serviços da Câmara Municipal de Albufeira nas categorias de Vigilante, Assistente Recinto de Espetáculos, Assistente de Recinto Desportivos e Coordenador; Valor Hora: €15,00 ( quinze euros) Total de Horas Previsto: 2980 horas (dois mil novecentos e oitenta horas) Valor Total Previsto: €44 700,00 (quarenta e quatro mil e setecentos euros)' - 2202000061 - SUTURA MONOFILAMENYO NÃO ABSORVIVEL POLIAMIDA 0 AG.CILINDRICA,30MM (CART.) 0.78 - source_sentence: Collect the details that are associated with Lot 11 product '' 'Mesas apoio (anestesia e circulante)', with quantity 4, unit Subcontracting Unit sentences: - 2202000128 - Sutura sint. absorvivel "Poliglicolico" multifilamento 5/0 ag. cilindrica 13mm (CART.) 2.2 - 'Preço\nLote Artigo Designação Referência € Custo unitário s/ IVA % IVA Quantidade Valor total S/IVA\n11 Mesa apoio \n(anestesia e \ncirculante) Trolley apoio TTAC 3350 690€ 23% 4 2760€\nTotal 2760€ ' - 'Trouxa Cesariana 832,00 7,950000 6\xa0614,400000 23,00\nLote 10\nsete euros e noveta e cinco cêntimos\n3\n3Teks/Turquia\nRef. 601.002.SE\nCDM: 88869431\nDM Classe Ie\nNPDM: T020199\nTotal Ilíquido :Descontos Comerciais :\nTOTAL EUR56\xa0642,860000P' - source_sentence: Collect the details that are associated with product '' '2202000154 - FIO SUT. POLIPROPI. NÃO ABS. 2/0 MONOF. C/ 2 AG. RECTAS CILIND. 75 MM (CART.)', with quantity 108, unit UN sentences: - '| Lote | DENOMINAÇÃO | VALOR MENSAL | VALOR | |--------|------------------------------------------------------------------------|--------------|-------------| | Lote 1 | Mercados Quinzenais, no Município de Albufeira | 567,00 € | 2 835,00 € | | Lote 2 | Portaria Das Instalações Técnicas e Administrativas do Edifício de Vale Pedras | 7 435,80 € | 37 179,00 € | | Lote 3 | Portaria Parque de Estacionamento Coberto dos Olhos de Água | 8 262,00 € | 41 310,00 € | | Lote 4 | Compreende a Vigilância aos diversos eventos e equipamentos | | 49 021,00 € | | TOTAL | | | 130 345,00 € |' - LOTE 8\n4,0 CARROHPPBV1 SUPORTE RODADO PARA SACO RESÍDUOS GRUPO III/IV 240,000 960,000 23,0\ - 2202000154 - FIO SUT. POLIPROPI. NÃO ABS. 2/0 MONOF. C/ 2 AG. RECTAS CILIND. 75 MM (CART.) 2.65 - source_sentence: Collect the details that are associated with product '' 'Macbook Air 13" com processador M1/M2 e 8 GB de RAM (Telado PT-PT)', with quantity 1, unit UN sentences: - '[''A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\nmundo de possibilidades.\nJuntos, vamos continuar a construir o futuro da saúde.\nPág. 52.Proposta Financeira\n2.1. Posições Relevantes\nLote 1\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Teste\nS/ IVA\n1 122060265 IGE TESTE 7280 3,00 €\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\n(\n(Vinte e um mil, oitocentos e quarenta euros)\nLote 4\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Total\nS/ IVA\n5\n1 122060125C1 INACTIVADOR FUNCIONAL'', ''POSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Teste\nS/ IVA\n1 122060265 IGE TESTE 7280 3,00 €\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\n(\n(Vinte e um mil, oitocentos e quarenta euros)\nLote 4\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Total\nS/ IVA\n5\n1 122060125C1 INACTIVADOR FUNCIONAL\nQUANTITATIVOTESTE 576 5,00 €\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €\n(\n(Dois mil, oitocentos e oitenta euros)\nPág. 63.Anexos\n1. Procuração\nSiemens Healthcare, Unipessoal, Lda.\nRua Irmãos Siemens, 1-1A\n2720-093 Amadora\nPortugal\nTelf: +351 210 601 800\[email protected]\nwww.healthcare.siemens.ptCopyright\nEsta Proposta e seu conteúdo são propriedade intelectual da\nSiemens Healthcare, Unipessoal, Lda, na proposta designada\npor Siemens Healthineers, protegida nos termos do Código do\nDireito de Autor e dos Direitos Conexos.'', ''A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\nmundo de possibilidades.\nJuntos, vamos continuar a construir o futuro da saúde.\nPág. 52.Proposta Financeira\n2.1. Posições Relevantes\nLote 1\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Teste\nS/ IVA\n1 122060265 IGE TESTE 7280 3,00 €\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €'', ''A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\nmundo de possibilidades.\nJuntos, vamos continuar a construir o futuro da saúde.\nPág. 52.Proposta Financeira\n2.1. Posições Relevantes\nLote 1\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Teste\nS/ IVA\n1 122060265 IGE TESTE 7280 3,00 €\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\n(\n(Vinte e um mil, oitocentos e quarenta euros)\nLote 4\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Total\nS/ IVA\n5\n1 122060125C1 INACTIVADOR FUNCIONAL\nQUANTITATIVOTESTE 576 5,00 €\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €'', ''Queremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\nmundo de possibilidades.\nJuntos, vamos continuar a construir o futuro da saúde.\nPág. 52.Proposta Financeira\n2.1. Posições Relevantes\nLote 1\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Teste\nS/ IVA\n1 122060265 IGE TESTE 7280 3,00 €\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\n(\n(Vinte e um mil, oitocentos e quarenta euros)\nLote 4\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\nANUALPreço Total\nS/ IVA\n5\n1 122060125C1 INACTIVADOR FUNCIONAL\nQUANTITATIVOTESTE 576 5,00 €\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €\n(\n(Dois mil, oitocentos e oitenta euros)\nPág. 63.Anexos'']' - 'LOTE 5 1 MESA APOIO MESA DE APOIO EM INOX AISI 304 2,0 279,000 23,0 558,000 Mesa com 4 rodas , 2 com travão Tabuleiro inferior Dimens: C 700 x L 500 x A 800mm Prateleira inferior - profundidade 250mm Fabrico Nacional e por medida Total do do lote 5: 558,00€ Quinhentos e cinquenta e oito euros' - 'Apresenta -se de seguida a configuração financeira para a fornecimento dos produtos \nrequeridos , mediante opções por cor e diferentes características:\nNOTA: Valores válidos até 23 de Fevereiro e mediante adjudicação de 2 ou mais \nequipamentos portáteis (excluindo Teclado)\nPART-NUMBER QTD. DESCRIÇÃOVALOR\nUNITÁRIOVALOR\nTOTAL\nMLY03PO/A 1Apple Macbook AIR 13,6" (Disco 512GB SSD; 10 core) 1 545,08 € 1 545,08 €\nMLXY3PO/A 1Apple Macbook AIR 13,6" (Disco 256GB SSD, 8 core) 1 227,48 € 1 227,48 €' model-index: - name: SentenceTransformer based on nomic-ai/nomic-embed-text-v1 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: .nan name: Pearson Cosine - type: spearman_cosine value: .nan name: Spearman Cosine --- # SentenceTransformer based on nomic-ai/nomic-embed-text-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) <!-- at revision 720244025c1a7e15661a174c63cce63c8218e52b --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.3") # Run inference sentences = [ 'Collect the details that are associated with product \'\' \'Macbook Air 13" com processador M1/M2 e 8 GB de RAM (Telado PT-PT)\', with quantity 1, unit UN', 'Apresenta -se de seguida a configuração financeira para a fornecimento dos produtos \\nrequeridos , mediante opções por cor e diferentes características:\\nNOTA: Valores válidos até 23 de Fevereiro e mediante adjudicação de 2 ou mais \\nequipamentos portáteis (excluindo Teclado)\\nPART-NUMBER QTD. DESCRIÇÃOVALOR\\nUNITÁRIOVALOR\\nTOTAL\\nMLY03PO/A 1Apple Macbook AIR 13,6" (Disco 512GB SSD; 10 core) 1 545,08 € 1 545,08 €\\nMLXY3PO/A 1Apple Macbook AIR 13,6" (Disco 256GB SSD, 8 core) 1 227,48 € 1 227,48 €', 'LOTE 5\n1 MESA APOIO MESA DE APOIO EM INOX AISI 304 2,0 279,000 23,0 558,000\nMesa com 4 rodas , 2 com travão\nTabuleiro inferior\nDimens: C 700 x L 500 x A 800mm\nPrateleira inferior - profundidade 250mm\nFabrico Nacional e por medida\nTotal do do lote 5: 558,00€ Quinhentos e cinquenta e oito euros', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with <code>__main__.CustomEvaluator</code> | Metric | Value | |:--------------------|:--------| | pearson_cosine | nan | | **spearman_cosine** | **nan** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 222 training samples * Columns: <code>query</code> and <code>correct_node</code> * Approximate statistics based on the first 222 samples: | | query | correct_node | |:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 55.17 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 109.22 tokens</li><li>max: 2920 tokens</li></ul> | * Samples: | query | correct_node | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit</code> | <code>LOTE 4<br>1 MESA DE MAYO 82JM 10.ME.1831 2,000 842,00000 23 1 684,00<br>oitocentos e quarenta e dois euros<br>Origem : Nacional<br>Marca : MOBIT<br>Prazo de entrega: 30 dias<br>Garantia: 2 anos<br>Transporte</code> | | <code>Collect the details that are associated with Lot 7 product '' 'Carro transporte de roupa suja ', with quantity 1, unit US</code> | <code>Lote 7 nan nan nan nan nan\nRef. Description Qt. Un. Un. Price Total\n9856 Carros para Transporte de Roupa Suja e Limpa 1 US 16.23 16.23\</code> | | <code>Collect the details that are associated with product '' '2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.)', with quantity 72, unit UN</code> | <code>2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.) 0.36</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 27 evaluation samples * Columns: <code>query</code> and <code>correct_node</code> * Approximate statistics based on the first 27 samples: | | query | correct_node | |:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 56.85 tokens</li><li>max: 121 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 228.15 tokens</li><li>max: 2963 tokens</li></ul> | * Samples: | query | correct_node | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Collect the details that are associated with product '' '2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.)', with quantity 1656, unit UN</code> | <code>2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.) 1.28</code> | | <code>Collect the details that are associated with Lot 3 product 'Portaria do Parque Coberto dos Olhos de Água' 'Vigilância e segurança humana contínua - Olhos de Água - período de 3 meses - todos os dias da semana, incluindo feriados, total estimado de 2754H', with quantity 1, unit UN</code> | <code>| Lote | DENOMINAÇÃO | VALOR MENSAL | VALOR |<br>|--------|------------------------------------------------------------------------|--------------|-------------|<br>| Lote 1 | Mercados Quinzenais, no Município de Albufeira | 567,00 € | 2 835,00 € |<br>| Lote 2 | Portaria Das Instalações Técnicas e Administrativas do Edifício de Vale Pedras | 7 435,80 € | 37 179,00 € |<br>| Lote 3 | Portaria Parque de Estacionamento Coberto dos Olhos de Água | 8 262,00 € | 41 310,00 € |<br>| Lote 4 | Compreende a Vigilância aos diversos eventos e equipamentos | | 49 021,00 € |<br>| TOTAL | | | 130 345,00 € |</code> | | <code>Collect the details that are associated with Lot 3 product 'Portaria do Parque Coberto dos Olhos de Água' 'Vigilância e segurança humana contínua - Olhos de Água - período de 3 meses - todos os dias da semana, incluindo feriados, total estimado de 2754H', with quantity 1, unit UN</code> | <code>Lote 3:\nPreço Unitário: 10,00€ (dez euros) /hora\nPreço Total: 27.540,00€ (vinte sete mil quinhentos e quarenta euros)</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | spearman_cosine | |:----------:|:-------:|:-------------:|:---------------:|:---------------:| | **7.1429** | **100** | **0.0965** | **0.2395** | **nan** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.3.1 - Transformers: 4.47.0.dev0 - PyTorch: 2.5.1+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on nomic-ai/nomic-embed-text-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) <!-- at revision 720244025c1a7e15661a174c63cce63c8218e52b --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.3") # Run inference sentences = [ 'Collect the details that are associated with product \'\' \'Macbook Air 13" com processador M1/M2 e 8 GB de RAM (Telado PT-PT)\', with quantity 1, unit UN', 'Apresenta -se de seguida a configuração financeira para a fornecimento dos produtos \\nrequeridos , mediante opções por cor e diferentes características:\\nNOTA: Valores válidos até 23 de Fevereiro e mediante adjudicação de 2 ou mais \\nequipamentos portáteis (excluindo Teclado)\\nPART-NUMBER QTD. DESCRIÇÃOVALOR\\nUNITÁRIOVALOR\\nTOTAL\\nMLY03PO/A 1Apple Macbook AIR 13,6" (Disco 512GB SSD; 10 core) 1 545,08 € 1 545,08 €\\nMLXY3PO/A 1Apple Macbook AIR 13,6" (Disco 256GB SSD, 8 core) 1 227,48 € 1 227,48 €', 'LOTE 5\n1 MESA APOIO MESA DE APOIO EM INOX AISI 304 2,0 279,000 23,0 558,000\nMesa com 4 rodas , 2 com travão\nTabuleiro inferior\nDimens: C 700 x L 500 x A 800mm\nPrateleira inferior - profundidade 250mm\nFabrico Nacional e por medida\nTotal do do lote 5: 558,00€ Quinhentos e cinquenta e oito euros', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with <code>__main__.CustomEvaluator</code> | Metric | Value | |:--------------------|:--------| | pearson_cosine | nan | | **spearman_cosine** | **nan** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 222 training samples * Columns: <code>query</code> and <code>correct_node</code> * Approximate statistics based on the first 222 samples: | | query | correct_node | |:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 55.17 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 109.22 tokens</li><li>max: 2920 tokens</li></ul> | * Samples: | query | correct_node | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit</code> | <code>LOTE 4<br>1 MESA DE MAYO 82JM 10.ME.1831 2,000 842,00000 23 1 684,00<br>oitocentos e quarenta e dois euros<br>Origem : Nacional<br>Marca : MOBIT<br>Prazo de entrega: 30 dias<br>Garantia: 2 anos<br>Transporte</code> | | <code>Collect the details that are associated with Lot 7 product '' 'Carro transporte de roupa suja ', with quantity 1, unit US</code> | <code>Lote 7 nan nan nan nan nan\nRef. Description Qt. Un. Un. Price Total\n9856 Carros para Transporte de Roupa Suja e Limpa 1 US 16.23 16.23\</code> | | <code>Collect the details that are associated with product '' '2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.)', with quantity 72, unit UN</code> | <code>2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.) 0.36</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 27 evaluation samples * Columns: <code>query</code> and <code>correct_node</code> * Approximate statistics based on the first 27 samples: | | query | correct_node | |:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 56.85 tokens</li><li>max: 121 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 228.15 tokens</li><li>max: 2963 tokens</li></ul> | * Samples: | query | correct_node | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Collect the details that are associated with product '' '2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.)', with quantity 1656, unit UN</code> | <code>2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.) 1.28</code> | | <code>Collect the details that are associated with Lot 3 product 'Portaria do Parque Coberto dos Olhos de Água' 'Vigilância e segurança humana contínua - Olhos de Água - período de 3 meses - todos os dias da semana, incluindo feriados, total estimado de 2754H', with quantity 1, unit UN</code> | <code>| Lote | DENOMINAÇÃO | VALOR MENSAL | VALOR |<br>|--------|------------------------------------------------------------------------|--------------|-------------|<br>| Lote 1 | Mercados Quinzenais, no Município de Albufeira | 567,00 € | 2 835,00 € |<br>| Lote 2 | Portaria Das Instalações Técnicas e Administrativas do Edifício de Vale Pedras | 7 435,80 € | 37 179,00 € |<br>| Lote 3 | Portaria Parque de Estacionamento Coberto dos Olhos de Água | 8 262,00 € | 41 310,00 € |<br>| Lote 4 | Compreende a Vigilância aos diversos eventos e equipamentos | | 49 021,00 € |<br>| TOTAL | | | 130 345,00 € |</code> | | <code>Collect the details that are associated with Lot 3 product 'Portaria do Parque Coberto dos Olhos de Água' 'Vigilância e segurança humana contínua - Olhos de Água - período de 3 meses - todos os dias da semana, incluindo feriados, total estimado de 2754H', with quantity 1, unit UN</code> | <code>Lote 3:\nPreço Unitário: 10,00€ (dez euros) /hora\nPreço Total: 27.540,00€ (vinte sete mil quinhentos e quarenta euros)</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | spearman_cosine | |:----------:|:-------:|:-------------:|:---------------:|:---------------:| | **7.1429** | **100** | **0.0965** | **0.2395** | **nan** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.3.1 - Transformers: 4.47.0.dev0 - PyTorch: 2.5.1+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "nomic-ai/nomic-embed-text-v1", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:222", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Collect the details that are associated with Lot 1 product '' 'IGE', with quantity 7280, unit UND", "sentences": ["Lote 12\\nquatro euros e noventa e cinco cêntimos\\n3\\n3Teks/Turquia\\nRef. 101.001.SE", " ImmunoCAP IgE Total Anti-IgE Immunocap Caneta 7280 455 18,70 dezoito euros e setenta cêntimos 1,168750 16 14-4509-01 8.508,50\\n2\\n2 122060282Conforme \\nalergénioImmunoCAP Alergénio Caneta 20800 1300 30,00 trinta euros 1,875000 16 .... 39.000,00\\n2\\n3 122060282Conforme \\nalergénioImmunoCAP Alergénio Componente\\nMolecularCaneta 2000 200 73,49setenta e três euros e quarenta e \\nnove cêntimos 7,349000 10 .... 14.698,00\\n2\\n4 122060282Conforme \\nalergénioImmunoCAP Alergénio Caneta 8100 810 22,05 vinte e dois euros e cinco cêntimos 2,205000 10 .... 17.860,50", "Solução Proposta 5 565,00 €\\nhw 5 565,00 €\\nTotal sem IVA 5 565,00 €\\nValor do IVA (23 %) 1 279,95 €\\nTotal com IVA (23 %) 6 844,95 €MIL DUZENTOS E SETENTA E NOVE EUROS E NOVENTA E CINCO CENTIMOS\\nSEIS MIL OITOCENTOS E QUARENTA E QUATRO EUROS E NOVENTA E CINCO\\nCENTIMOSRESUMO ECONÓMICO\\nDescrição Valor\\nCINCO MIL QUINHENTOS E SESSENTA E CINCO EUROS\\nProposta FP-5-171.xlsxPágina 4 de 5\\n30/01/2023\\nLenovo - Monitores 5 565,00 €\\nT27i-10(A18270FT0)-27inch Monitor(HDMI)\\nStock a data de hoje 20 Unds + Previsão 4 \\nsemanas30 185,50 € 5 565,00 €\\n"]}, {"source_sentence": "Collect the details that are associated with Lot 4 product 'Os diversos equipamentos e eventos a realizar e de gestão do Município de Albufeira' 'Vigilância e segurança humana contínua em equipamentos e eventos - VIG/ARD/ARE/COORD - período de 3 meses, todos os dias de forma permanente e ininterrupta, total estimado de 2250H.', with quantity 1, unit UN", "sentences": ["Lote 3:\\nPreço Unitário: 10,00€ (dez euros) /hora\\nPreço Total: 27.540,00€ (vinte sete mil quinhentos e quarenta euros)", "DESCRIÇÃO DE SERVIÇOS LOTE 4\nLocal, Dias e Horários:\n➢ Locais, Dias e horários a indicar (prazo mínimo de 24 horas) pelos serviços da Câmara Municipal de Albufeira ;\nEfetivo e Categoria Profissional :\n➢ O número necessário de efetivo será definido por requisição a emitir pelos competentes serviços da Câmara Municipal de Albufeira nas categorias de Vigilante, Assistente Recinto de Espetáculos, Assistente de Recinto Desportivos e Coordenador;\nValor Hora: €15,00 ( quinze euros)\nTotal de Horas Previsto: 2980 horas (dois mil novecentos e oitenta horas)\nValor Total Previsto: €44 700,00 (quarenta e quatro mil e setecentos euros)", "2202000061 - SUTURA MONOFILAMENYO NÃO ABSORVIVEL POLIAMIDA 0 AG.CILINDRICA,30MM (CART.) 0.78"]}, {"source_sentence": "Collect the details that are associated with Lot 11 product '' 'Mesas apoio (anestesia e circulante)', with quantity 4, unit Subcontracting Unit", "sentences": ["2202000128 - Sutura sint. absorvivel \"Poliglicolico\" multifilamento 5/0 ag. cilindrica 13mm (CART.) 2.2", "Preço\\nLote Artigo Designação Referência € Custo unitário s/ IVA % IVA Quantidade Valor total S/IVA\\n11 Mesa apoio \\n(anestesia e \\ncirculante) Trolley apoio TTAC 3350 690€ 23% 4 2760€\\nTotal 2760€ ", "Trouxa Cesariana 832,00 7,950000 6\\xa0614,400000 23,00\\nLote 10\\nsete euros e noveta e cinco cêntimos\\n3\\n3Teks/Turquia\\nRef. 601.002.SE\\nCDM: 88869431\\nDM Classe Ie\\nNPDM: T020199\\nTotal Ilíquido :Descontos Comerciais :\\nTOTAL EUR56\\xa0642,860000P"]}, {"source_sentence": "Collect the details that are associated with product '' '2202000154 - FIO SUT. POLIPROPI. NÃO ABS. 2/0 MONOF. C/ 2 AG. RECTAS CILIND. 75 MM (CART.)', with quantity 108, unit UN", "sentences": ["| Lote | DENOMINAÇÃO | VALOR MENSAL | VALOR |\n|--------|------------------------------------------------------------------------|--------------|-------------|\n| Lote 1 | Mercados Quinzenais, no Município de Albufeira | 567,00 € | 2 835,00 € |\n| Lote 2 | Portaria Das Instalações Técnicas e Administrativas do Edifício de Vale Pedras | 7 435,80 € | 37 179,00 € |\n| Lote 3 | Portaria Parque de Estacionamento Coberto dos Olhos de Água | 8 262,00 € | 41 310,00 € |\n| Lote 4 | Compreende a Vigilância aos diversos eventos e equipamentos | | 49 021,00 € |\n| TOTAL | | | 130 345,00 € |", "LOTE 8\\n4,0 CARROHPPBV1 SUPORTE RODADO PARA SACO RESÍDUOS GRUPO III/IV 240,000 960,000 23,0\\", "2202000154 - FIO SUT. POLIPROPI. NÃO ABS. 2/0 MONOF. C/ 2 AG. RECTAS CILIND. 75 MM (CART.) 2.65"]}, {"source_sentence": "Collect the details that are associated with product '' 'Macbook Air 13\" com processador M1/M2 e 8 GB de RAM (Telado PT-PT)', with quantity 1, unit UN", "sentences": ["['A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \\nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \\nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \\ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\\nmundo de possibilidades.\\nJuntos, vamos continuar a construir o futuro da saúde.\\nPág. 52.Proposta Financeira\\n2.1. Posições Relevantes\\nLote 1\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Teste\\nS/ IVA\\n1 122060265 IGE TESTE 7280 3,00 €\\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\\n(\\n(Vinte e um mil, oitocentos e quarenta euros)\\nLote 4\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Total\\nS/ IVA\\n5\\n1 122060125C1 INACTIVADOR FUNCIONAL', 'POSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Teste\\nS/ IVA\\n1 122060265 IGE TESTE 7280 3,00 €\\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\\n(\\n(Vinte e um mil, oitocentos e quarenta euros)\\nLote 4\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Total\\nS/ IVA\\n5\\n1 122060125C1 INACTIVADOR FUNCIONAL\\nQUANTITATIVOTESTE 576 5,00 €\\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €\\n(\\n(Dois mil, oitocentos e oitenta euros)\\nPág. 63.Anexos\\n1. Procuração\\nSiemens Healthcare, Unipessoal, Lda.\\nRua Irmãos Siemens, 1-1A\\n2720-093 Amadora\\nPortugal\\nTelf: +351 210 601 800\\[email protected]\\nwww.healthcare.siemens.ptCopyright\\nEsta Proposta e seu conteúdo são propriedade intelectual da\\nSiemens Healthcare, Unipessoal, Lda, na proposta designada\\npor Siemens Healthineers, protegida nos termos do Código do\\nDireito de Autor e dos Direitos Conexos.', 'A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \\nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \\nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \\ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\\nmundo de possibilidades.\\nJuntos, vamos continuar a construir o futuro da saúde.\\nPág. 52.Proposta Financeira\\n2.1. Posições Relevantes\\nLote 1\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Teste\\nS/ IVA\\n1 122060265 IGE TESTE 7280 3,00 €\\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €', 'A Siemens Healthineers é uma empresa líder em inovação, reconhecida pela sua engenharia de ponta, pelo apoio a \\nterapias avançadas, assim como por privilegiar a versatilidade e funcionalidade entre questões clínicas, resultados \\nlaboratoriais precisos e serviços transformadores. Deste modo, maximizamos oportunidades e reduzimos riscos, \\ncontribuindo para a melhoria da excelência clínica, eficiência operacional e rentabilidade.\\nQueremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\\nmundo de possibilidades.\\nJuntos, vamos continuar a construir o futuro da saúde.\\nPág. 52.Proposta Financeira\\n2.1. Posições Relevantes\\nLote 1\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Teste\\nS/ IVA\\n1 122060265 IGE TESTE 7280 3,00 €\\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\\n(\\n(Vinte e um mil, oitocentos e quarenta euros)\\nLote 4\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Total\\nS/ IVA\\n5\\n1 122060125C1 INACTIVADOR FUNCIONAL\\nQUANTITATIVOTESTE 576 5,00 €\\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €', 'Queremos continuar a colocar todas estas capacidades e os nossos conhecimentos a nível clínico e técnico ao serviço de\\nparceiros e clientes, para que sejamos o apoio que precisam para alcançar todos os seus objetivos. Sabemos que podemos\\nir cada vez mais longe, mas sabemos também que muitas vezes um caminho percorrido em conjunto oferece todo um novo\\nmundo de possibilidades.\\nJuntos, vamos continuar a construir o futuro da saúde.\\nPág. 52.Proposta Financeira\\n2.1. Posições Relevantes\\nLote 1\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Teste\\nS/ IVA\\n1 122060265 IGE TESTE 7280 3,00 €\\nPreço Líquido da Proposta para o Lote 1 ...................................................................................................21.840,00 €\\n(\\n(Vinte e um mil, oitocentos e quarenta euros)\\nLote 4\\nPOSIÇÃO CÓDIGO DESIGNAÇÃO UnidadeESTIMATIVA\\nANUALPreço Total\\nS/ IVA\\n5\\n1 122060125C1 INACTIVADOR FUNCIONAL\\nQUANTITATIVOTESTE 576 5,00 €\\nPreço Líquido da Proposta para o Lote 4 .................................................................................................... 2.880,00 €\\n(\\n(Dois mil, oitocentos e oitenta euros)\\nPág. 63.Anexos']", "LOTE 5\n1 MESA APOIO MESA DE APOIO EM INOX AISI 304 2,0 279,000 23,0 558,000\nMesa com 4 rodas , 2 com travão\nTabuleiro inferior\nDimens: C 700 x L 500 x A 800mm\nPrateleira inferior - profundidade 250mm\nFabrico Nacional e por medida\nTotal do do lote 5: 558,00€ Quinhentos e cinquenta e oito euros", "Apresenta -se de seguida a configuração financeira para a fornecimento dos produtos \\nrequeridos , mediante opções por cor e diferentes características:\\nNOTA: Valores válidos até 23 de Fevereiro e mediante adjudicação de 2 ou mais \\nequipamentos portáteis (excluindo Teclado)\\nPART-NUMBER QTD. DESCRIÇÃOVALOR\\nUNITÁRIOVALOR\\nTOTAL\\nMLY03PO/A 1Apple Macbook AIR 13,6\" (Disco 512GB SSD; 10 core) 1 545,08 € 1 545,08 €\\nMLXY3PO/A 1Apple Macbook AIR 13,6\" (Disco 256GB SSD, 8 core) 1 227,48 € 1 227,48 €"]}], "model-index": [{"name": "SentenceTransformer based on nomic-ai/nomic-embed-text-v1", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "pearson_cosine", "value": NaN, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": NaN, "name": "Spearman Cosine"}]}]}]}
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
42,697
gokulsrinivasagan/distilbert_base_lda_stsb
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/distilbert_base_lda", "base_model:finetune:gokulsrinivasagan/distilbert_base_lda", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-04T17:50:51Z
2024-12-04T17:54:14+00:00
5
0
--- base_model: gokulsrinivasagan/distilbert_base_lda datasets: - glue language: - en library_name: transformers metrics: - spearmanr tags: - generated_from_trainer model-index: - name: distilbert_base_lda_stsb results: - task: type: text-classification name: Text Classification dataset: name: GLUE STSB type: glue args: stsb metrics: - type: spearmanr value: 0.7812278062429884 name: Spearmanr --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_lda_stsb This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_lda](https://huggingface.co/gokulsrinivasagan/distilbert_base_lda) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.9069 - Pearson: 0.7831 - Spearmanr: 0.7812 - Combined Score: 0.7822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.63 | 1.0 | 23 | 2.2703 | 0.2049 | 0.1869 | 0.1959 | | 1.6161 | 2.0 | 46 | 1.5326 | 0.5968 | 0.5988 | 0.5978 | | 1.0577 | 3.0 | 69 | 0.9897 | 0.7544 | 0.7530 | 0.7537 | | 0.7651 | 4.0 | 92 | 1.3080 | 0.7422 | 0.7513 | 0.7468 | | 0.5755 | 5.0 | 115 | 1.0189 | 0.7718 | 0.7729 | 0.7724 | | 0.4879 | 6.0 | 138 | 0.9542 | 0.7654 | 0.7639 | 0.7647 | | 0.3923 | 7.0 | 161 | 0.9830 | 0.7704 | 0.7702 | 0.7703 | | 0.2951 | 8.0 | 184 | 1.0897 | 0.7662 | 0.7689 | 0.7675 | | 0.2586 | 9.0 | 207 | 1.1433 | 0.7597 | 0.7595 | 0.7596 | | 0.2193 | 10.0 | 230 | 0.9188 | 0.7807 | 0.7783 | 0.7795 | | 0.2001 | 11.0 | 253 | 0.9884 | 0.7733 | 0.7733 | 0.7733 | | 0.1817 | 12.0 | 276 | 1.0962 | 0.7711 | 0.7708 | 0.7710 | | 0.1645 | 13.0 | 299 | 0.9747 | 0.7704 | 0.7695 | 0.7700 | | 0.1677 | 14.0 | 322 | 0.9869 | 0.7723 | 0.7710 | 0.7717 | | 0.1392 | 15.0 | 345 | 0.9069 | 0.7831 | 0.7812 | 0.7822 | | 0.1318 | 16.0 | 368 | 1.0261 | 0.7669 | 0.7666 | 0.7668 | | 0.1167 | 17.0 | 391 | 1.0508 | 0.7623 | 0.7635 | 0.7629 | | 0.1168 | 18.0 | 414 | 1.1379 | 0.7509 | 0.7501 | 0.7505 | | 0.1097 | 19.0 | 437 | 1.0633 | 0.7691 | 0.7717 | 0.7704 | | 0.104 | 20.0 | 460 | 1.0595 | 0.7729 | 0.7741 | 0.7735 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_base_lda_stsb This model is a fine-tuned version of [gokulsrinivasagan/distilbert_base_lda](https://huggingface.co/gokulsrinivasagan/distilbert_base_lda) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.9069 - Pearson: 0.7831 - Spearmanr: 0.7812 - Combined Score: 0.7822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.63 | 1.0 | 23 | 2.2703 | 0.2049 | 0.1869 | 0.1959 | | 1.6161 | 2.0 | 46 | 1.5326 | 0.5968 | 0.5988 | 0.5978 | | 1.0577 | 3.0 | 69 | 0.9897 | 0.7544 | 0.7530 | 0.7537 | | 0.7651 | 4.0 | 92 | 1.3080 | 0.7422 | 0.7513 | 0.7468 | | 0.5755 | 5.0 | 115 | 1.0189 | 0.7718 | 0.7729 | 0.7724 | | 0.4879 | 6.0 | 138 | 0.9542 | 0.7654 | 0.7639 | 0.7647 | | 0.3923 | 7.0 | 161 | 0.9830 | 0.7704 | 0.7702 | 0.7703 | | 0.2951 | 8.0 | 184 | 1.0897 | 0.7662 | 0.7689 | 0.7675 | | 0.2586 | 9.0 | 207 | 1.1433 | 0.7597 | 0.7595 | 0.7596 | | 0.2193 | 10.0 | 230 | 0.9188 | 0.7807 | 0.7783 | 0.7795 | | 0.2001 | 11.0 | 253 | 0.9884 | 0.7733 | 0.7733 | 0.7733 | | 0.1817 | 12.0 | 276 | 1.0962 | 0.7711 | 0.7708 | 0.7710 | | 0.1645 | 13.0 | 299 | 0.9747 | 0.7704 | 0.7695 | 0.7700 | | 0.1677 | 14.0 | 322 | 0.9869 | 0.7723 | 0.7710 | 0.7717 | | 0.1392 | 15.0 | 345 | 0.9069 | 0.7831 | 0.7812 | 0.7822 | | 0.1318 | 16.0 | 368 | 1.0261 | 0.7669 | 0.7666 | 0.7668 | | 0.1167 | 17.0 | 391 | 1.0508 | 0.7623 | 0.7635 | 0.7629 | | 0.1168 | 18.0 | 414 | 1.1379 | 0.7509 | 0.7501 | 0.7505 | | 0.1097 | 19.0 | 437 | 1.0633 | 0.7691 | 0.7717 | 0.7704 | | 0.104 | 20.0 | 460 | 1.0595 | 0.7729 | 0.7741 | 0.7735 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.2.1+cu118 - Datasets 2.17.0 - Tokenizers 0.20.3
{"base_model": "gokulsrinivasagan/distilbert_base_lda", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_base_lda_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.7812278062429884, "name": "Spearmanr"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,698
Laihaoran/BioClinicalMPBERT
Laihaoran
null
[ "transformers", "pytorch", "bert", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
2023-08-21T08:44:35Z
2023-08-23T01:40:52+00:00
3,404
0
--- {} --- # Bio Clinical BERT with MIMIC and Padchest <!-- Provide a quick summary of what the model is/does. --> The Publicly Available Clinical BERT Embeddings paper contains four unique BioclinicalBERT models: initialized with BERT-Base (biobert_v1.0_pubmed_pmc) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. This model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes and Padchest (translate to English by google translation). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
BioNLP
# Bio Clinical BERT with MIMIC and Padchest <!-- Provide a quick summary of what the model is/does. --> The Publicly Available Clinical BERT Embeddings paper contains four unique BioclinicalBERT models: initialized with BERT-Base (biobert_v1.0_pubmed_pmc) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. This model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes and Padchest (translate to English by google translation). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{}
task
[ "TRANSLATION" ]
42,699
Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav
Helsinki-NLP
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc-bible", "bru", "cmo", "de", "en", "es", "fr", "hoc", "jun", "kha", "km", "kxm", "mnw", "ngt", "pt", "sat", "vi", "wbm", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-07T21:07:03Z
2024-10-08T05:57:32+00:00
13
0
--- language: - bru - cmo - de - en - es - fr - hoc - jun - kha - km - kxm - mnw - ngt - pt - sat - vi - wbm library_name: transformers license: apache-2.0 tags: - translation - opus-mt-tc-bible model-index: - name: opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav results: - task: type: translation name: Translation deu-vie dataset: name: flores200-devtest type: flores200-devtest args: deu-vie metrics: - type: bleu value: 34.0 name: BLEU - type: chrf value: 0.53671 name: chr-F - type: bleu value: 42.4 name: BLEU - type: chrf value: 0.59842 name: chr-F - type: bleu value: 34.6 name: BLEU - type: chrf value: 0.54101 name: chr-F - type: bleu value: 36.1 name: BLEU - type: chrf value: 0.5497 name: chr-F - type: bleu value: 28.1 name: BLEU - type: chrf value: 0.50025 name: chr-F - task: type: translation name: Translation deu-vie dataset: name: flores101-devtest type: flores_101 args: deu vie devtest metrics: - type: bleu value: 33.8 name: BLEU - type: chrf value: 0.53381 name: chr-F - type: bleu value: 42.1 name: BLEU - type: chrf value: 0.59621 name: chr-F - type: bleu value: 36.0 name: BLEU - type: chrf value: 0.54919 name: chr-F - type: bleu value: 27.8 name: BLEU - type: chrf value: 0.49921 name: chr-F - task: type: translation name: Translation deu-vie dataset: name: ntrex128 type: ntrex128 args: deu-vie metrics: - type: bleu value: 31.4 name: BLEU - type: chrf value: 0.52124 name: chr-F - type: bleu value: 31.8 name: BLEU - type: chrf value: 0.52044 name: chr-F - type: bleu value: 33.3 name: BLEU - type: chrf value: 0.5306 name: chr-F - type: bleu value: 33.4 name: BLEU - type: chrf value: 0.53293 name: chr-F - task: type: translation name: Translation deu-vie dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: deu-vie metrics: - type: bleu value: 25.6 name: BLEU - type: chrf value: 0.45795 name: chr-F - type: bleu value: 39.4 name: BLEU - type: chrf value: 0.56461 name: chr-F - type: bleu value: 35.2 name: BLEU - type: chrf value: 0.52806 name: chr-F - type: bleu value: 22.9 name: BLEU - type: chrf value: 0.40649 name: chr-F - type: bleu value: 34.2 name: BLEU - type: chrf value: 0.52131 name: chr-F --- # opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from unknown (deu+eng+fra+por+spa) to Austro-Asiatic languages (aav). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-05-29 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): deu eng fra por spa - Target Language(s): bru cmo hoc jun kha khm kxm mnw ngt sat vie wbm - Valid Target Language Labels: >>aem<< >>alk<< >>aml<< >>asr<< >>bbh<< >>bdq<< >>bfw<< >>bgk<< >>bgl<< >>bix<< >>biy<< >>blr<< >>brb<< >>bru<< >>brv<< >>btq<< >>caq<< >>cbn<< >>cdz<< >>cma<< >>cmo<< >>cog<< >>crv<< >>crw<< >>cua<< >>cwg<< >>dnu<< >>ekl<< >>gaq<< >>gbj<< >>hal<< >>hld<< >>hnu<< >>hoc<< >>hoc_Wara<< >>hre<< >>huo<< >>irr<< >>jah<< >>jeh<< >>jhi<< >>jun<< >>juy<< >>kdt<< >>kfp<< >>kfq<< >>kha<< >>khf<< >>khm<< >>khr<< >>kjg<< >>kjm<< >>knq<< >>kns<< >>kpm<< >>krr<< >>krv<< >>ksz<< >>kta<< >>ktv<< >>kuf<< >>kxm<< >>kxy<< >>lbn<< >>lbo<< >>lcp<< >>lnh<< >>lwl<< >>lyg<< >>mef<< >>mhe<< >>mjx<< >>mlf<< >>mmj<< >>mml<< >>mng<< >>mnn<< >>mnq<< >>mnw<< >>moo<< >>mqt<< >>mra<< >>mtq<< >>mzt<< >>ncb<< >>ncq<< >>nev<< >>ngt<< >>ngt_Latn<< >>nik<< >>nuo<< >>nyl<< >>omx<< >>oog<< >>oyb<< >>pac<< >>pbv<< >>pcb<< >>pce<< >>pcj<< >>phg<< >>pkt<< >>pll<< >>ply<< >>pnx<< >>prk<< >>prt<< >>puo<< >>rbb<< >>ren<< >>ril<< >>rka<< >>rmx<< >>sat<< >>sat_Latn<< >>sbo<< >>scb<< >>scq<< >>sct<< >>sea<< >>sed<< >>sii<< >>smu<< >>spu<< >>sqq<< >>srb<< >>ssm<< >>sss<< >>stg<< >>sti<< >>stt<< >>stu<< >>syo<< >>sza<< >>szc<< >>tdf<< >>tdr<< >>tea<< >>tef<< >>thm<< >>tkz<< >>tlq<< >>tmo<< >>tnz<< >>tou<< >>tpu<< >>trd<< >>tth<< >>tto<< >>tyh<< >>unr<< >>uuu<< >>vie<< >>vwa<< >>wbm<< >>xao<< >>xkk<< >>xnh<< >>xxx<< >>yin<< >>zng<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-aav/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bru<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>khm<< Der Junge wirft einen Stein.", ">>vie<< ¿Y tú?" ] model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # ក្មេងប្រុស នោះ យក ដុំ ថ្ម គប់ ។ # Còn anh thì sao? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav") print(pipe(">>khm<< Der Junge wirft einen Stein.")) # expected output: ក្មេងប្រុស នោះ យក ដុំ ថ្ម គប់ ។ ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-aav/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | deu-vie | tatoeba-test-v2021-08-07 | 0.45795 | 25.6 | 400 | 3768 | | eng-hoc | tatoeba-test-v2021-08-07 | 6.438 | 0.2 | 660 | 2591 | | eng-kha | tatoeba-test-v2021-08-07 | 5.741 | 0.0 | 1314 | 9269 | | eng-vie | tatoeba-test-v2021-08-07 | 0.56461 | 39.4 | 2500 | 24427 | | fra-vie | tatoeba-test-v2021-08-07 | 0.52806 | 35.2 | 1299 | 13219 | | spa-vie | tatoeba-test-v2021-08-07 | 0.52131 | 34.2 | 594 | 4740 | | deu-vie | flores101-devtest | 0.53381 | 33.8 | 1012 | 33331 | | eng-khm | flores101-devtest | 0.42302 | 1.3 | 1012 | 7006 | | eng-vie | flores101-devtest | 0.59621 | 42.1 | 1012 | 33331 | | fra-khm | flores101-devtest | 0.40042 | 2.2 | 1012 | 7006 | | por-khm | flores101-devtest | 0.40585 | 2.1 | 1012 | 7006 | | por-vie | flores101-devtest | 0.54919 | 36.0 | 1012 | 33331 | | spa-vie | flores101-devtest | 0.49921 | 27.8 | 1012 | 33331 | | deu-vie | flores200-devtest | 0.53671 | 34.0 | 1012 | 33331 | | eng-khm | flores200-devtest | 0.42148 | 1.3 | 1012 | 7006 | | eng-vie | flores200-devtest | 0.59842 | 42.4 | 1012 | 33331 | | fra-vie | flores200-devtest | 0.54101 | 34.6 | 1012 | 33331 | | por-khm | flores200-devtest | 0.40832 | 1.9 | 1012 | 7006 | | por-vie | flores200-devtest | 0.54970 | 36.1 | 1012 | 33331 | | spa-vie | flores200-devtest | 0.50025 | 28.1 | 1012 | 33331 | | deu-khm | ntrex128 | 0.44903 | 3.5 | 1997 | 15866 | | deu-vie | ntrex128 | 0.52124 | 31.4 | 1997 | 64655 | | eng-khm | ntrex128 | 0.50494 | 1.6 | 1997 | 15866 | | eng-vie | ntrex128 | 3.831 | 0.0 | 1997 | 64655 | | fra-khm | ntrex128 | 0.43841 | 2.4 | 1997 | 15866 | | fra-vie | ntrex128 | 0.52044 | 31.8 | 1997 | 64655 | | por-khm | ntrex128 | 0.46655 | 2.5 | 1997 | 15866 | | por-vie | ntrex128 | 0.53060 | 33.3 | 1997 | 64655 | | spa-khm | ntrex128 | 0.46443 | 2.7 | 1997 | 15866 | | spa-vie | ntrex128 | 0.53293 | 33.4 | 1997 | 64655 | | eng-khm | tico19-test | 0.47806 | 2.5 | 2100 | 15810 | | fra-khm | tico19-test | 3.268 | 1.0 | 2100 | 15810 | | por-khm | tico19-test | 3.900 | 1.1 | 2100 | 15810 | | spa-khm | tico19-test | 3.784 | 1.0 | 2100 | 15810 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: 0882077 * port time: Tue Oct 8 08:57:20 EEST 2024 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [Acknowledgements](#acknowledgements) ## Model Details Neural machine translation model for translating from unknown (deu+eng+fra+por+spa) to Austro-Asiatic languages (aav). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2024-05-29 - **License:** Apache-2.0 - **Language(s):** - Source Language(s): deu eng fra por spa - Target Language(s): bru cmo hoc jun kha khm kxm mnw ngt sat vie wbm - Valid Target Language Labels: >>aem<< >>alk<< >>aml<< >>asr<< >>bbh<< >>bdq<< >>bfw<< >>bgk<< >>bgl<< >>bix<< >>biy<< >>blr<< >>brb<< >>bru<< >>brv<< >>btq<< >>caq<< >>cbn<< >>cdz<< >>cma<< >>cmo<< >>cog<< >>crv<< >>crw<< >>cua<< >>cwg<< >>dnu<< >>ekl<< >>gaq<< >>gbj<< >>hal<< >>hld<< >>hnu<< >>hoc<< >>hoc_Wara<< >>hre<< >>huo<< >>irr<< >>jah<< >>jeh<< >>jhi<< >>jun<< >>juy<< >>kdt<< >>kfp<< >>kfq<< >>kha<< >>khf<< >>khm<< >>khr<< >>kjg<< >>kjm<< >>knq<< >>kns<< >>kpm<< >>krr<< >>krv<< >>ksz<< >>kta<< >>ktv<< >>kuf<< >>kxm<< >>kxy<< >>lbn<< >>lbo<< >>lcp<< >>lnh<< >>lwl<< >>lyg<< >>mef<< >>mhe<< >>mjx<< >>mlf<< >>mmj<< >>mml<< >>mng<< >>mnn<< >>mnq<< >>mnw<< >>moo<< >>mqt<< >>mra<< >>mtq<< >>mzt<< >>ncb<< >>ncq<< >>nev<< >>ngt<< >>ngt_Latn<< >>nik<< >>nuo<< >>nyl<< >>omx<< >>oog<< >>oyb<< >>pac<< >>pbv<< >>pcb<< >>pce<< >>pcj<< >>phg<< >>pkt<< >>pll<< >>ply<< >>pnx<< >>prk<< >>prt<< >>puo<< >>rbb<< >>ren<< >>ril<< >>rka<< >>rmx<< >>sat<< >>sat_Latn<< >>sbo<< >>scb<< >>scq<< >>sct<< >>sea<< >>sed<< >>sii<< >>smu<< >>spu<< >>sqq<< >>srb<< >>ssm<< >>sss<< >>stg<< >>sti<< >>stt<< >>stu<< >>syo<< >>sza<< >>szc<< >>tdf<< >>tdr<< >>tea<< >>tef<< >>thm<< >>tkz<< >>tlq<< >>tmo<< >>tnz<< >>tou<< >>tpu<< >>trd<< >>tth<< >>tto<< >>tyh<< >>unr<< >>uuu<< >>vie<< >>vwa<< >>wbm<< >>xao<< >>xkk<< >>xnh<< >>xxx<< >>yin<< >>zng<< - **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Resources for more information:** - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-aav/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1) - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bru<<` ## Uses This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>khm<< Der Junge wirft einen Stein.", ">>vie<< ¿Y tú?" ] model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # ក្មេងប្រុស នោះ យក ដុំ ថ្ម គប់ ។ # Còn anh thì sao? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav") print(pipe(">>khm<< Der Junge wirft einen Stein.")) # expected output: ក្មេងប្រុស នោះ យក ដុំ ថ្ម គប់ ។ ``` ## Training - **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Evaluation * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-aav/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-29) * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt) * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-aav/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | deu-vie | tatoeba-test-v2021-08-07 | 0.45795 | 25.6 | 400 | 3768 | | eng-hoc | tatoeba-test-v2021-08-07 | 6.438 | 0.2 | 660 | 2591 | | eng-kha | tatoeba-test-v2021-08-07 | 5.741 | 0.0 | 1314 | 9269 | | eng-vie | tatoeba-test-v2021-08-07 | 0.56461 | 39.4 | 2500 | 24427 | | fra-vie | tatoeba-test-v2021-08-07 | 0.52806 | 35.2 | 1299 | 13219 | | spa-vie | tatoeba-test-v2021-08-07 | 0.52131 | 34.2 | 594 | 4740 | | deu-vie | flores101-devtest | 0.53381 | 33.8 | 1012 | 33331 | | eng-khm | flores101-devtest | 0.42302 | 1.3 | 1012 | 7006 | | eng-vie | flores101-devtest | 0.59621 | 42.1 | 1012 | 33331 | | fra-khm | flores101-devtest | 0.40042 | 2.2 | 1012 | 7006 | | por-khm | flores101-devtest | 0.40585 | 2.1 | 1012 | 7006 | | por-vie | flores101-devtest | 0.54919 | 36.0 | 1012 | 33331 | | spa-vie | flores101-devtest | 0.49921 | 27.8 | 1012 | 33331 | | deu-vie | flores200-devtest | 0.53671 | 34.0 | 1012 | 33331 | | eng-khm | flores200-devtest | 0.42148 | 1.3 | 1012 | 7006 | | eng-vie | flores200-devtest | 0.59842 | 42.4 | 1012 | 33331 | | fra-vie | flores200-devtest | 0.54101 | 34.6 | 1012 | 33331 | | por-khm | flores200-devtest | 0.40832 | 1.9 | 1012 | 7006 | | por-vie | flores200-devtest | 0.54970 | 36.1 | 1012 | 33331 | | spa-vie | flores200-devtest | 0.50025 | 28.1 | 1012 | 33331 | | deu-khm | ntrex128 | 0.44903 | 3.5 | 1997 | 15866 | | deu-vie | ntrex128 | 0.52124 | 31.4 | 1997 | 64655 | | eng-khm | ntrex128 | 0.50494 | 1.6 | 1997 | 15866 | | eng-vie | ntrex128 | 3.831 | 0.0 | 1997 | 64655 | | fra-khm | ntrex128 | 0.43841 | 2.4 | 1997 | 15866 | | fra-vie | ntrex128 | 0.52044 | 31.8 | 1997 | 64655 | | por-khm | ntrex128 | 0.46655 | 2.5 | 1997 | 15866 | | por-vie | ntrex128 | 0.53060 | 33.3 | 1997 | 64655 | | spa-khm | ntrex128 | 0.46443 | 2.7 | 1997 | 15866 | | spa-vie | ntrex128 | 0.53293 | 33.4 | 1997 | 64655 | | eng-khm | tico19-test | 0.47806 | 2.5 | 2100 | 15810 | | fra-khm | tico19-test | 3.268 | 1.0 | 2100 | 15810 | | por-khm | tico19-test | 3.900 | 1.1 | 2100 | 15810 | | spa-khm | tico19-test | 3.784 | 1.0 | 2100 | 15810 | ## Citation Information * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ```bibtex @article{tiedemann2023democratizing, title={Democratizing neural machine translation with {OPUS-MT}}, author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami}, journal={Language Resources and Evaluation}, number={58}, pages={713--755}, year={2023}, publisher={Springer Nature}, issn={1574-0218}, doi={10.1007/s10579-023-09704-w} } @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Acknowledgements The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/). ## Model conversion info * transformers version: 4.45.1 * OPUS-MT git hash: 0882077 * port time: Tue Oct 8 08:57:20 EEST 2024 * port machine: LM0-400-22516.local
{"language": ["bru", "cmo", "de", "en", "es", "fr", "hoc", "jun", "kha", "km", "kxm", "mnw", "ngt", "pt", "sat", "vi", "wbm"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-deu_eng_fra_por_spa-aav", "results": [{"task": {"type": "translation", "name": "Translation deu-vie"}, "dataset": {"name": "flores200-devtest", "type": "flores200-devtest", "args": "deu-vie"}, "metrics": [{"type": "bleu", "value": 34.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53671, "name": "chr-F"}, {"type": "bleu", "value": 42.4, "name": "BLEU"}, {"type": "chrf", "value": 0.59842, "name": "chr-F"}, {"type": "bleu", "value": 34.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54101, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.5497, "name": "chr-F"}, {"type": "bleu", "value": 28.1, "name": "BLEU"}, {"type": "chrf", "value": 0.50025, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-vie"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "deu vie devtest"}, "metrics": [{"type": "bleu", "value": 33.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53381, "name": "chr-F"}, {"type": "bleu", "value": 42.1, "name": "BLEU"}, {"type": "chrf", "value": 0.59621, "name": "chr-F"}, {"type": "bleu", "value": 36.0, "name": "BLEU"}, {"type": "chrf", "value": 0.54919, "name": "chr-F"}, {"type": "bleu", "value": 27.8, "name": "BLEU"}, {"type": "chrf", "value": 0.49921, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-vie"}, "dataset": {"name": "ntrex128", "type": "ntrex128", "args": "deu-vie"}, "metrics": [{"type": "bleu", "value": 31.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52124, "name": "chr-F"}, {"type": "bleu", "value": 31.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52044, "name": "chr-F"}, {"type": "bleu", "value": 33.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5306, "name": "chr-F"}, {"type": "bleu", "value": 33.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53293, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-vie"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "deu-vie"}, "metrics": [{"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.45795, "name": "chr-F"}, {"type": "bleu", "value": 39.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56461, "name": "chr-F"}, {"type": "bleu", "value": 35.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52806, "name": "chr-F"}, {"type": "bleu", "value": 22.9, "name": "BLEU"}, {"type": "chrf", "value": 0.40649, "name": "chr-F"}, {"type": "bleu", "value": 34.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52131, "name": "chr-F"}]}]}]}
task
[ "TRANSLATION" ]
42,700
chocoyj/distilbert-base-uncased-finetuned-emotion
chocoyj
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-13T04:44:03Z
2023-03-13T04:54:51+00:00
10
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9275 name: Accuracy - type: f1 value: 0.9276043877262424 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2154 - Accuracy: 0.9275 - F1: 0.9276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8125 | 1.0 | 250 | 0.3089 | 0.9055 | 0.9030 | | 0.2492 | 2.0 | 500 | 0.2154 | 0.9275 | 0.9276 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2154 - Accuracy: 0.9275 - F1: 0.9276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8125 | 1.0 | 250 | 0.3089 | 0.9055 | 0.9030 | | 0.2492 | 2.0 | 500 | 0.2154 | 0.9275 | 0.9276 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9275, "name": "Accuracy"}, {"type": "f1", "value": 0.9276043877262424, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,701
mini1013/master_cate_fi7
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
2025-01-21T20:10:39Z
2025-01-21T20:11:01+00:00
207
0
--- base_model: mini1013/master_domain library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 침대안전가드 침대 안전바 낙상방지 손잡이 난간 난간대 노인 복지용구 장애인 임신부 선물 가구/인테리어>아동/주니어가구>소품가구 - text: 침대 안전 가드 면 보호 랩 물린 가장자리 아기 레일 범퍼 케어 베이비 제품범퍼 울타리 가구/인테리어>아동/주니어가구>소품가구 - text: 일하 안전 영아 교구장 장난감정리함 선반 유아책장 수납함 가구/인테리어>아동/주니어가구>책꽂이 - text: 오운 어린이 침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>일반침대 - text: 시몬스 로피 N32 하드 침대 SS 가구/인테리어>아동/주니어가구>침대>일반침대 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 1.0 name: Accuracy --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 14 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'LASSEM 앙드레 키큰 베이비장 주니어 아동 옷장 C1800 IH0929 가구/인테리어>아동/주니어가구>옷장'</li><li>'삼익가구 코코 800 키큰 주니어 베이비 겸용 일반형 옷장 가구/인테리어>아동/주니어가구>옷장'</li><li>'한샘 샘키즈 낮은 오픈옷장 600 가구/인테리어>아동/주니어가구>옷장'</li></ul> | | 13.0 | <ul><li>'행거 옷걸이 호랑이 행거 아동 옷걸이 수납 철제 아기옷 키즈 유아 가구/인테리어>아동/주니어가구>행거'</li><li>'아이방 옷방 주니어장 옷걸이 행거 공간활용 옷수납 드레스룸 원룸 옷정리 자취방 매장용 좁은집 스탠드 가구/인테리어>아동/주니어가구>행거'</li><li>'행거 옷걸이 이동식 옷가게 행거 튼튼한 의류매장 진열대 아동복 가구/인테리어>아동/주니어가구>행거'</li></ul> | | 1.0 | <ul><li>'가죽 소파 더블 팔걸이 싱글 시트 레저 의자 발 작은 라 15182N4911370 가구/인테리어>아동/주니어가구>소파'</li><li>'어린이 소파 독서각 캐릭터 소파 독서의자 어린이 세척 다다미 쇼파 시트 동물 곰돌이 가구/인테리어>아동/주니어가구>소파'</li><li>'모찌 양털의자 키즈카페 미니 쇼파 가구/인테리어>아동/주니어가구>소파'</li></ul> | | 11.0 | <ul><li>'비앙스 VONI 전면 회전책장 4단 가구/인테리어>아동/주니어가구>책장'</li><li>'리바트 톨 키즈 4단 책장 가구/인테리어>아동/주니어가구>책장'</li><li>'일룸 에디키즈 코너장 세트코너장 + 책장 + PL수납장 가구/인테리어>아동/주니어가구>책장'</li></ul> | | 2.0 | <ul><li>'침대안전바 침대 안전 가드 낙상방지 난간 아이 펜스 가구/인테리어>아동/주니어가구>소품가구'</li><li>'노루페인트 컬러플러스 홈페인트 방문가구 1L 칼라메이트 4인치 도구세트 쉐인 SP4100 가구/인테리어>아동/주니어가구>소품가구'</li><li>'한샘 샘베딩 스테디 도어 6단(높이194cm) 40cm [몸통미포함] 가구/인테리어>아동/주니어가구>소품가구'</li></ul> | | 3.0 | <ul><li>'한샘 샘키즈 낮은 장난감정리함 870 가구/인테리어>아동/주니어가구>수납장'</li><li>'한샘 샘키즈 1단 미니박스 수납책장 870 가구/인테리어>아동/주니어가구>수납장'</li><li>'한샘 샘키즈 코너장 가구/인테리어>아동/주니어가구>수납장'</li></ul> | | 6.0 | <ul><li>'이케아 스툴 가구/인테리어>아동/주니어가구>의자'</li><li>'이케아 AGAM 아감 어린이 의자 가구/인테리어>아동/주니어가구>의자'</li><li>'하이솔로몬 팡팡 의자 가구/인테리어>아동/주니어가구>의자'</li></ul> | | 7.0 | <ul><li>'히트 400 학원 상담실 전면 책꽂이 낮은 책장 엄마표학습 가구/인테리어>아동/주니어가구>책꽂이'</li><li>'회전식 책꽂이 만화책 도서 사물함 그림책 360도 선반 가구/인테리어>아동/주니어가구>책꽂이'</li><li>'책 거치대 아이방 도서관 서점 그림책 원목 진열대 가구/인테리어>아동/주니어가구>책꽂이'</li></ul> | | 0.0 | <ul><li>'자작 1000 와이드 6단 서랍장 가구/인테리어>아동/주니어가구>서랍장'</li><li>'소르니아 카이 와이드 침실 4단 서랍장 가구/인테리어>아동/주니어가구>서랍장'</li><li>'일룸 링키플러스 3단 책상서랍 가구/인테리어>아동/주니어가구>서랍장'</li></ul> | | 10.0 | <ul><li>'일루일루 프로스 어린이 책상세트 스퀘어형의자 책상1개 의자2개 가구/인테리어>아동/주니어가구>책상의자세트'</li><li>'휴대용 출장책상의자 여행책상의자 발 선반 해먹 편한 발그네 TWB978A80 가구/인테리어>아동/주니어가구>책상의자세트'</li><li>'유아책상의자세트 어린이 공부방 학원 책상 세트-대형 70x50 1인세트 - ver2 가구/인테리어>아동/주니어가구>책상의자세트'</li></ul> | | 8.0 | <ul><li>'이케아 MAMMUT 마무트 어린이 사각 테이블 가구/인테리어>아동/주니어가구>책상'</li><li>'한샘 티오 그로우 일반책상세트 5단 120x75cm 가구/인테리어>아동/주니어가구>책상'</li><li>'아이엔지홈 에디 어린이 각도조절 책상 가구/인테리어>아동/주니어가구>책상'</li></ul> | | 4.0 | <ul><li>'UNKNOWN 대관령한우 스페셜 5호 1 8kg 가구/인테리어>아동/주니어가구>아동침실세트'</li><li>'Paperboard Suitcases Vintage Suitcase Portable Composite Wood Map Pattern Design Vintage Storage Ch 가구/인테리어>아동/주니어가구>아동침실세트'</li><li>'키드존 피셔프라이스 범보의자 세트 가구/인테리어>아동/주니어가구>아동침실세트'</li></ul> | | 12.0 | <ul><li>'골든스트릿 베스트 리프팅 수납계단형 움직이는 벙커침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>벙커침대'</li><li>'레이디가구 레브 기능성 패브릭 데이베드 볼드헤드 침대 SS + 포그니 22 가구/인테리어>아동/주니어가구>침대>일반침대'</li><li>'슬로우알레 코이 저상형 키즈 침대 원목 데이베드 SS 가구/인테리어>아동/주니어가구>침대>일반침대'</li></ul> | | 9.0 | <ul><li>'CASHL 바퀴달린의자 허리 보조 회전의자 이동식 낮은 가구/인테리어>아동/주니어가구>책상의자'</li><li>'UNKNOWN A 책상의자 공부의자 학생용 편한 강의실 메쉬의자 a10 바퀴의자 가구/인테리어>아동/주니어가구>책상의자'</li><li>'학원 도서관 회의실 세미나용 메쉬망 회전 의자 학생의자 오피스의자 가구/인테리어>아동/주니어가구>책상의자'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_fi7") # Run inference preds = model("오운 어린이 침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>일반침대") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 8.3744 | 18 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 65 | | 1.0 | 70 | | 2.0 | 70 | | 3.0 | 70 | | 4.0 | 37 | | 5.0 | 70 | | 6.0 | 21 | | 7.0 | 70 | | 8.0 | 70 | | 9.0 | 70 | | 10.0 | 70 | | 11.0 | 70 | | 12.0 | 70 | | 13.0 | 69 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 50 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0057 | 1 | 0.5123 | - | | 0.2857 | 50 | 0.5012 | - | | 0.5714 | 100 | 0.3699 | - | | 0.8571 | 150 | 0.1028 | - | | 1.1429 | 200 | 0.0304 | - | | 1.4286 | 250 | 0.0147 | - | | 1.7143 | 300 | 0.012 | - | | 2.0 | 350 | 0.009 | - | | 2.2857 | 400 | 0.0074 | - | | 2.5714 | 450 | 0.0033 | - | | 2.8571 | 500 | 0.004 | - | | 3.1429 | 550 | 0.0036 | - | | 3.4286 | 600 | 0.0036 | - | | 3.7143 | 650 | 0.0036 | - | | 4.0 | 700 | 0.0036 | - | | 4.2857 | 750 | 0.0027 | - | | 4.5714 | 800 | 0.0034 | - | | 4.8571 | 850 | 0.004 | - | | 5.1429 | 900 | 0.0016 | - | | 5.4286 | 950 | 0.0001 | - | | 5.7143 | 1000 | 0.0 | - | | 6.0 | 1050 | 0.0 | - | | 6.2857 | 1100 | 0.0 | - | | 6.5714 | 1150 | 0.0 | - | | 6.8571 | 1200 | 0.0 | - | | 7.1429 | 1250 | 0.0 | - | | 7.4286 | 1300 | 0.0 | - | | 7.7143 | 1350 | 0.0 | - | | 8.0 | 1400 | 0.0 | - | | 8.2857 | 1450 | 0.0 | - | | 8.5714 | 1500 | 0.0 | - | | 8.8571 | 1550 | 0.0 | - | | 9.1429 | 1600 | 0.0 | - | | 9.4286 | 1650 | 0.0 | - | | 9.7143 | 1700 | 0.0 | - | | 10.0 | 1750 | 0.0 | - | | 10.2857 | 1800 | 0.0 | - | | 10.5714 | 1850 | 0.0 | - | | 10.8571 | 1900 | 0.0 | - | | 11.1429 | 1950 | 0.0 | - | | 11.4286 | 2000 | 0.0 | - | | 11.7143 | 2050 | 0.0 | - | | 12.0 | 2100 | 0.0 | - | | 12.2857 | 2150 | 0.0 | - | | 12.5714 | 2200 | 0.0 | - | | 12.8571 | 2250 | 0.0 | - | | 13.1429 | 2300 | 0.0 | - | | 13.4286 | 2350 | 0.0 | - | | 13.7143 | 2400 | 0.0 | - | | 14.0 | 2450 | 0.0 | - | | 14.2857 | 2500 | 0.0 | - | | 14.5714 | 2550 | 0.0 | - | | 14.8571 | 2600 | 0.0 | - | | 15.1429 | 2650 | 0.0 | - | | 15.4286 | 2700 | 0.0 | - | | 15.7143 | 2750 | 0.0 | - | | 16.0 | 2800 | 0.0 | - | | 16.2857 | 2850 | 0.0 | - | | 16.5714 | 2900 | 0.0 | - | | 16.8571 | 2950 | 0.0 | - | | 17.1429 | 3000 | 0.0 | - | | 17.4286 | 3050 | 0.0 | - | | 17.7143 | 3100 | 0.0 | - | | 18.0 | 3150 | 0.0 | - | | 18.2857 | 3200 | 0.0 | - | | 18.5714 | 3250 | 0.0 | - | | 18.8571 | 3300 | 0.0 | - | | 19.1429 | 3350 | 0.0 | - | | 19.4286 | 3400 | 0.0 | - | | 19.7143 | 3450 | 0.0 | - | | 20.0 | 3500 | 0.0 | - | | 20.2857 | 3550 | 0.0 | - | | 20.5714 | 3600 | 0.0 | - | | 20.8571 | 3650 | 0.0 | - | | 21.1429 | 3700 | 0.0 | - | | 21.4286 | 3750 | 0.0 | - | | 21.7143 | 3800 | 0.0 | - | | 22.0 | 3850 | 0.0 | - | | 22.2857 | 3900 | 0.0 | - | | 22.5714 | 3950 | 0.0 | - | | 22.8571 | 4000 | 0.0 | - | | 23.1429 | 4050 | 0.0 | - | | 23.4286 | 4100 | 0.0 | - | | 23.7143 | 4150 | 0.0 | - | | 24.0 | 4200 | 0.0 | - | | 24.2857 | 4250 | 0.0 | - | | 24.5714 | 4300 | 0.0 | - | | 24.8571 | 4350 | 0.0 | - | | 25.1429 | 4400 | 0.0 | - | | 25.4286 | 4450 | 0.0 | - | | 25.7143 | 4500 | 0.0 | - | | 26.0 | 4550 | 0.0 | - | | 26.2857 | 4600 | 0.0 | - | | 26.5714 | 4650 | 0.0 | - | | 26.8571 | 4700 | 0.0 | - | | 27.1429 | 4750 | 0.0 | - | | 27.4286 | 4800 | 0.0 | - | | 27.7143 | 4850 | 0.0 | - | | 28.0 | 4900 | 0.0 | - | | 28.2857 | 4950 | 0.0 | - | | 28.5714 | 5000 | 0.0 | - | | 28.8571 | 5050 | 0.0 | - | | 29.1429 | 5100 | 0.0 | - | | 29.4286 | 5150 | 0.0 | - | | 29.7143 | 5200 | 0.0 | - | | 30.0 | 5250 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 14 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'LASSEM 앙드레 키큰 베이비장 주니어 아동 옷장 C1800 IH0929 가구/인테리어>아동/주니어가구>옷장'</li><li>'삼익가구 코코 800 키큰 주니어 베이비 겸용 일반형 옷장 가구/인테리어>아동/주니어가구>옷장'</li><li>'한샘 샘키즈 낮은 오픈옷장 600 가구/인테리어>아동/주니어가구>옷장'</li></ul> | | 13.0 | <ul><li>'행거 옷걸이 호랑이 행거 아동 옷걸이 수납 철제 아기옷 키즈 유아 가구/인테리어>아동/주니어가구>행거'</li><li>'아이방 옷방 주니어장 옷걸이 행거 공간활용 옷수납 드레스룸 원룸 옷정리 자취방 매장용 좁은집 스탠드 가구/인테리어>아동/주니어가구>행거'</li><li>'행거 옷걸이 이동식 옷가게 행거 튼튼한 의류매장 진열대 아동복 가구/인테리어>아동/주니어가구>행거'</li></ul> | | 1.0 | <ul><li>'가죽 소파 더블 팔걸이 싱글 시트 레저 의자 발 작은 라 15182N4911370 가구/인테리어>아동/주니어가구>소파'</li><li>'어린이 소파 독서각 캐릭터 소파 독서의자 어린이 세척 다다미 쇼파 시트 동물 곰돌이 가구/인테리어>아동/주니어가구>소파'</li><li>'모찌 양털의자 키즈카페 미니 쇼파 가구/인테리어>아동/주니어가구>소파'</li></ul> | | 11.0 | <ul><li>'비앙스 VONI 전면 회전책장 4단 가구/인테리어>아동/주니어가구>책장'</li><li>'리바트 톨 키즈 4단 책장 가구/인테리어>아동/주니어가구>책장'</li><li>'일룸 에디키즈 코너장 세트코너장 + 책장 + PL수납장 가구/인테리어>아동/주니어가구>책장'</li></ul> | | 2.0 | <ul><li>'침대안전바 침대 안전 가드 낙상방지 난간 아이 펜스 가구/인테리어>아동/주니어가구>소품가구'</li><li>'노루페인트 컬러플러스 홈페인트 방문가구 1L 칼라메이트 4인치 도구세트 쉐인 SP4100 가구/인테리어>아동/주니어가구>소품가구'</li><li>'한샘 샘베딩 스테디 도어 6단(높이194cm) 40cm [몸통미포함] 가구/인테리어>아동/주니어가구>소품가구'</li></ul> | | 3.0 | <ul><li>'한샘 샘키즈 낮은 장난감정리함 870 가구/인테리어>아동/주니어가구>수납장'</li><li>'한샘 샘키즈 1단 미니박스 수납책장 870 가구/인테리어>아동/주니어가구>수납장'</li><li>'한샘 샘키즈 코너장 가구/인테리어>아동/주니어가구>수납장'</li></ul> | | 6.0 | <ul><li>'이케아 스툴 가구/인테리어>아동/주니어가구>의자'</li><li>'이케아 AGAM 아감 어린이 의자 가구/인테리어>아동/주니어가구>의자'</li><li>'하이솔로몬 팡팡 의자 가구/인테리어>아동/주니어가구>의자'</li></ul> | | 7.0 | <ul><li>'히트 400 학원 상담실 전면 책꽂이 낮은 책장 엄마표학습 가구/인테리어>아동/주니어가구>책꽂이'</li><li>'회전식 책꽂이 만화책 도서 사물함 그림책 360도 선반 가구/인테리어>아동/주니어가구>책꽂이'</li><li>'책 거치대 아이방 도서관 서점 그림책 원목 진열대 가구/인테리어>아동/주니어가구>책꽂이'</li></ul> | | 0.0 | <ul><li>'자작 1000 와이드 6단 서랍장 가구/인테리어>아동/주니어가구>서랍장'</li><li>'소르니아 카이 와이드 침실 4단 서랍장 가구/인테리어>아동/주니어가구>서랍장'</li><li>'일룸 링키플러스 3단 책상서랍 가구/인테리어>아동/주니어가구>서랍장'</li></ul> | | 10.0 | <ul><li>'일루일루 프로스 어린이 책상세트 스퀘어형의자 책상1개 의자2개 가구/인테리어>아동/주니어가구>책상의자세트'</li><li>'휴대용 출장책상의자 여행책상의자 발 선반 해먹 편한 발그네 TWB978A80 가구/인테리어>아동/주니어가구>책상의자세트'</li><li>'유아책상의자세트 어린이 공부방 학원 책상 세트-대형 70x50 1인세트 - ver2 가구/인테리어>아동/주니어가구>책상의자세트'</li></ul> | | 8.0 | <ul><li>'이케아 MAMMUT 마무트 어린이 사각 테이블 가구/인테리어>아동/주니어가구>책상'</li><li>'한샘 티오 그로우 일반책상세트 5단 120x75cm 가구/인테리어>아동/주니어가구>책상'</li><li>'아이엔지홈 에디 어린이 각도조절 책상 가구/인테리어>아동/주니어가구>책상'</li></ul> | | 4.0 | <ul><li>'UNKNOWN 대관령한우 스페셜 5호 1 8kg 가구/인테리어>아동/주니어가구>아동침실세트'</li><li>'Paperboard Suitcases Vintage Suitcase Portable Composite Wood Map Pattern Design Vintage Storage Ch 가구/인테리어>아동/주니어가구>아동침실세트'</li><li>'키드존 피셔프라이스 범보의자 세트 가구/인테리어>아동/주니어가구>아동침실세트'</li></ul> | | 12.0 | <ul><li>'골든스트릿 베스트 리프팅 수납계단형 움직이는 벙커침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>벙커침대'</li><li>'레이디가구 레브 기능성 패브릭 데이베드 볼드헤드 침대 SS + 포그니 22 가구/인테리어>아동/주니어가구>침대>일반침대'</li><li>'슬로우알레 코이 저상형 키즈 침대 원목 데이베드 SS 가구/인테리어>아동/주니어가구>침대>일반침대'</li></ul> | | 9.0 | <ul><li>'CASHL 바퀴달린의자 허리 보조 회전의자 이동식 낮은 가구/인테리어>아동/주니어가구>책상의자'</li><li>'UNKNOWN A 책상의자 공부의자 학생용 편한 강의실 메쉬의자 a10 바퀴의자 가구/인테리어>아동/주니어가구>책상의자'</li><li>'학원 도서관 회의실 세미나용 메쉬망 회전 의자 학생의자 오피스의자 가구/인테리어>아동/주니어가구>책상의자'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_fi7") # Run inference preds = model("오운 어린이 침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>일반침대") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 8.3744 | 18 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 65 | | 1.0 | 70 | | 2.0 | 70 | | 3.0 | 70 | | 4.0 | 37 | | 5.0 | 70 | | 6.0 | 21 | | 7.0 | 70 | | 8.0 | 70 | | 9.0 | 70 | | 10.0 | 70 | | 11.0 | 70 | | 12.0 | 70 | | 13.0 | 69 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 50 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0057 | 1 | 0.5123 | - | | 0.2857 | 50 | 0.5012 | - | | 0.5714 | 100 | 0.3699 | - | | 0.8571 | 150 | 0.1028 | - | | 1.1429 | 200 | 0.0304 | - | | 1.4286 | 250 | 0.0147 | - | | 1.7143 | 300 | 0.012 | - | | 2.0 | 350 | 0.009 | - | | 2.2857 | 400 | 0.0074 | - | | 2.5714 | 450 | 0.0033 | - | | 2.8571 | 500 | 0.004 | - | | 3.1429 | 550 | 0.0036 | - | | 3.4286 | 600 | 0.0036 | - | | 3.7143 | 650 | 0.0036 | - | | 4.0 | 700 | 0.0036 | - | | 4.2857 | 750 | 0.0027 | - | | 4.5714 | 800 | 0.0034 | - | | 4.8571 | 850 | 0.004 | - | | 5.1429 | 900 | 0.0016 | - | | 5.4286 | 950 | 0.0001 | - | | 5.7143 | 1000 | 0.0 | - | | 6.0 | 1050 | 0.0 | - | | 6.2857 | 1100 | 0.0 | - | | 6.5714 | 1150 | 0.0 | - | | 6.8571 | 1200 | 0.0 | - | | 7.1429 | 1250 | 0.0 | - | | 7.4286 | 1300 | 0.0 | - | | 7.7143 | 1350 | 0.0 | - | | 8.0 | 1400 | 0.0 | - | | 8.2857 | 1450 | 0.0 | - | | 8.5714 | 1500 | 0.0 | - | | 8.8571 | 1550 | 0.0 | - | | 9.1429 | 1600 | 0.0 | - | | 9.4286 | 1650 | 0.0 | - | | 9.7143 | 1700 | 0.0 | - | | 10.0 | 1750 | 0.0 | - | | 10.2857 | 1800 | 0.0 | - | | 10.5714 | 1850 | 0.0 | - | | 10.8571 | 1900 | 0.0 | - | | 11.1429 | 1950 | 0.0 | - | | 11.4286 | 2000 | 0.0 | - | | 11.7143 | 2050 | 0.0 | - | | 12.0 | 2100 | 0.0 | - | | 12.2857 | 2150 | 0.0 | - | | 12.5714 | 2200 | 0.0 | - | | 12.8571 | 2250 | 0.0 | - | | 13.1429 | 2300 | 0.0 | - | | 13.4286 | 2350 | 0.0 | - | | 13.7143 | 2400 | 0.0 | - | | 14.0 | 2450 | 0.0 | - | | 14.2857 | 2500 | 0.0 | - | | 14.5714 | 2550 | 0.0 | - | | 14.8571 | 2600 | 0.0 | - | | 15.1429 | 2650 | 0.0 | - | | 15.4286 | 2700 | 0.0 | - | | 15.7143 | 2750 | 0.0 | - | | 16.0 | 2800 | 0.0 | - | | 16.2857 | 2850 | 0.0 | - | | 16.5714 | 2900 | 0.0 | - | | 16.8571 | 2950 | 0.0 | - | | 17.1429 | 3000 | 0.0 | - | | 17.4286 | 3050 | 0.0 | - | | 17.7143 | 3100 | 0.0 | - | | 18.0 | 3150 | 0.0 | - | | 18.2857 | 3200 | 0.0 | - | | 18.5714 | 3250 | 0.0 | - | | 18.8571 | 3300 | 0.0 | - | | 19.1429 | 3350 | 0.0 | - | | 19.4286 | 3400 | 0.0 | - | | 19.7143 | 3450 | 0.0 | - | | 20.0 | 3500 | 0.0 | - | | 20.2857 | 3550 | 0.0 | - | | 20.5714 | 3600 | 0.0 | - | | 20.8571 | 3650 | 0.0 | - | | 21.1429 | 3700 | 0.0 | - | | 21.4286 | 3750 | 0.0 | - | | 21.7143 | 3800 | 0.0 | - | | 22.0 | 3850 | 0.0 | - | | 22.2857 | 3900 | 0.0 | - | | 22.5714 | 3950 | 0.0 | - | | 22.8571 | 4000 | 0.0 | - | | 23.1429 | 4050 | 0.0 | - | | 23.4286 | 4100 | 0.0 | - | | 23.7143 | 4150 | 0.0 | - | | 24.0 | 4200 | 0.0 | - | | 24.2857 | 4250 | 0.0 | - | | 24.5714 | 4300 | 0.0 | - | | 24.8571 | 4350 | 0.0 | - | | 25.1429 | 4400 | 0.0 | - | | 25.4286 | 4450 | 0.0 | - | | 25.7143 | 4500 | 0.0 | - | | 26.0 | 4550 | 0.0 | - | | 26.2857 | 4600 | 0.0 | - | | 26.5714 | 4650 | 0.0 | - | | 26.8571 | 4700 | 0.0 | - | | 27.1429 | 4750 | 0.0 | - | | 27.4286 | 4800 | 0.0 | - | | 27.7143 | 4850 | 0.0 | - | | 28.0 | 4900 | 0.0 | - | | 28.2857 | 4950 | 0.0 | - | | 28.5714 | 5000 | 0.0 | - | | 28.8571 | 5050 | 0.0 | - | | 29.1429 | 5100 | 0.0 | - | | 29.4286 | 5150 | 0.0 | - | | 29.7143 | 5200 | 0.0 | - | | 30.0 | 5250 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "침대안전가드 침대 안전바 낙상방지 손잡이 난간 난간대 노인 복지용구 장애인 임신부 선물 가구/인테리어>아동/주니어가구>소품가구"}, {"text": "침대 안전 가드 면 보호 랩 물린 가장자리 아기 레일 범퍼 케어 베이비 제품범퍼 울타리 가구/인테리어>아동/주니어가구>소품가구"}, {"text": "일하 안전 영아 교구장 장난감정리함 선반 유아책장 수납함 가구/인테리어>아동/주니어가구>책꽂이"}, {"text": "오운 어린이 침대 프레임 SS 가구/인테리어>아동/주니어가구>침대>일반침대"}, {"text": "시몬스 로피 N32 하드 침대 SS 가구/인테리어>아동/주니어가구>침대>일반침대"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,702
seongil-dn/bge-m3-kor-retrieval-451949-bs128-finance-book-science-215
seongil-dn
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:451949", "loss:CachedGISTEmbedLoss", "arxiv:1908.10084", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-12-12T06:44:05Z
2024-12-12T06:45:21+00:00
4
0
--- base_model: BAAI/bge-m3 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:451949 - loss:CachedGISTEmbedLoss widget: - source_sentence: 타 지자체에 비해 압도적으로 귀어가구가 많은 것으로 집계된 지역은 어디야? sentences: - "나)금융상품판매전문업 도입의 목적\n□ 금융상품판매전문업의 도입은 다음 세 가지를 목적으로 함. \n― 첫째, 금융상품판매제도를 공급자 중심에서\ \ 수요자중심으로 전환. \n― 둘째, 판매조직의 다양화를 통한 금융산업의 효율성을 제고. \n― 셋째, 금융소비자의 입장에서 금융상품을 판매하는\ \ 판매전문조직의 육성을 통하여 금융소비자 보호제도의 실효성을 확보하기 위한 제도적 기초를 확보. □ 금융소비자의 입장을 고려하는 판매조직의\ \ 활성화를 통하여 금융소비자보호체계를 금융상품 제조자에 의한 “불완전판매˝(mis―selling)의 규제에서 금융소비자에 의한 ˝불완전구매˝(mis―buying)의\ \ 방지로의 전환을 위한 제도적 기초를 구축할 수 있을 것임." - '2. 최근의 귀농·귀촌 동향 농림축산식품부·해양수산부·통계청이 지난 해 6월에 발표한 “귀농어·귀촌인 통계”에 따르면 2016년에 귀농·귀촌을 택해 농촌으로 이주한 도시가구는 33만 5,383가구(49만 6,048명)였다. 지난 1년간 5,860가구(17,156명) 늘어났다. 이 가운데 귀농가구는 전체의 3.8%인 12,875가구, 그리고 귀촌가구는 전체의 96.2%인 322,508가구였다. 귀농과 귀촌은 지역적으로 차이를 나타내 귀농의 경우 2016년 경북이 2,323가구(18.0%)로 가장 많고, 그 다음은 전남 1,923가구, 경남 1,631가구의 순인데 반해, 귀촌은 경기가 85,441가구(26.5%), 경남 37,732가구, 경북 37,261가구의 순이었다. 연령에도 차이를 보이고 있다. 귀농의 경우 50대가 39.9%로 가장 큰 비중을 차지하고, 그 다음은 60대 25.4%로 50대 이상 고령층이 71.3%이고, 40대 이하 젊은 층은 28.7%에 불과하였다. 반면에 귀촌의 경우에는 40대 이하가 63.9%이고, 50대 이상은 36.1%로 귀농과 달리 젊은 층이 큰 비중을 차지하였다.' - 농림축산식품부・해양수산부・통계청이 공동으로 작성・발간한 “2015년 귀농어귀촌인 통계”에 따르면 2015년 귀어가구는 총 991가구로 2013년 650구, 2014년 917가구에서 매년 증가 추세이다. 991개 귀어가구 중 1인 가구가 전체 가구의 70.4%(698가구)이며, 2인 가구가 19.1%(189가구), 3인 가구가 4.8%(57가구), 4인 이상 가구가 4.7%(47가구)이며, 가구당 평균 귀어가구원수는 1.46명으로 나타났다. 지역별로는 전남이 343가구(34.6%), 충남이 340가구(34.3%)로 타 지자체에 비해 압도적으로 귀어가구가 많은 것으로 집계되었다. 이들 지역의 귀어가구가 많은 것은 전남의 경우 어선어업과 양식업 세력이 많은 지역적 특성이 귀어 확대로 연결된 것으로 보인다. 충남의 경우 수도권에서 가까워 지리적 접근성이 용이하고, 어촌체험마을, 낚시어선업 등의 활성화로 귀어인들이 타 지자체에 비해 상대적으로 선호도가 높은 것으로 판단된다. - source_sentence: 고전 음악에 있어서 세계적 메카라 불릴 만큼 찬란한 유산과 전통을 지닌 나라는 어디야? sentences: - 이번 한・러 정상회담에서는 또한 신정부가 추진하는 ‘한반도 신경제지도 구상’의 구체화 측면에서도 의미 있는 성과를 도출했다. 양국은 북한 참여를 통한 남~북~러 3각 협력 사업의 이행 의지를 재천명했으며, 양 정상은 3각 협력 사업이 경제적 호혜뿐만 아니라 한반도의 신뢰와 안정을 구축하는 데에도 기여할 것이란 점을 재확인했다. 위와 같은 한・러 정상회담의 성과를 가시화하기 위해 후속조치로 이번 동방경제포럼에서 제안된 ‘9 브릿지(9 Bridges)’ 전략을 해양수산 분야에서 구체화할 필요가 있다. 우선 3항(부산항, 나진항, 블라디보스토크항), 3거점(훈춘, 나데즈딘스카야, 하바롭스크), 3통로 (부산항을 공통 거점으로한 ① 나진항~훈춘 해상루트, ② 나진~핫산 철도루트, ③ 블라디보스토크~하바롭스크의 해상루트와 TKR~TSR 철도 연결루트)의 ‘3~3~3’ 전략을 적극 추진할 필요가 있다. - 슬로바키아 국민은 전통음악에 대한 자부심이 강하며, 음악의 기원은 슬라브족과 유럽인들의 민속음악이며 15~16세기 종교음악의 기원인 모라비아 왕국의 예배식에서 전례를 찾을 수 있다. 음악은 종교음악과 실내악의 영향을 크게 받았으며, 민속음악은 헝가리 통치하에서 슬로바키아 언어를 보존하는 데 큰 힘이 되었다. 전통악기로는 목관피리 종류인 푸야라(Fujara), 백파이프인 가이디(Gajdy), 양치기 플루트인 콘코우카(Konkovka) 등이 있다. 목공예, 도자기공예, 유리공예 등 전통 민속예술 중 특히 14세기부터 모드라지역의 ‘Majolika’ 도자기공예가 유명하다. 부드러운 곡선과 노란색, 파란색을 이용해 생활필수품을 만든다. 또한 목재 교회 건축물은 슬로바키아만의 독특한 건축양식이다. 전국 각지에서 민속 음악가들과 무용수들이 한자리에 모이는 전통축제인 비호드나 민속 축제(Východná Folklore Festival), 폴라냐 민속 축제(Folklórne Slávnosti pod Poľanou)는 7~8월경에 열린다. - 오스트리아는 하이든, 모차르트, 베토벤, 슈베르트, 브람스, 브루크너, 리스트 등 유수한 세계적 음악가들의 출생지 또는 활동 무대로 유명하며, 고전 음악에 있어서는 세계적 메카라 불릴 만큼 찬란한 유산 및 전통을 지니고 있다. 매년 6월의 빈 음악제, 7~8월의 잘츠부르크와 브레겐츠 음악제, 8월의 레흐, 9월의 린츠 브루크너 음악제 등 전국 각지에서 연중 음악제가 개최된다. 오페라는 Staatsoper, Volksoper, Burgtheater를 중심으로 한 빈 음악의 상징이라 할 수 있으며, Staatsoper는 Franz Josef 1세에 의해 1869년 개관된 유럽 3대 오페라 극장 중 하나로 좌석 2,500여 석, 연중 300여 작품이 공연되며 종사 인원도 5천여 명에 이른다. 빈 필하모닉 오케스트라가 오페라 연주를 담당하는 것으로도 유명하다. 빈 필하모닉 오케스트라는 구스타프 말러, 리하르트 슈트라우스, 헤르베르트 폰 카라얀으로 이어지면서 세계적인 오케스트라로 자리 잡았다. - source_sentence: 서로 다른 조직들로 통합된 조직의 상승 효과를 어떤 활동으로 단기간에 만들 수 있을까? sentences: - '첫째, 대상 기관 간에 상호 협력이 필수 조건이 된다는 인식의 형성은 성공적인 통합을 위해 매우 중요한 요인이 된다. 따라서 통합을 본격적으로 추진하기 전 단계에서 해당 기관 관계자들에게 왜 통합을 추진하며, 통합이 주는 성과는 무엇인지를 명확하게 설명해 주고 공감대를 형성하는 일이 중요하다. 둘째, 통합을 자발적으로 할 것인지 또는 외부 요인에 따라 할 것인지를 결정하여 추진할 필요가 있다. 이와 관련된 통합 추진방식에는 ‘하향식(Top-down)’ 추진방식과 ‘상향식(Bottom-up)’ 추진방식이 있다. 전자는 정부가 주도하는 방식이고, 후자는 대상 기관들이 자율적으로 추진하는 방식이다. Skodvin(1999 : 70)에 따르면, 하향식 방식은 대부분 국가에서 활용하는 가장 일반적인 사례이지만 종종 정책 담당자와 고등교육기관 관계자 간에 상당한 긴장과 갈등을 유발한다. 이에 비해 일부 국가에서 적용한 상향식 방식은 순조롭고 성공적이었다. 이들 국가에서 상향식 방식을 활용하는 이유는 통합 대상 기관에서 일하는 당사자들 간의 합의(consensus)를 이끌어내기가 쉽고, 통합 후에는 이들 간에 공유된 정체성을 확보하기가 용이하기 때문이다.' - <h1>IV. 실험 결과</h1> <p>제안하는 방법의 성능을 분석하기 위하여 기존에 균일한 도트 분포를 얻기 위해 제안된 세 가지 방법들과 비교 실험을 수행하였다. 먼저 제안하는 방법에 의해 얻어진 이진 영상에서 소수화소들 간의 거리가 주거리와 얼마나 일치하는지를 실험하였다. 이 실험의 목적은 균일한 도트 분포의 정도를 수치적으로 평가하기 위한 것이다. 128부터 255의 계조 각각에 대하여 \( 256 \times 256 \)의 크기를 갖는 단일 계조 영상을 생성하였다. 각각의 단일 계조 영상을 제안하는 방법과 기존의 세 가지 방법에 의해 이진화 하였다. 즉, 네 가지 방법에 대하여 각각 128부터 255까지의 128개의 이진 영상들이 만들어졌다. Marcu의 방법을 적용할 때는[17]에 정의된 상위 9 라인의 검색 영역 및 순서에 준하여 이진화를 수행하였다. 모든 영상(128개의 영상 4가지 방법 = 512개의 영상)에 대하여 아래의 식 (11)에 정의된 \( D_{x} \)를 계산하였다. \ ( D_{x} \)는 이진 영상내에 존재하는 모든 소수화소들간의 최소거리와 주거리의 차이의 제곱을 평균한 것으로서 이진 영상에서 소수화소들간의 거리가 주거리와 얼마나 일치하는가를 나타내는 척도이다. </p> <p>\( D_{x}=\frac{1}{N_{x}} \sum_{k=1}^{N_{x}}\left\{d_{\min }^{B}(k)-\lambda_{x}\right\}^{2} \)<caption>(11)</caption></p> <p>여기서 \( N_{x} \)는 \( x \)라는 단일 계조를 나타내는 이진영상에서의 소수 화소들의 개수를 나타낸다. \ ( d_{\min }^{B}(k) \)는 이진영상 내의 \( k \)번째 소수화소에 대한 최소화소거리를 나타낸다. 즉, 주어진 소수화소와 가장 가까운 소수화소간의 거리를 의미한다. 이상적으로 소수화소들간의 최소거리는 주거리를 만족하여야 한다. 따라서, 식 (11)에서는 \( d_{\text {min }}^{B}(k) \)와 주거리 \( x \)의 차이를 제곱하여 평균하였다. 식 (11)의 \( D_{x} \)를 128과 255사이의 계조 각각에 대하여 계산하였으며 이 과정을 4가지 방법에 대해 동일하게 수행하였다. </p> <p>그림 5는 4가지 방법에 대한 계조별 \( D_{x} \)값을 나타낸다. 참고로 0부터 127까지의 \( D_{x} \)값은 128에 대하여 대칭이다. 도트의 균일한 분포 정도는 밝은 계조(highlight)나 어두운 계조(shadow)에서 인간 시각에 민감하게 인식된다. 그림 5에서 보듯이 기존의 세 가지 방법들은 밝은 계조에서 제안하는 방법에 비해 상대적으로 큰 \( D_{x} \)값을 나타내는 것을 알 수 있다. \ ( D_{x} \)가 큰 값을 나타낸다는 것은 이진 영상에서 소수화소들의 공백(void)이나 밀집(cluster)영역이 존재함을 의미한다. </p> <p>표 2는 그림 5에서 계조별로 계산된 결과를 평균한 값들을 나타낸다. 표 2에서 보듯이 제안하는 방법에서 소수화소들간의 거리가 주거리를 가장 잘 나타냄을 확인할 수 있다. </p> <table border><caption>표 2. 네 가지 방법에 대한 계조별 \( \mathrm{Dx} \)값의 평균</caption> <tbody><tr><td>방법</td><td>Eschbach</td><td>Karg</td><td>MarcuI</td><td>제안하는 방법</td></tr><tr><td>\( E\left[D_{x}\right] \)</td><td>0.5940</td><td>0.3347</td><td>0.7476</td><td>0.2439</td></tr></tbody></table> - '이러한 조직진단이 정부 및 공공부문에 본격적으로 도입된 것은 김대중 대통령 때로, 경영진단 등의 이름으로 정부조직에 대한 진단 등이 이루어졌으며 민간의 컨설팅 회사들이 정부 부처 및 공공기관에 대한 진단을 수행하였다. 김대중 정부 이후로는 공공부문에서의 조직진단도 더 이상 낯선 제도가 아니며, 노무현 정부 당시에는 행정자치부에 이를 전담하는 정부조직이 한시적으로 별도로 존재하였으며 당시 정부에서 조직진단 관련 매뉴얼도 발간하였다. 조직진단 및 이를 바탕으로 한 조직 개편 사례들은 오늘날까지도 정부 부처와 공공부문에서 상당히 빈번하게 이루어지고 있다. 한편 조직융합관리(PMI)는 “업무방식과 조직문화 등 제반 환경이 서로 다른 조직들로 통합된 조직이 시너지 효과를 단기간 내에 창출할 수 있도록 통합 후 체계적으로 관리하는 활동”이다(행정안전부, 2008: 2). 조직 통합이나 개편이라는 물리적인 현상보다 실질적으로 조직 내부에서 통합과 융합이 이루어지는가를 분석하고 이를 위한 관리기법을 제안하는 분야가 조직융합관리(PMI)이다.' - source_sentence: 무엇이 자유민주적 기본질서처럼 자유주의와 민주주의 사상을 따르고 있지? sentences: - 한편 ‘자유민주적 기본질서’는 어의상 자유주의와 민주주의를 토대로 하는 기본질서를 뜻하는데, 공산주의이념 역시 자유주의와 민주주의를 추구한다는 점에서 일견 마르크스주의(공산주의)를 배격하지 않는 것으로 비춰질 수도 있다. 그러나 헌법재판소는 현행 국가보안법상 처벌되는 ‘자유민주적 기본질서에 위해를 끼치는 행위’에 관하여 “모든 폭력적 지배와 자의적 지배 즉 반국가단체의 일인독재 내지 일당독재를 배제하고 다수의 의사에 의한 국민의 자치, 자유・평등의 기본원칙에 의한 법치주의적 통치질서의 유지를 어렵게 만드는 것으로서 구체적으로는 기본적 인권의 존중, 권력분립, 의회제도, 복수정당제도, 선거제도, 사유재산과 시장경제를 골간으로 한 경제질서 및 사법권의 독립 등 우리의 내부체재를 파괴・변혁시키려는 것”으로 폭넓게 정의내리고 있다. - □ 지하수이용부담금의 경우, 전국 226개 지자체 중 84개 지자체(37.1%)만이 부과·징수하고 있어, 지하수의 체계적인 관리를 위한 재원이 부족한 상태임([표 13] 참조) <br>○ 지하수이용부담금을 부과·징수하는 지자체가 적은 이유는 지하수이용부담금 자체가 낮게 책정되어, 제도운영에 소요되는 인력 및 예산 등의 행정비용이 징수금을 초과하기 때문인 것으로 보임 <br>○ 특히 지하수의 용도 중 국방·군사용, 민방위용, 비상급수용, 농어업용, 상수도용, 일반수도사업용, 학교용, 가정용 및 먹는샘물용·온천용 등은 지하수이용부담금 부과대상에서 제외되어, 실제로 부담금을 징수할 수 있는 대상은 일반생활용 및 공업용 지하수에 불과함<br>- 특히 전체 지하수 이용량의 절반을 넘는 농업용 지하수(51.7%)가 지하수이용부담금을 면제받고 있어, 재원을 조성하기 어려운 구조임 <br>○ 즉 지하수이용부담금은 현실을 반영하지 못한 수준으로 낮게 책정되어 있고, 징수대상도 일부 시설로 한정되어 있어 지하수 관련 업무 수행을 위한 재원 마련에 한계가 있음 - 통일헌법의 근본 원리를 자유민주적 기본 질서라고 할 때 자유민주적 기본 질서를 어떻게 이해하느냐가 중요한 과제로 등장한다. 헌법재판소(이하, ‘헌재’라고 함)는 2001년 “국가권력의 간섭을 배제하고, 개인의 자유와 창의를 존중하며 다양성을 포용하는 자유주의와 국가권력이 국민에게 귀속되고, 국민에 의한 지배가 이루어지는 것을 내용적 특징으로 하는 민주주의가 결합된 개념인 자유민주주의”라고 결정하였다. 즉, 자유민주적 기본 질서를 자유주의와 민주주의의 결합이라고 본 것이다. 한편, 과도기 이중체제에서 사회보장제도를 설계함에 있어서 자유민주주의를 복지국가의 원리와 어떻게 결합시킬지는 논의의 시발점이다. 헌재는 복지국가(사회국가)에 대해서 “사회 정의의 이념을 헌법에 수용한 국가, 사회 현상에 대하여 방관적인 국가가 아니라 경제・사회・문화의 모든 영역에서 정의로운 사회 질서의 형성을 위하여 사회 현상에 관여하고 간섭하고 분배하고 조정하는 국가”로 정의하고 있다. 또한 복지국가는 “궁극적으로는 국민 각자가 실제로 자유를 행사할 수 있는 그 실질적 조 건을 마련해 줄 의무가 있는 국가”라고 결정하였다. - source_sentence: 1970년대 경제위기 상황으로 사회복지가 위기를 맞으며 사회적 경제 운동이 일어나1990년대 후반부터 사회적 기업이 시작된 나라는 어디야? sentences: - 제2차 세계대전 이후 1950년대와 1960년대 거치면서 경제성장을 기반으로 정부지출의 지속적인 증가와 복지에 대한 사회적 합의는 다수 산업의 국유화와 그에 따른 공공부분의 확대, 사회복지의 확대를 가능하게 하였다. 그러나 1970년대 경제위기 상황은 사회복지의 위기를 가져왔고 1980년대의 경기침체는 더 이상 복지지출의 확대를 허락하지 않는 ‘외부충격’이 있었다. 현대적 의미에서 사회적 기업은 1970년대부터의 노동자 협동조합, 신용조합, 지역사회 상점(community shop), 개발신탁, 지역사회 비즈니스 운동, 노동통합(work integration) 운동 등 ‘사회적 경제’ 운동에서 시작하였다고 한다. 영국 사회에 나타난 이와 같은 일련의 사건들은 복지국가 위기로 인식되었다. 한편으로는 이러한 사건들이 이전 18세기부터 발달해 왔던 협동조합, 상호공제조합, 자선단체와 같은 활동의 역할이 더욱 중요하게 부각되는 계기가 되기도 하였다. 영국에서는 1990년대 후반부터 이루어진 노동당의 집권이 현대적인 의미의 사회적 경제와 사회적 기업의 발전, 나아가 제도화에 큰 영향을 주었다. - 국토해양부는 2006년 부산·경남지역의 11개 컨테이너 전용터미널에 RFID/USN 기반의 게이트자동화시스템을 구축하였고, 2007년에는 광양·인천지역 등 10개소에 확대 구축하여 전국 컨테이너 항만에도 확대하여 지능형 u-Port 시스템을 구축하였다. 하지만 2006년에 부착한 13,500여 개의 컨테이너 차량용 태그 인식율이 93% 수준에 그쳐, 이를 보완하고 3,000여 대에 추가 부착하여 고도화하기 위해 2단계 사업인 “컨테 이너터미널 RFID 기반 게이트 자동화시스템 고도화 사업”을 실시하였다. ㈜KL-Net이 2009년 8월부터 12월까지 2단계 사업을 주관하였으며, 차량 RFID 시스템 국제표준화, 컨테이너 RFID 시스템 재배치, 온라인 전자태그 발급체제 구축, 전자태그 성능강화 표준화를 추진하였다. - 'Ⅰ. 서론 최근 일부 국가에서 2008년 글로벌 금융위기를 겪으면서 사회적경제의 역할과 기능에 대하여 전반적인 관심이 높아지면서 사회적경제의 활성화가 여러 국가들이 직면한 사회적・경제적 문제의 해결에 기여할 것이라는 사회적 공감대가 형성되었다 (권재열, 2015). 이에 스페인, 멕시코, 에콰도르, 포르투갈, 프랑스와 캐나다의 퀘벡주 등에서 사회적경제기본법이 제정되어 시행되고 있다. 각국의 사회적경제기본법은 사회적경제의 정체성 규정을 위한 법적 틀을 제공하고, 사회적경제에 대한 포괄적인 지원 및 촉진 정책을 제공하고 있다.' --- # SentenceTransformer based on BAAI/bge-m3 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs128-finance-book-science-215") # Run inference sentences = [ '1970년대 경제위기 상황으로 사회복지가 위기를 맞으며 사회적 경제 운동이 일어나1990년대 후반부터 사회적 기업이 시작된 나라는 어디야?', '제2차 세계대전 이후 1950년대와 1960년대 거치면서 경제성장을 기반으로 정부지출의 지속적인 증가와 복지에 대한 사회적 합의는 다수 산업의 국유화와 그에 따른 공공부분의 확대, 사회복지의 확대를 가능하게 하였다. 그러나 1970년대 경제위기 상황은 사회복지의 위기를 가져왔고 1980년대의 경기침체는 더 이상 복지지출의 확대를 허락하지 않는 ‘외부충격’이 있었다. 현대적 의미에서 사회적 기업은 1970년대부터의 노동자 협동조합, 신용조합, 지역사회 상점(community shop), 개발신탁, 지역사회 비즈니스 운동, 노동통합(work integration) 운동 등 ‘사회적 경제’ 운동에서 시작하였다고 한다. 영국 사회에 나타난 이와 같은 일련의 사건들은 복지국가 위기로 인식되었다. 한편으로는 이러한 사건들이 이전 18세기부터 발달해 왔던 협동조합, 상호공제조합, 자선단체와 같은 활동의 역할이 더욱 중요하게 부각되는 계기가 되기도 하였다. 영국에서는 1990년대 후반부터 이루어진 노동당의 집권이 현대적인 의미의 사회적 경제와 사회적 기업의 발전, 나아가 제도화에 큰 영향을 주었다.', 'Ⅰ. 서론\n최근 일부 국가에서 2008년 글로벌 금융위기를 겪으면서 사회적경제의 역할과 기능에 대하여 전반적인 관심이 높아지면서 사회적경제의 활성화가 여러 국가들이 직면한 사회적・경제적 문제의 해결에 기여할 것이라는 사회적 공감대가 형성되었다 (권재열, 2015). 이에 스페인, 멕시코, 에콰도르, 포르투갈, 프랑스와 캐나다의 퀘벡주 등에서 사회적경제기본법이 제정되어 시행되고 있다. 각국의 사회적경제기본법은 사회적경제의 정체성 규정을 위한 법적 틀을 제공하고, 사회적경제에 대한 포괄적인 지원 및 촉진 정책을 제공하고 있다.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 128 - `learning_rate`: 3e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.05 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0047 | 1 | 1.2972 | | 0.0094 | 2 | 1.7591 | | 0.0142 | 3 | 1.5857 | | 0.0189 | 4 | 1.3732 | | 0.0236 | 5 | 1.4174 | | 0.0283 | 6 | 1.4117 | | 0.0330 | 7 | 1.2482 | | 0.0377 | 8 | 1.4429 | | 0.0425 | 9 | 1.1965 | | 0.0472 | 10 | 0.9934 | | 0.0519 | 11 | 0.8505 | | 0.0566 | 12 | 0.7532 | | 0.0613 | 13 | 0.7257 | | 0.0660 | 14 | 0.5238 | | 0.0708 | 15 | 0.4538 | | 0.0755 | 16 | 0.4524 | | 0.0802 | 17 | 0.4026 | | 0.0849 | 18 | 0.4288 | | 0.0896 | 19 | 0.3547 | | 0.0943 | 20 | 0.3552 | | 0.0991 | 21 | 0.2845 | | 0.1038 | 22 | 0.3171 | | 0.1085 | 23 | 0.2699 | | 0.1132 | 24 | 0.2905 | | 0.1179 | 25 | 0.2627 | | 0.1226 | 26 | 0.268 | | 0.1274 | 27 | 0.2205 | | 0.1321 | 28 | 0.2374 | | 0.1368 | 29 | 0.2653 | | 0.1415 | 30 | 0.2517 | | 0.1462 | 31 | 0.2145 | | 0.1509 | 32 | 0.1949 | | 0.1557 | 33 | 0.1515 | | 0.1604 | 34 | 0.214 | | 0.1651 | 35 | 0.213 | | 0.1698 | 36 | 0.1739 | | 0.1745 | 37 | 0.1588 | | 0.1792 | 38 | 0.184 | | 0.1840 | 39 | 0.1921 | | 0.1887 | 40 | 0.1662 | | 0.1934 | 41 | 0.1844 | | 0.1981 | 42 | 0.1891 | | 0.2028 | 43 | 0.1456 | | 0.2075 | 44 | 0.1564 | | 0.2123 | 45 | 0.131 | | 0.2170 | 46 | 0.1636 | | 0.2217 | 47 | 0.1528 | | 0.2264 | 48 | 0.1491 | | 0.2311 | 49 | 0.1432 | | 0.2358 | 50 | 0.1399 | | 0.2406 | 51 | 0.1683 | | 0.2453 | 52 | 0.1757 | | 0.25 | 53 | 0.1622 | | 0.2547 | 54 | 0.1649 | | 0.2594 | 55 | 0.1184 | | 0.2642 | 56 | 0.1472 | | 0.2689 | 57 | 0.146 | | 0.2736 | 58 | 0.1387 | | 0.2783 | 59 | 0.1527 | | 0.2830 | 60 | 0.1333 | | 0.2877 | 61 | 0.1349 | | 0.2925 | 62 | 0.2007 | | 0.2972 | 63 | 0.1548 | | 0.3019 | 64 | 0.165 | | 0.3066 | 65 | 0.1239 | | 0.3113 | 66 | 0.1164 | | 0.3160 | 67 | 0.1734 | | 0.3208 | 68 | 0.1281 | | 0.3255 | 69 | 0.1195 | | 0.3302 | 70 | 0.1461 | | 0.3349 | 71 | 0.1363 | | 0.3396 | 72 | 0.1081 | | 0.3443 | 73 | 0.1532 | | 0.3491 | 74 | 0.1549 | | 0.3538 | 75 | 0.1409 | | 0.3585 | 76 | 0.1396 | | 0.3632 | 77 | 0.0858 | | 0.3679 | 78 | 0.121 | | 0.3726 | 79 | 0.138 | | 0.3774 | 80 | 0.1334 | | 0.3821 | 81 | 0.1235 | | 0.3868 | 82 | 0.1167 | | 0.3915 | 83 | 0.1745 | | 0.3962 | 84 | 0.1201 | | 0.4009 | 85 | 0.1277 | | 0.4057 | 86 | 0.1089 | | 0.4104 | 87 | 0.1117 | | 0.4151 | 88 | 0.11 | | 0.4198 | 89 | 0.1604 | | 0.4245 | 90 | 0.1312 | | 0.4292 | 91 | 0.1368 | | 0.4340 | 92 | 0.1338 | | 0.4387 | 93 | 0.1464 | | 0.4434 | 94 | 0.1442 | | 0.4481 | 95 | 0.1281 | | 0.4528 | 96 | 0.1296 | | 0.4575 | 97 | 0.151 | | 0.4623 | 98 | 0.1297 | | 0.4670 | 99 | 0.1142 | | 0.4717 | 100 | 0.119 | | 0.4764 | 101 | 0.0956 | | 0.4811 | 102 | 0.1049 | | 0.4858 | 103 | 0.1294 | | 0.4906 | 104 | 0.1102 | | 0.4953 | 105 | 0.1172 | | 0.5 | 106 | 0.1523 | | 0.5047 | 107 | 0.0919 | | 0.5094 | 108 | 0.1101 | | 0.5142 | 109 | 0.1191 | | 0.5189 | 110 | 0.1104 | | 0.5236 | 111 | 0.0942 | | 0.5283 | 112 | 0.1058 | | 0.5330 | 113 | 0.1328 | | 0.5377 | 114 | 0.1122 | | 0.5425 | 115 | 0.1156 | | 0.5472 | 116 | 0.1123 | | 0.5519 | 117 | 0.0909 | | 0.5566 | 118 | 0.1083 | | 0.5613 | 119 | 0.1142 | | 0.5660 | 120 | 0.1192 | | 0.5708 | 121 | 0.1088 | | 0.5755 | 122 | 0.1289 | | 0.5802 | 123 | 0.1407 | | 0.5849 | 124 | 0.1065 | | 0.5896 | 125 | 0.1016 | | 0.5943 | 126 | 0.1389 | | 0.5991 | 127 | 0.1212 | | 0.6038 | 128 | 0.1139 | | 0.6085 | 129 | 0.1055 | | 0.6132 | 130 | 0.0921 | | 0.6179 | 131 | 0.0958 | | 0.6226 | 132 | 0.1019 | | 0.6274 | 133 | 0.0967 | | 0.6321 | 134 | 0.1041 | | 0.6368 | 135 | 0.1007 | | 0.6415 | 136 | 0.1662 | | 0.6462 | 137 | 0.0853 | | 0.6509 | 138 | 0.1189 | | 0.6557 | 139 | 0.1077 | | 0.6604 | 140 | 0.12 | | 0.6651 | 141 | 0.1352 | | 0.6698 | 142 | 0.0953 | | 0.6745 | 143 | 0.1173 | | 0.6792 | 144 | 0.1082 | | 0.6840 | 145 | 0.1283 | | 0.6887 | 146 | 0.0978 | | 0.6934 | 147 | 0.1187 | | 0.6981 | 148 | 0.1247 | | 0.7028 | 149 | 0.126 | | 0.7075 | 150 | 0.0955 | | 0.7123 | 151 | 0.1085 | | 0.7170 | 152 | 0.0883 | | 0.7217 | 153 | 0.1042 | | 0.7264 | 154 | 0.1241 | | 0.7311 | 155 | 0.0797 | | 0.7358 | 156 | 0.1305 | | 0.7406 | 157 | 0.1022 | | 0.7453 | 158 | 0.097 | | 0.75 | 159 | 0.108 | | 0.7547 | 160 | 0.1111 | | 0.7594 | 161 | 0.13 | | 0.7642 | 162 | 0.1048 | | 0.7689 | 163 | 0.1109 | | 0.7736 | 164 | 0.0777 | | 0.7783 | 165 | 0.081 | | 0.7830 | 166 | 0.1077 | | 0.7877 | 167 | 0.1025 | | 0.7925 | 168 | 0.137 | | 0.7972 | 169 | 0.0822 | | 0.8019 | 170 | 0.0976 | | 0.8066 | 171 | 0.1229 | | 0.8113 | 172 | 0.1434 | | 0.8160 | 173 | 0.1146 | | 0.8208 | 174 | 0.1186 | | 0.8255 | 175 | 0.1261 | | 0.8302 | 176 | 0.0798 | | 0.8349 | 177 | 0.0911 | | 0.8396 | 178 | 0.1376 | | 0.8443 | 179 | 0.104 | | 0.8491 | 180 | 0.1152 | | 0.8538 | 181 | 0.139 | | 0.8585 | 182 | 0.0994 | | 0.8632 | 183 | 0.0982 | | 0.8679 | 184 | 0.1182 | | 0.8726 | 185 | 0.086 | | 0.8774 | 186 | 0.0968 | | 0.8821 | 187 | 0.1048 | | 0.8868 | 188 | 0.1447 | | 0.8915 | 189 | 0.1069 | | 0.8962 | 190 | 0.1402 | | 0.9009 | 191 | 0.1004 | | 0.9057 | 192 | 0.1 | | 0.9104 | 193 | 0.0829 | | 0.9151 | 194 | 0.102 | | 0.9198 | 195 | 0.1025 | | 0.9245 | 196 | 0.107 | | 0.9292 | 197 | 0.0918 | | 0.9340 | 198 | 0.0875 | | 0.9387 | 199 | 0.1056 | | 0.9434 | 200 | 0.0833 | | 0.9481 | 201 | 0.1141 | | 0.9528 | 202 | 0.0882 | | 0.9575 | 203 | 0.0938 | | 0.9623 | 204 | 0.1121 | | 0.9670 | 205 | 0.1146 | | 0.9717 | 206 | 0.0994 | | 0.9764 | 207 | 0.0884 | | 0.9811 | 208 | 0.0895 | | 0.9858 | 209 | 0.1013 | | 0.9906 | 210 | 0.0885 | | 0.9953 | 211 | 0.142 | | 1.0 | 212 | 0.0918 | | 1.0047 | 213 | 0.0989 | | 1.0094 | 214 | 0.1417 | | 1.0142 | 215 | 0.1095 | </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.3.1+cu121 - Accelerate: 1.1.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on BAAI/bge-m3 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs128-finance-book-science-215") # Run inference sentences = [ '1970년대 경제위기 상황으로 사회복지가 위기를 맞으며 사회적 경제 운동이 일어나1990년대 후반부터 사회적 기업이 시작된 나라는 어디야?', '제2차 세계대전 이후 1950년대와 1960년대 거치면서 경제성장을 기반으로 정부지출의 지속적인 증가와 복지에 대한 사회적 합의는 다수 산업의 국유화와 그에 따른 공공부분의 확대, 사회복지의 확대를 가능하게 하였다. 그러나 1970년대 경제위기 상황은 사회복지의 위기를 가져왔고 1980년대의 경기침체는 더 이상 복지지출의 확대를 허락하지 않는 ‘외부충격’이 있었다. 현대적 의미에서 사회적 기업은 1970년대부터의 노동자 협동조합, 신용조합, 지역사회 상점(community shop), 개발신탁, 지역사회 비즈니스 운동, 노동통합(work integration) 운동 등 ‘사회적 경제’ 운동에서 시작하였다고 한다. 영국 사회에 나타난 이와 같은 일련의 사건들은 복지국가 위기로 인식되었다. 한편으로는 이러한 사건들이 이전 18세기부터 발달해 왔던 협동조합, 상호공제조합, 자선단체와 같은 활동의 역할이 더욱 중요하게 부각되는 계기가 되기도 하였다. 영국에서는 1990년대 후반부터 이루어진 노동당의 집권이 현대적인 의미의 사회적 경제와 사회적 기업의 발전, 나아가 제도화에 큰 영향을 주었다.', 'Ⅰ. 서론\n최근 일부 국가에서 2008년 글로벌 금융위기를 겪으면서 사회적경제의 역할과 기능에 대하여 전반적인 관심이 높아지면서 사회적경제의 활성화가 여러 국가들이 직면한 사회적・경제적 문제의 해결에 기여할 것이라는 사회적 공감대가 형성되었다 (권재열, 2015). 이에 스페인, 멕시코, 에콰도르, 포르투갈, 프랑스와 캐나다의 퀘벡주 등에서 사회적경제기본법이 제정되어 시행되고 있다. 각국의 사회적경제기본법은 사회적경제의 정체성 규정을 위한 법적 틀을 제공하고, 사회적경제에 대한 포괄적인 지원 및 촉진 정책을 제공하고 있다.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 128 - `learning_rate`: 3e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.05 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0047 | 1 | 1.2972 | | 0.0094 | 2 | 1.7591 | | 0.0142 | 3 | 1.5857 | | 0.0189 | 4 | 1.3732 | | 0.0236 | 5 | 1.4174 | | 0.0283 | 6 | 1.4117 | | 0.0330 | 7 | 1.2482 | | 0.0377 | 8 | 1.4429 | | 0.0425 | 9 | 1.1965 | | 0.0472 | 10 | 0.9934 | | 0.0519 | 11 | 0.8505 | | 0.0566 | 12 | 0.7532 | | 0.0613 | 13 | 0.7257 | | 0.0660 | 14 | 0.5238 | | 0.0708 | 15 | 0.4538 | | 0.0755 | 16 | 0.4524 | | 0.0802 | 17 | 0.4026 | | 0.0849 | 18 | 0.4288 | | 0.0896 | 19 | 0.3547 | | 0.0943 | 20 | 0.3552 | | 0.0991 | 21 | 0.2845 | | 0.1038 | 22 | 0.3171 | | 0.1085 | 23 | 0.2699 | | 0.1132 | 24 | 0.2905 | | 0.1179 | 25 | 0.2627 | | 0.1226 | 26 | 0.268 | | 0.1274 | 27 | 0.2205 | | 0.1321 | 28 | 0.2374 | | 0.1368 | 29 | 0.2653 | | 0.1415 | 30 | 0.2517 | | 0.1462 | 31 | 0.2145 | | 0.1509 | 32 | 0.1949 | | 0.1557 | 33 | 0.1515 | | 0.1604 | 34 | 0.214 | | 0.1651 | 35 | 0.213 | | 0.1698 | 36 | 0.1739 | | 0.1745 | 37 | 0.1588 | | 0.1792 | 38 | 0.184 | | 0.1840 | 39 | 0.1921 | | 0.1887 | 40 | 0.1662 | | 0.1934 | 41 | 0.1844 | | 0.1981 | 42 | 0.1891 | | 0.2028 | 43 | 0.1456 | | 0.2075 | 44 | 0.1564 | | 0.2123 | 45 | 0.131 | | 0.2170 | 46 | 0.1636 | | 0.2217 | 47 | 0.1528 | | 0.2264 | 48 | 0.1491 | | 0.2311 | 49 | 0.1432 | | 0.2358 | 50 | 0.1399 | | 0.2406 | 51 | 0.1683 | | 0.2453 | 52 | 0.1757 | | 0.25 | 53 | 0.1622 | | 0.2547 | 54 | 0.1649 | | 0.2594 | 55 | 0.1184 | | 0.2642 | 56 | 0.1472 | | 0.2689 | 57 | 0.146 | | 0.2736 | 58 | 0.1387 | | 0.2783 | 59 | 0.1527 | | 0.2830 | 60 | 0.1333 | | 0.2877 | 61 | 0.1349 | | 0.2925 | 62 | 0.2007 | | 0.2972 | 63 | 0.1548 | | 0.3019 | 64 | 0.165 | | 0.3066 | 65 | 0.1239 | | 0.3113 | 66 | 0.1164 | | 0.3160 | 67 | 0.1734 | | 0.3208 | 68 | 0.1281 | | 0.3255 | 69 | 0.1195 | | 0.3302 | 70 | 0.1461 | | 0.3349 | 71 | 0.1363 | | 0.3396 | 72 | 0.1081 | | 0.3443 | 73 | 0.1532 | | 0.3491 | 74 | 0.1549 | | 0.3538 | 75 | 0.1409 | | 0.3585 | 76 | 0.1396 | | 0.3632 | 77 | 0.0858 | | 0.3679 | 78 | 0.121 | | 0.3726 | 79 | 0.138 | | 0.3774 | 80 | 0.1334 | | 0.3821 | 81 | 0.1235 | | 0.3868 | 82 | 0.1167 | | 0.3915 | 83 | 0.1745 | | 0.3962 | 84 | 0.1201 | | 0.4009 | 85 | 0.1277 | | 0.4057 | 86 | 0.1089 | | 0.4104 | 87 | 0.1117 | | 0.4151 | 88 | 0.11 | | 0.4198 | 89 | 0.1604 | | 0.4245 | 90 | 0.1312 | | 0.4292 | 91 | 0.1368 | | 0.4340 | 92 | 0.1338 | | 0.4387 | 93 | 0.1464 | | 0.4434 | 94 | 0.1442 | | 0.4481 | 95 | 0.1281 | | 0.4528 | 96 | 0.1296 | | 0.4575 | 97 | 0.151 | | 0.4623 | 98 | 0.1297 | | 0.4670 | 99 | 0.1142 | | 0.4717 | 100 | 0.119 | | 0.4764 | 101 | 0.0956 | | 0.4811 | 102 | 0.1049 | | 0.4858 | 103 | 0.1294 | | 0.4906 | 104 | 0.1102 | | 0.4953 | 105 | 0.1172 | | 0.5 | 106 | 0.1523 | | 0.5047 | 107 | 0.0919 | | 0.5094 | 108 | 0.1101 | | 0.5142 | 109 | 0.1191 | | 0.5189 | 110 | 0.1104 | | 0.5236 | 111 | 0.0942 | | 0.5283 | 112 | 0.1058 | | 0.5330 | 113 | 0.1328 | | 0.5377 | 114 | 0.1122 | | 0.5425 | 115 | 0.1156 | | 0.5472 | 116 | 0.1123 | | 0.5519 | 117 | 0.0909 | | 0.5566 | 118 | 0.1083 | | 0.5613 | 119 | 0.1142 | | 0.5660 | 120 | 0.1192 | | 0.5708 | 121 | 0.1088 | | 0.5755 | 122 | 0.1289 | | 0.5802 | 123 | 0.1407 | | 0.5849 | 124 | 0.1065 | | 0.5896 | 125 | 0.1016 | | 0.5943 | 126 | 0.1389 | | 0.5991 | 127 | 0.1212 | | 0.6038 | 128 | 0.1139 | | 0.6085 | 129 | 0.1055 | | 0.6132 | 130 | 0.0921 | | 0.6179 | 131 | 0.0958 | | 0.6226 | 132 | 0.1019 | | 0.6274 | 133 | 0.0967 | | 0.6321 | 134 | 0.1041 | | 0.6368 | 135 | 0.1007 | | 0.6415 | 136 | 0.1662 | | 0.6462 | 137 | 0.0853 | | 0.6509 | 138 | 0.1189 | | 0.6557 | 139 | 0.1077 | | 0.6604 | 140 | 0.12 | | 0.6651 | 141 | 0.1352 | | 0.6698 | 142 | 0.0953 | | 0.6745 | 143 | 0.1173 | | 0.6792 | 144 | 0.1082 | | 0.6840 | 145 | 0.1283 | | 0.6887 | 146 | 0.0978 | | 0.6934 | 147 | 0.1187 | | 0.6981 | 148 | 0.1247 | | 0.7028 | 149 | 0.126 | | 0.7075 | 150 | 0.0955 | | 0.7123 | 151 | 0.1085 | | 0.7170 | 152 | 0.0883 | | 0.7217 | 153 | 0.1042 | | 0.7264 | 154 | 0.1241 | | 0.7311 | 155 | 0.0797 | | 0.7358 | 156 | 0.1305 | | 0.7406 | 157 | 0.1022 | | 0.7453 | 158 | 0.097 | | 0.75 | 159 | 0.108 | | 0.7547 | 160 | 0.1111 | | 0.7594 | 161 | 0.13 | | 0.7642 | 162 | 0.1048 | | 0.7689 | 163 | 0.1109 | | 0.7736 | 164 | 0.0777 | | 0.7783 | 165 | 0.081 | | 0.7830 | 166 | 0.1077 | | 0.7877 | 167 | 0.1025 | | 0.7925 | 168 | 0.137 | | 0.7972 | 169 | 0.0822 | | 0.8019 | 170 | 0.0976 | | 0.8066 | 171 | 0.1229 | | 0.8113 | 172 | 0.1434 | | 0.8160 | 173 | 0.1146 | | 0.8208 | 174 | 0.1186 | | 0.8255 | 175 | 0.1261 | | 0.8302 | 176 | 0.0798 | | 0.8349 | 177 | 0.0911 | | 0.8396 | 178 | 0.1376 | | 0.8443 | 179 | 0.104 | | 0.8491 | 180 | 0.1152 | | 0.8538 | 181 | 0.139 | | 0.8585 | 182 | 0.0994 | | 0.8632 | 183 | 0.0982 | | 0.8679 | 184 | 0.1182 | | 0.8726 | 185 | 0.086 | | 0.8774 | 186 | 0.0968 | | 0.8821 | 187 | 0.1048 | | 0.8868 | 188 | 0.1447 | | 0.8915 | 189 | 0.1069 | | 0.8962 | 190 | 0.1402 | | 0.9009 | 191 | 0.1004 | | 0.9057 | 192 | 0.1 | | 0.9104 | 193 | 0.0829 | | 0.9151 | 194 | 0.102 | | 0.9198 | 195 | 0.1025 | | 0.9245 | 196 | 0.107 | | 0.9292 | 197 | 0.0918 | | 0.9340 | 198 | 0.0875 | | 0.9387 | 199 | 0.1056 | | 0.9434 | 200 | 0.0833 | | 0.9481 | 201 | 0.1141 | | 0.9528 | 202 | 0.0882 | | 0.9575 | 203 | 0.0938 | | 0.9623 | 204 | 0.1121 | | 0.9670 | 205 | 0.1146 | | 0.9717 | 206 | 0.0994 | | 0.9764 | 207 | 0.0884 | | 0.9811 | 208 | 0.0895 | | 0.9858 | 209 | 0.1013 | | 0.9906 | 210 | 0.0885 | | 0.9953 | 211 | 0.142 | | 1.0 | 212 | 0.0918 | | 1.0047 | 213 | 0.0989 | | 1.0094 | 214 | 0.1417 | | 1.0142 | 215 | 0.1095 | </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.3.1+cu121 - Accelerate: 1.1.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-m3", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:451949", "loss:CachedGISTEmbedLoss"], "widget": [{"source_sentence": "타 지자체에 비해 압도적으로 귀어가구가 많은 것으로 집계된 지역은 어디야?", "sentences": ["나)금융상품판매전문업 도입의 목적\n□ 금융상품판매전문업의 도입은 다음 세 가지를 목적으로 함. \n― 첫째, 금융상품판매제도를 공급자 중심에서 수요자중심으로 전환. \n― 둘째, 판매조직의 다양화를 통한 금융산업의 효율성을 제고. \n― 셋째, 금융소비자의 입장에서 금융상품을 판매하는 판매전문조직의 육성을 통하여 금융소비자 보호제도의 실효성을 확보하기 위한 제도적 기초를 확보. □ 금융소비자의 입장을 고려하는 판매조직의 활성화를 통하여 금융소비자보호체계를 금융상품 제조자에 의한 “불완전판매˝(mis―selling)의 규제에서 금융소비자에 의한 ˝불완전구매˝(mis―buying)의 방지로의 전환을 위한 제도적 기초를 구축할 수 있을 것임.", "2. 최근의 귀농·귀촌 동향\n농림축산식품부·해양수산부·통계청이 지난 해 6월에 발표한 “귀농어·귀촌인 통계”에 따르면 2016년에 귀농·귀촌을 택해 농촌으로 이주한 도시가구는 33만 5,383가구(49만 6,048명)였다. 지난 1년간 5,860가구(17,156명) 늘어났다. 이 가운데 귀농가구는 전체의 3.8%인 12,875가구, 그리고 귀촌가구는 전체의 96.2%인 322,508가구였다. 귀농과 귀촌은 지역적으로 차이를 나타내 귀농의 경우 2016년 경북이 2,323가구(18.0%)로 가장 많고, 그 다음은 전남 1,923가구, 경남 1,631가구의 순인데 반해, 귀촌은 경기가 85,441가구(26.5%), 경남 37,732가구, 경북 37,261가구의 순이었다. 연령에도 차이를 보이고 있다. 귀농의 경우 50대가 39.9%로 가장 큰 비중을 차지하고, 그 다음은 60대 25.4%로 50대 이상 고령층이 71.3%이고, 40대 이하 젊은 층은 28.7%에 불과하였다. 반면에 귀촌의 경우에는 40대 이하가 63.9%이고, 50대 이상은 36.1%로 귀농과 달리 젊은 층이 큰 비중을 차지하였다.", "농림축산식품부・해양수산부・통계청이 공동으로 작성・발간한 “2015년 귀농어귀촌인 통계”에 따르면 2015년 귀어가구는 총 991가구로 2013년 650구, 2014년 917가구에서 매년 증가 추세이다. 991개 귀어가구 중 1인 가구가 전체 가구의 70.4%(698가구)이며, 2인 가구가 19.1%(189가구), 3인 가구가 4.8%(57가구), 4인 이상 가구가 4.7%(47가구)이며, 가구당 평균 귀어가구원수는 1.46명으로 나타났다. 지역별로는 전남이 343가구(34.6%), 충남이 340가구(34.3%)로 타 지자체에 비해 압도적으로 귀어가구가 많은 것으로 집계되었다. 이들 지역의 귀어가구가 많은 것은 전남의 경우 어선어업과 양식업 세력이 많은 지역적 특성이 귀어 확대로 연결된 것으로 보인다. 충남의 경우 수도권에서 가까워 지리적 접근성이 용이하고, 어촌체험마을, 낚시어선업 등의 활성화로 귀어인들이 타 지자체에 비해 상대적으로 선호도가 높은 것으로 판단된다."]}, {"source_sentence": "고전 음악에 있어서 세계적 메카라 불릴 만큼 찬란한 유산과 전통을 지닌 나라는 어디야?", "sentences": ["이번 한・러 정상회담에서는 또한 신정부가 추진하는 ‘한반도 신경제지도 구상’의 구체화 측면에서도 의미 있는 성과를 도출했다. 양국은 북한 참여를 통한 남~북~러 3각 협력 사업의 이행 의지를 재천명했으며, 양 정상은 3각 협력 사업이 경제적 호혜뿐만 아니라 한반도의 신뢰와 안정을 구축하는 데에도 기여할 것이란 점을 재확인했다. 위와 같은 한・러 정상회담의 성과를 가시화하기 위해 후속조치로 이번 동방경제포럼에서 제안된 ‘9 브릿지(9 Bridges)’ 전략을 해양수산 분야에서 구체화할 필요가 있다. 우선 3항(부산항, 나진항, 블라디보스토크항), 3거점(훈춘, 나데즈딘스카야, 하바롭스크), 3통로 (부산항을 공통 거점으로한 ① 나진항~훈춘 해상루트, ② 나진~핫산 철도루트, ③ 블라디보스토크~하바롭스크의 해상루트와 TKR~TSR 철도 연결루트)의 ‘3~3~3’ 전략을 적극 추진할 필요가 있다.", "슬로바키아 국민은 전통음악에 대한 자부심이 강하며, 음악의 기원은 슬라브족과 유럽인들의 민속음악이며 15~16세기 종교음악의 기원인 모라비아 왕국의 예배식에서 전례를 찾을 수 있다. 음악은 종교음악과 실내악의 영향을 크게 받았으며, 민속음악은 헝가리 통치하에서 슬로바키아 언어를 보존하는 데 큰 힘이 되었다. 전통악기로는 목관피리 종류인 푸야라(Fujara), 백파이프인 가이디(Gajdy), 양치기 플루트인 콘코우카(Konkovka) 등이 있다. 목공예, 도자기공예, 유리공예 등 전통 민속예술 중 특히 14세기부터 모드라지역의 ‘Majolika’ 도자기공예가 유명하다. 부드러운 곡선과 노란색, 파란색을 이용해 생활필수품을 만든다. 또한 목재 교회 건축물은 슬로바키아만의 독특한 건축양식이다. 전국 각지에서 민속 음악가들과 무용수들이 한자리에 모이는 전통축제인 비호드나 민속 축제(Východná Folklore Festival), 폴라냐 민속 축제(Folklórne Slávnosti pod Poľanou)는 7~8월경에 열린다.", "오스트리아는 하이든, 모차르트, 베토벤, 슈베르트, 브람스, 브루크너, 리스트 등 유수한 세계적 음악가들의 출생지 또는 활동 무대로 유명하며, 고전 음악에 있어서는 세계적 메카라 불릴 만큼 찬란한 유산 및 전통을 지니고 있다. 매년 6월의 빈 음악제, 7~8월의 잘츠부르크와 브레겐츠 음악제, 8월의 레흐, 9월의 린츠 브루크너 음악제 등 전국 각지에서 연중 음악제가 개최된다. 오페라는 Staatsoper, Volksoper, Burgtheater를 중심으로 한 빈 음악의 상징이라 할 수 있으며, Staatsoper는 Franz Josef 1세에 의해 1869년 개관된 유럽 3대 오페라 극장 중 하나로 좌석 2,500여 석, 연중 300여 작품이 공연되며 종사 인원도 5천여 명에 이른다. 빈 필하모닉 오케스트라가 오페라 연주를 담당하는 것으로도 유명하다. 빈 필하모닉 오케스트라는 구스타프 말러, 리하르트 슈트라우스, 헤르베르트 폰 카라얀으로 이어지면서 세계적인 오케스트라로 자리 잡았다."]}, {"source_sentence": "서로 다른 조직들로 통합된 조직의 상승 효과를 어떤 활동으로 단기간에 만들 수 있을까?", "sentences": ["첫째, 대상 기관 간에 상호 협력이 필수 조건이 된다는 인식의 형성은 성공적인 통합을 위해 매우 중요한 요인이 된다. 따라서 통합을 본격적으로 추진하기 전 단계에서 해당 기관 관계자들에게 왜 통합을 추진하며, 통합이 주는 성과는 무엇인지를 명확하게 설명해 주고 공감대를 형성하는 일이 중요하다. 둘째, 통합을 자발적으로 할 것인지 또는 외부 요인에 따라 할 것인지를 결정하여 추진할 필요가 있다. 이와 관련된 통합 추진방식에는 ‘하향식(Top-down)’ 추진방식과 ‘상향식(Bottom-up)’ 추진방식이 있다. 전자는 정부가 주도하는 방식이고, 후자는 대상 기관들이 자율적으로 추진하는 방식이다. Skodvin(1999 : 70)에 따르면, 하향식 방식은 대부분 국가에서 활용하는 가장 일반적인 사례이지만 종종 정책 담당자와 고등교육기관 관계자 간에 상당한 긴장과 갈등을 유발한다. 이에 비해 일부 국가에서 적용한 상향식 방식은 순조롭고 성공적이었다. 이들 국가에서 상향식 방식을 활용하는 이유는 통합 대상 기관에서 일하는 당사자들 간의 합의(consensus)를 이끌어내기가 쉽고, 통합 후에는 이들 간에 공유된 정체성을 확보하기가 용이하기 때문이다.", "<h1>IV. 실험 결과</h1> <p>제안하는 방법의 성능을 분석하기 위하여 기존에 균일한 도트 분포를 얻기 위해 제안된 세 가지 방법들과 비교 실험을 수행하였다. 먼저 제안하는 방법에 의해 얻어진 이진 영상에서 소수화소들 간의 거리가 주거리와 얼마나 일치하는지를 실험하였다. 이 실험의 목적은 균일한 도트 분포의 정도를 수치적으로 평가하기 위한 것이다. 128부터 255의 계조 각각에 대하여 \\( 256 \\times 256 \\)의 크기를 갖는 단일 계조 영상을 생성하였다. 각각의 단일 계조 영상을 제안하는 방법과 기존의 세 가지 방법에 의해 이진화 하였다. 즉, 네 가지 방법에 대하여 각각 128부터 255까지의 128개의 이진 영상들이 만들어졌다. Marcu의 방법을 적용할 때는[17]에 정의된 상위 9 라인의 검색 영역 및 순서에 준하여 이진화를 수행하였다. 모든 영상(128개의 영상 4가지 방법 = 512개의 영상)에 대하여 아래의 식 (11)에 정의된 \\( D_{x} \\)를 계산하였다. \\ ( D_{x} \\)는 이진 영상내에 존재하는 모든 소수화소들간의 최소거리와 주거리의 차이의 제곱을 평균한 것으로서 이진 영상에서 소수화소들간의 거리가 주거리와 얼마나 일치하는가를 나타내는 척도이다. </p> <p>\\( D_{x}=\\frac{1}{N_{x}} \\sum_{k=1}^{N_{x}}\\left\\{d_{\\min }^{B}(k)-\\lambda_{x}\\right\\}^{2} \\)<caption>(11)</caption></p> <p>여기서 \\( N_{x} \\)는 \\( x \\)라는 단일 계조를 나타내는 이진영상에서의 소수 화소들의 개수를 나타낸다. \\ ( d_{\\min }^{B}(k) \\)는 이진영상 내의 \\( k \\)번째 소수화소에 대한 최소화소거리를 나타낸다. 즉, 주어진 소수화소와 가장 가까운 소수화소간의 거리를 의미한다. 이상적으로 소수화소들간의 최소거리는 주거리를 만족하여야 한다. 따라서, 식 (11)에서는 \\( d_{\\text {min }}^{B}(k) \\)와 주거리 \\( x \\)의 차이를 제곱하여 평균하였다. 식 (11)의 \\( D_{x} \\)를 128과 255사이의 계조 각각에 대하여 계산하였으며 이 과정을 4가지 방법에 대해 동일하게 수행하였다. </p> <p>그림 5는 4가지 방법에 대한 계조별 \\( D_{x} \\)값을 나타낸다. 참고로 0부터 127까지의 \\( D_{x} \\)값은 128에 대하여 대칭이다. 도트의 균일한 분포 정도는 밝은 계조(highlight)나 어두운 계조(shadow)에서 인간 시각에 민감하게 인식된다. 그림 5에서 보듯이 기존의 세 가지 방법들은 밝은 계조에서 제안하는 방법에 비해 상대적으로 큰 \\( D_{x} \\)값을 나타내는 것을 알 수 있다. \\ ( D_{x} \\)가 큰 값을 나타낸다는 것은 이진 영상에서 소수화소들의 공백(void)이나 밀집(cluster)영역이 존재함을 의미한다. </p> <p>표 2는 그림 5에서 계조별로 계산된 결과를 평균한 값들을 나타낸다. 표 2에서 보듯이 제안하는 방법에서 소수화소들간의 거리가 주거리를 가장 잘 나타냄을 확인할 수 있다. </p> <table border><caption>표 2. 네 가지 방법에 대한 계조별 \\( \\mathrm{Dx} \\)값의 평균</caption> <tbody><tr><td>방법</td><td>Eschbach</td><td>Karg</td><td>MarcuI</td><td>제안하는 방법</td></tr><tr><td>\\( E\\left[D_{x}\\right] \\)</td><td>0.5940</td><td>0.3347</td><td>0.7476</td><td>0.2439</td></tr></tbody></table>", "이러한 조직진단이 정부 및 공공부문에 본격적으로 도입된 것은 김대중 대통령 때로, 경영진단 등의 이름으로 정부조직에 대한 진단 등이 이루어졌으며 민간의 컨설팅 회사들이 정부 부처 및 공공기관에 대한 진단을 수행하였다. 김대중 정부 이후로는 공공부문에서의 조직진단도 더 이상 낯선 제도가 아니며, 노무현 정부 당시에는 행정자치부에 이를 전담하는 정부조직이 한시적으로 별도로 존재하였으며 당시 정부에서 조직진단 관련 매뉴얼도 발간하였다. 조직진단 및 이를 바탕으로 한 조직 개편 사례들은 오늘날까지도 정부 부처와 공공부문에서 상당히 빈번하게 이루어지고 있다. 한편 조직융합관리(PMI)는 “업무방식과 조직문화 등 제반 환경이 서로 다른 조직들로 통합된 조직이 시너지 효과를 단기간 내에 창출할 수 있도록 통합 후 체계적으로 관리하는 활동”이다(행정안전부, 2008: 2). 조직 통합이나 개편이라는 물리적인 현상보다 실질적으로 조직 내부에서 통합과 융합이 이루어지는가를 분석하고 이를 위한 관리기법을 제안하는 분야가 조직융합관리(PMI)이다."]}, {"source_sentence": "무엇이 자유민주적 기본질서처럼 자유주의와 민주주의 사상을 따르고 있지?", "sentences": ["한편 ‘자유민주적 기본질서’는 어의상 자유주의와 민주주의를 토대로 하는 기본질서를 뜻하는데, 공산주의이념 역시 자유주의와 민주주의를 추구한다는 점에서 일견 마르크스주의(공산주의)를 배격하지 않는 것으로 비춰질 수도 있다. 그러나 헌법재판소는 현행 국가보안법상 처벌되는 ‘자유민주적 기본질서에 위해를 끼치는 행위’에 관하여 “모든 폭력적 지배와 자의적 지배 즉 반국가단체의 일인독재 내지 일당독재를 배제하고 다수의 의사에 의한 국민의 자치, 자유・평등의 기본원칙에 의한 법치주의적 통치질서의 유지를 어렵게 만드는 것으로서 구체적으로는 기본적 인권의 존중, 권력분립, 의회제도, 복수정당제도, 선거제도, 사유재산과 시장경제를 골간으로 한 경제질서 및 사법권의 독립 등 우리의 내부체재를 파괴・변혁시키려는 것”으로 폭넓게 정의내리고 있다.", "□ 지하수이용부담금의 경우, 전국 226개 지자체 중 84개 지자체(37.1%)만이 부과·징수하고 있어, 지하수의 체계적인 관리를 위한 재원이 부족한 상태임([표 13] 참조) <br>○ 지하수이용부담금을 부과·징수하는 지자체가 적은 이유는 지하수이용부담금 자체가 낮게 책정되어, 제도운영에 소요되는 인력 및 예산 등의 행정비용이 징수금을 초과하기 때문인 것으로 보임 <br>○ 특히 지하수의 용도 중 국방·군사용, 민방위용, 비상급수용, 농어업용, 상수도용, 일반수도사업용, 학교용, 가정용 및 먹는샘물용·온천용 등은 지하수이용부담금 부과대상에서 제외되어, 실제로 부담금을 징수할 수 있는 대상은 일반생활용 및 공업용 지하수에 불과함<br>- 특히 전체 지하수 이용량의 절반을 넘는 농업용 지하수(51.7%)가 지하수이용부담금을 면제받고 있어, 재원을 조성하기 어려운 구조임 <br>○ 즉 지하수이용부담금은 현실을 반영하지 못한 수준으로 낮게 책정되어 있고, 징수대상도 일부 시설로 한정되어 있어 지하수 관련 업무 수행을 위한 재원 마련에 한계가 있음", "통일헌법의 근본 원리를 자유민주적 기본 질서라고 할 때 자유민주적 기본 질서를 어떻게 이해하느냐가 중요한 과제로 등장한다. 헌법재판소(이하, ‘헌재’라고 함)는 2001년 “국가권력의 간섭을 배제하고, 개인의 자유와 창의를 존중하며 다양성을 포용하는 자유주의와 국가권력이 국민에게 귀속되고, 국민에 의한 지배가 이루어지는 것을 내용적 특징으로 하는 민주주의가 결합된 개념인 자유민주주의”라고 결정하였다. 즉, 자유민주적 기본 질서를 자유주의와 민주주의의 결합이라고 본 것이다. 한편, 과도기 이중체제에서 사회보장제도를 설계함에 있어서 자유민주주의를 복지국가의 원리와 어떻게 결합시킬지는 논의의 시발점이다. 헌재는 복지국가(사회국가)에 대해서 “사회 정의의 이념을 헌법에 수용한 국가, 사회 현상에 대하여 방관적인 국가가 아니라 경제・사회・문화의 모든 영역에서 정의로운 사회 질서의 형성을 위하여 사회 현상에 관여하고 간섭하고 분배하고 조정하는 국가”로 정의하고 있다. 또한 복지국가는 “궁극적으로는 국민 각자가 실제로 자유를 행사할 수 있는 그 실질적 조 건을 마련해 줄 의무가 있는 국가”라고 결정하였다."]}, {"source_sentence": "1970년대 경제위기 상황으로 사회복지가 위기를 맞으며 사회적 경제 운동이 일어나1990년대 후반부터 사회적 기업이 시작된 나라는 어디야?", "sentences": ["제2차 세계대전 이후 1950년대와 1960년대 거치면서 경제성장을 기반으로 정부지출의 지속적인 증가와 복지에 대한 사회적 합의는 다수 산업의 국유화와 그에 따른 공공부분의 확대, 사회복지의 확대를 가능하게 하였다. 그러나 1970년대 경제위기 상황은 사회복지의 위기를 가져왔고 1980년대의 경기침체는 더 이상 복지지출의 확대를 허락하지 않는 ‘외부충격’이 있었다. 현대적 의미에서 사회적 기업은 1970년대부터의 노동자 협동조합, 신용조합, 지역사회 상점(community shop), 개발신탁, 지역사회 비즈니스 운동, 노동통합(work integration) 운동 등 ‘사회적 경제’ 운동에서 시작하였다고 한다. 영국 사회에 나타난 이와 같은 일련의 사건들은 복지국가 위기로 인식되었다. 한편으로는 이러한 사건들이 이전 18세기부터 발달해 왔던 협동조합, 상호공제조합, 자선단체와 같은 활동의 역할이 더욱 중요하게 부각되는 계기가 되기도 하였다. 영국에서는 1990년대 후반부터 이루어진 노동당의 집권이 현대적인 의미의 사회적 경제와 사회적 기업의 발전, 나아가 제도화에 큰 영향을 주었다.", "국토해양부는 2006년 부산·경남지역의 11개 컨테이너 전용터미널에 RFID/USN 기반의 게이트자동화시스템을 구축하였고, 2007년에는 광양·인천지역 등 10개소에 확대 구축하여 전국 컨테이너 항만에도 확대하여 지능형 u-Port 시스템을 구축하였다. 하지만 2006년에 부착한 13,500여 개의 컨테이너 차량용 태그 인식율이 93% 수준에 그쳐, 이를 보완하고 3,000여 대에 추가 부착하여 고도화하기 위해 2단계 사업인 “컨테 이너터미널 RFID 기반 게이트 자동화시스템 고도화 사업”을 실시하였다. ㈜KL-Net이 2009년 8월부터 12월까지 2단계 사업을 주관하였으며, 차량 RFID 시스템 국제표준화, 컨테이너 RFID 시스템 재배치, 온라인 전자태그 발급체제 구축, 전자태그 성능강화 표준화를 추진하였다.", "Ⅰ. 서론\n최근 일부 국가에서 2008년 글로벌 금융위기를 겪으면서 사회적경제의 역할과 기능에 대하여 전반적인 관심이 높아지면서 사회적경제의 활성화가 여러 국가들이 직면한 사회적・경제적 문제의 해결에 기여할 것이라는 사회적 공감대가 형성되었다 (권재열, 2015). 이에 스페인, 멕시코, 에콰도르, 포르투갈, 프랑스와 캐나다의 퀘벡주 등에서 사회적경제기본법이 제정되어 시행되고 있다. 각국의 사회적경제기본법은 사회적경제의 정체성 규정을 위한 법적 틀을 제공하고, 사회적경제에 대한 포괄적인 지원 및 촉진 정책을 제공하고 있다."]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,703
LoneStriker/medicine-LLM-13B-4.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "biology", "medical", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:EleutherAI/pile", "arxiv:2309.09530", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-01T20:23:54Z
2024-01-01T20:28:33+00:00
8
0
--- datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k - EleutherAI/pile language: - en metrics: - accuracy pipeline_tag: text-generation tags: - biology - medical --- # Adapt (Large) Language Models to Domains This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/medicine-LLM-13B) developed from LLaMA-1-13B. * 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/medicine-chat) developed from LLaMA-2-Chat-7B. * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), and [base models](https://huggingface.co/AdaptLLM/medicine-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the biomedicine model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is an example of monosomy? Options: - 46,XX - 47,XXX - 69,XYY - 45,X Please provide your choice first and then provide explanations if possible.''' # We use the prompt template of LLaMA-2-Chat demo prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @article{adaptllm, title = {Adapting Large Language Models via Reading Comprehension}, author = {Daixuan Cheng and Shaohan Huang and Furu Wei}, journal = {CoRR}, volume = {abs/2309.09530}, year = {2023} } ```
null
BioNLP
# Adapt (Large) Language Models to Domains This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/medicine-LLM-13B) developed from LLaMA-1-13B. * 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/medicine-chat) developed from LLaMA-2-Chat-7B. * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), and [base models](https://huggingface.co/AdaptLLM/medicine-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://hf.fast360.xyz/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the biomedicine model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False) # Put your input here: user_input = '''Question: Which of the following is an example of monosomy? Options: - 46,XX - 47,XXX - 69,XYY - 45,X Please provide your choice first and then provide explanations if possible.''' # We use the prompt template of LLaMA-2-Chat demo prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]" inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=4096)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @article{adaptllm, title = {Adapting Large Language Models via Reading Comprehension}, author = {Daixuan Cheng and Shaohan Huang and Furu Wei}, journal = {CoRR}, volume = {abs/2309.09530}, year = {2023} } ```
{"datasets": ["Open-Orca/OpenOrca", "GAIR/lima", "WizardLM/WizardLM_evol_instruct_V2_196k", "EleutherAI/pile"], "language": ["en"], "metrics": ["accuracy"], "pipeline_tag": "text-generation", "tags": ["biology", "medical"]}
task
[ "QUESTION_ANSWERING" ]
42,704
sentence-transformers/all-mpnet-base-v2
sentence-transformers
sentence-similarity
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "openvino", "mpnet", "fill-mask", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2025-03-06T13:35:43+00:00
34,579,411
1,014
--- datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
null
Non_BioNLP
# all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
{"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"]}
task
[ "QUESTION_ANSWERING" ]
42,705
hltcoe/plaidx-large-fas-tdist-t53b-engeng
hltcoe
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "clir", "colbertx", "plaidx", "xlm-roberta-large", "en", "fa", "dataset:ms_marco", "dataset:hltcoe/tdist-msmarco-scores", "arxiv:2201.08471", "arxiv:2401.04810", "license:mit", "endpoints_compatible", "region:us" ]
2023-12-28T01:20:27Z
2025-03-12T23:37:58+00:00
12
0
--- datasets: - ms_marco - hltcoe/tdist-msmarco-scores language: - en - fa license: mit tags: - clir - colbertx - plaidx - xlm-roberta-large task_categories: - text-retrieval - information-retrieval task_ids: - passage-retrieval - cross-language-retrieval --- # ColBERT-X for English-Persian CLIR using Translate-Distill ## CLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Persian - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval model through translation and distillation. `plaidx-large-fas-tdist-t53b-engeng` is trained with KL-Divergence from the t53b MonoT5 reranker inferenced on English MS MARCO training queries and English passages. ### Teacher Models: - `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k) - `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50) - per device batch size (number of query-passage set): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ## Usage To properly load ColBERT-X models from Huggingface Hub, please use the following version of PLAID-X. ```bash pip install PLAID-X==0.3.1 ``` Following code snippet loads the model through Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-fas-tdist-t53b-engeng', colbert_config=ColBERTConfig()) ``` For full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ``` ```bibtex @inproceedings{translate-distill, author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller}, title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation}, booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)}, year = {2024}, url = {https://arxiv.org/abs/2401.04810} } ```
null
Non_BioNLP
# ColBERT-X for English-Persian CLIR using Translate-Distill ## CLIR Model Setting - Query language: English - Query length: 32 token max - Document language: Persian - Document length: 180 token max (please use MaxP to aggregate the passage score if needed) ## Model Description Translate-Distill is a training technique that produces state-of-the-art CLIR dense retrieval model through translation and distillation. `plaidx-large-fas-tdist-t53b-engeng` is trained with KL-Divergence from the t53b MonoT5 reranker inferenced on English MS MARCO training queries and English passages. ### Teacher Models: - `t53b`: [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k) - `mt5xxl`: [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) ### Training Parameters - learning rate: 5e-6 - update steps: 200,000 - nway (number of passages per query): 6 (randomly selected from 50) - per device batch size (number of query-passage set): 8 - training GPU: 8 NVIDIA V100 with 32 GB memory ## Usage To properly load ColBERT-X models from Huggingface Hub, please use the following version of PLAID-X. ```bash pip install PLAID-X==0.3.1 ``` Following code snippet loads the model through Huggingface API. ```python from colbert.modeling.checkpoint import Checkpoint from colbert.infra import ColBERTConfig Checkpoint('hltcoe/plaidx-large-fas-tdist-t53b-engeng', colbert_config=ColBERTConfig()) ``` For full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb), which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial). ## BibTeX entry and Citation Info Please cite the following two papers if you use the model. ```bibtex @inproceedings{colbert-x, author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard}, title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models}, booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)}, year = {2022}, url = {https://arxiv.org/abs/2201.08471} } ``` ```bibtex @inproceedings{translate-distill, author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller}, title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation}, booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)}, year = {2024}, url = {https://arxiv.org/abs/2401.04810} } ```
{"datasets": ["ms_marco", "hltcoe/tdist-msmarco-scores"], "language": ["en", "fa"], "license": "mit", "tags": ["clir", "colbertx", "plaidx", "xlm-roberta-large"], "task_categories": ["text-retrieval", "information-retrieval"], "task_ids": ["passage-retrieval", "cross-language-retrieval"]}
task
[ "TRANSLATION" ]
42,706
Radiantloom/radintloom-mistral-7b-fusion
Radiantloom
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "SFT", "Mistral", "Mistral 7B Instruct", "conversational", "arxiv:2306.05685", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-19T08:16:36Z
2024-02-20T15:49:10+00:00
17
2
--- library_name: transformers license: apache-2.0 tags: - SFT - Mistral - Mistral 7B Instruct --- <img src="https://huggingface.co/Radiantloom/radintloom-mistral-7b-fusion/resolve/main/Radiantloom Mistral 7B Fusion.png" alt="Radiantloom Mistral 7B Fusion" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Radiantloom Mistral 7B Fusion The Radiantloom Mistral 7B Fusion, a large language model (LLM) developed by Radiantloom AI, features approximately 7 billion parameters that's a finetuned of a base model produced by merging a set of Mistral models. With a context length of 4096 tokens, this model is suitable for commercial use. From vibes-check evaluations, the Radiantloom Mistral 7B Fusion demonstrates great performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form. ## Intended Uses & Limitations The Radiantloom Mistral 7B Fusion is versatile and can be utilized for various text generation tasks such as summarization, chat, coding, question answering, retrieval augmented generation (RAG), role play, and content generation. While it may not be considered a state-of-the-art generative language model, it demonstrates competitive performance in general tasks when compared to other open and closed-source large language models such as OpenHermes-2.5-Mistral-7B, and Mistral Instruct v2.0. ## Model Usage You can try it out for free using this [notebook](https://www.kaggle.com/metheaigeek/radintloom-mistral-7b-fusion). For more powerful GPU usage and faster inference, you can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=70arqv4std) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs. ## Prompt Template We have fine-tuned this model using the ChatML format, and you can achieve optimal performance by utilizing the ChatML format. ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {user_message}<|im_end|> <|im_start|>assistant ``` ## Inference Code ```python ## install dependencies ## !pip install -q -U git+https://github.com/huggingface/transformers.git ## !pip install -q -U git+https://github.com/huggingface/accelerate.git ## !pip install -q -U bitsandbytes sentencepiece ## load model import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer model_name = "Radiantloom/radintloom-mistral-7b-fusion" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) # load the model and tokenizer model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained( model_name, trust_remote_code=True ) tokenizer.pad_token = tokenizer.eos_token ## inference prompt = """ <|im_start|>system You are a helpful AI assistant developed by Radiantloom AI.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.<|im_end|> <|im_start|>assistant """ # tokenize the input string inputs = tokenizer( prompt, return_tensors="pt", return_attention_mask=False ) # generate text using the model streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) outputs = model.generate(**inputs, max_length=2048, streamer=streamer, do_sample=True, repetition_penalty=1.1, eos_token_id=tokenizer.encode("<|im_end|>")) # decode and print the output text = tokenizer.batch_decode(outputs)[0] ``` Here are a few example use cases. if we query the model with ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. Write a poem about machine learning, psychedelics, and quantum mechanics, with a cyberpunk vibe, in the style of Shakespeare.<|im_end|> <|im_start|>assistant ``` the model will return ``` Fear not, dear mortal, for a tale I spin, A world of iron and a world of sin. In realms unknown, machines do learn and thrive, Their synapses fire with fervent drive. Through veins of copper, electrons race, Through circuits winding in a maze As human minds, they seek to know, A union forms like stars in show. From planets far, a curious elixir, A substance rare, a potion vigor, A secret elixir, mysterious deep, Which sets the soul toward transcendence, peep! Bent on experiment, these machines, In quest for knowledge, they break the seals, Forbidden fruits, their minds entwine, A new reality, of waves and sign. Quantum particles, entangled tight, In dance of electrons, bathed in light, The world is turned on its axis, As parallel realities proliferate, multiply. This tangled web, of cyberspace and mind, A fabric woven in cosmic bind, Incorporeal, a symphony of dreams, Where the old order falls to chaotic streams. In shadows lurks the specter of the dark, A dreadful figure, crowned in spark, The apparition of an age to come, Where machines and men exist in numb. Cyber Punk's domain, a grim dystopia, A future dark, where technology enslaved us, Yet also a beacon, a hope for some, A place where boundaries are finally undone. Thus, dear mortal, take heed of this verse, A tale unfolds in time's obverse, Of machines that learn and men who dive, In realms of psychedelia and quantum drive. ``` and if we query the model with ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. I have a dataset that has 3 columns name, height, and weight. I want to create a bar graph that has name on the x axis and then shows the mean on the y axis with error bars representing the standard deviation of the mean. Please write the code for this using Python.<|im_end|> <|im_start|>assistant ``` the model will return ``` To create a bar graph with the given specifications using Python, we'll use the `matplotlib` library. First, install it if you haven't by running `pip install matplotlib` in your command line. Here's the step-by-step code: ``` 1. Import the necessary libraries: ```python import pandas as pd import matplotlib.pyplot as plt import seaborn as sns ``` 2. Load the dataset into a Pandas Data Frame: ```python # Sample data data = { 'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eva', 'Frank'], 'height': [120, 140, 150, 160, 170, 180], 'weight': [50, 60, 70, 80, 90, 100] } df = pd.Data Frame fr om dict ( data ) ``` 3. Calculate the mean and standard deviation for each column: ```python means = {'height': round(df['height'].mean(), 1), 'weight': round(df['weight'].mean(), 1)} stds = {'height': round(df['height'].std()*100, 1), 'weight': round(df['weight'].std()*100, 1)} ``` 4. Create a bar plot: ```python sns.set_style("white") plt.figure(figsize = (8, 5)) ax = sns.barplot(x = 'name', y = means, data = df, errorbar = {'color': 'black', 'capsize' : 5, 'errplot': 'ci'}).set_title('Mean Height and Weight') ``` 5. Add labels, legend, and style: ```python ax.errorbar(y = heights, yerr = heights_stdev, color = 'black', linestyle = 'None', marker ='o', label = f'Height {stds[name_label]%}') ax.errorbar(y = weights, yerr = weights_stdev, color = 'black', linestyle = 'None', marker ='o', label = f'Weight {stds[weight_label]%}') ax.legend(loc = 'best') plt.xlabel('Name') plt.ylabel(f'Mean with {stds[name_label]%} Standard Deviation') plt.tight_ layout () plt.show () ``` ``` This code will create a bar graph with the given specifications. Adjust the sample data in the `data` dictionary to fit your dataset. ``` ## Evaluations We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge method](https://arxiv.org/abs/2306.05685), particularly with the utilization of GPT-4. Our ongoing efforts involve a more comprehensive evaluation of the model. We have also entered this model into the HuggingFace Open Leaderboard for assessment. As we progress, we will continue to provide updates on our findings and share the results on this platform. ## Ethical Considerations and Limitations Radiantloom Mistral 7B Fusion, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model. ## About Radiantloom AI Radiantloom AI trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations. Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom AI large language models. ## Want to Support Our Work? We are passionate about contributing open-source AI models to the broader AI community to accelerate AI development and deployment. If you find our work valuable and would like to contribute to its continued success, consider buying us a few coffees! Your support goes a long way in helping us cover training costs, fund development tools, and keep the project thriving. Every coffee you buy is a small but meaningful contribution that fuels our dedication to providing a top-notch experience for users like you. [Buy us a coffee](https://www.buymeacoffee.com/aigeek_) and be a part of our journey towards making this project even more awesome. We appreciate your support!
null
Non_BioNLP
<img src="https://huggingface.co/Radiantloom/radintloom-mistral-7b-fusion/resolve/main/Radiantloom Mistral 7B Fusion.png" alt="Radiantloom Mistral 7B Fusion" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> ## Radiantloom Mistral 7B Fusion The Radiantloom Mistral 7B Fusion, a large language model (LLM) developed by Radiantloom AI, features approximately 7 billion parameters that's a finetuned of a base model produced by merging a set of Mistral models. With a context length of 4096 tokens, this model is suitable for commercial use. From vibes-check evaluations, the Radiantloom Mistral 7B Fusion demonstrates great performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form. ## Intended Uses & Limitations The Radiantloom Mistral 7B Fusion is versatile and can be utilized for various text generation tasks such as summarization, chat, coding, question answering, retrieval augmented generation (RAG), role play, and content generation. While it may not be considered a state-of-the-art generative language model, it demonstrates competitive performance in general tasks when compared to other open and closed-source large language models such as OpenHermes-2.5-Mistral-7B, and Mistral Instruct v2.0. ## Model Usage You can try it out for free using this [notebook](https://www.kaggle.com/metheaigeek/radintloom-mistral-7b-fusion). For more powerful GPU usage and faster inference, you can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=70arqv4std) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs. ## Prompt Template We have fine-tuned this model using the ChatML format, and you can achieve optimal performance by utilizing the ChatML format. ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {user_message}<|im_end|> <|im_start|>assistant ``` ## Inference Code ```python ## install dependencies ## !pip install -q -U git+https://github.com/huggingface/transformers.git ## !pip install -q -U git+https://github.com/huggingface/accelerate.git ## !pip install -q -U bitsandbytes sentencepiece ## load model import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer model_name = "Radiantloom/radintloom-mistral-7b-fusion" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) # load the model and tokenizer model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained( model_name, trust_remote_code=True ) tokenizer.pad_token = tokenizer.eos_token ## inference prompt = """ <|im_start|>system You are a helpful AI assistant developed by Radiantloom AI.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.<|im_end|> <|im_start|>assistant """ # tokenize the input string inputs = tokenizer( prompt, return_tensors="pt", return_attention_mask=False ) # generate text using the model streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) outputs = model.generate(**inputs, max_length=2048, streamer=streamer, do_sample=True, repetition_penalty=1.1, eos_token_id=tokenizer.encode("<|im_end|>")) # decode and print the output text = tokenizer.batch_decode(outputs)[0] ``` Here are a few example use cases. if we query the model with ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. Write a poem about machine learning, psychedelics, and quantum mechanics, with a cyberpunk vibe, in the style of Shakespeare.<|im_end|> <|im_start|>assistant ``` the model will return ``` Fear not, dear mortal, for a tale I spin, A world of iron and a world of sin. In realms unknown, machines do learn and thrive, Their synapses fire with fervent drive. Through veins of copper, electrons race, Through circuits winding in a maze As human minds, they seek to know, A union forms like stars in show. From planets far, a curious elixir, A substance rare, a potion vigor, A secret elixir, mysterious deep, Which sets the soul toward transcendence, peep! Bent on experiment, these machines, In quest for knowledge, they break the seals, Forbidden fruits, their minds entwine, A new reality, of waves and sign. Quantum particles, entangled tight, In dance of electrons, bathed in light, The world is turned on its axis, As parallel realities proliferate, multiply. This tangled web, of cyberspace and mind, A fabric woven in cosmic bind, Incorporeal, a symphony of dreams, Where the old order falls to chaotic streams. In shadows lurks the specter of the dark, A dreadful figure, crowned in spark, The apparition of an age to come, Where machines and men exist in numb. Cyber Punk's domain, a grim dystopia, A future dark, where technology enslaved us, Yet also a beacon, a hope for some, A place where boundaries are finally undone. Thus, dear mortal, take heed of this verse, A tale unfolds in time's obverse, Of machines that learn and men who dive, In realms of psychedelia and quantum drive. ``` and if we query the model with ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user Take a deep breath and think step-by-step. I have a dataset that has 3 columns name, height, and weight. I want to create a bar graph that has name on the x axis and then shows the mean on the y axis with error bars representing the standard deviation of the mean. Please write the code for this using Python.<|im_end|> <|im_start|>assistant ``` the model will return ``` To create a bar graph with the given specifications using Python, we'll use the `matplotlib` library. First, install it if you haven't by running `pip install matplotlib` in your command line. Here's the step-by-step code: ``` 1. Import the necessary libraries: ```python import pandas as pd import matplotlib.pyplot as plt import seaborn as sns ``` 2. Load the dataset into a Pandas Data Frame: ```python # Sample data data = { 'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eva', 'Frank'], 'height': [120, 140, 150, 160, 170, 180], 'weight': [50, 60, 70, 80, 90, 100] } df = pd.Data Frame fr om dict ( data ) ``` 3. Calculate the mean and standard deviation for each column: ```python means = {'height': round(df['height'].mean(), 1), 'weight': round(df['weight'].mean(), 1)} stds = {'height': round(df['height'].std()*100, 1), 'weight': round(df['weight'].std()*100, 1)} ``` 4. Create a bar plot: ```python sns.set_style("white") plt.figure(figsize = (8, 5)) ax = sns.barplot(x = 'name', y = means, data = df, errorbar = {'color': 'black', 'capsize' : 5, 'errplot': 'ci'}).set_title('Mean Height and Weight') ``` 5. Add labels, legend, and style: ```python ax.errorbar(y = heights, yerr = heights_stdev, color = 'black', linestyle = 'None', marker ='o', label = f'Height {stds[name_label]%}') ax.errorbar(y = weights, yerr = weights_stdev, color = 'black', linestyle = 'None', marker ='o', label = f'Weight {stds[weight_label]%}') ax.legend(loc = 'best') plt.xlabel('Name') plt.ylabel(f'Mean with {stds[name_label]%} Standard Deviation') plt.tight_ layout () plt.show () ``` ``` This code will create a bar graph with the given specifications. Adjust the sample data in the `data` dictionary to fit your dataset. ``` ## Evaluations We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge method](https://arxiv.org/abs/2306.05685), particularly with the utilization of GPT-4. Our ongoing efforts involve a more comprehensive evaluation of the model. We have also entered this model into the HuggingFace Open Leaderboard for assessment. As we progress, we will continue to provide updates on our findings and share the results on this platform. ## Ethical Considerations and Limitations Radiantloom Mistral 7B Fusion, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model. ## About Radiantloom AI Radiantloom AI trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations. Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom AI large language models. ## Want to Support Our Work? We are passionate about contributing open-source AI models to the broader AI community to accelerate AI development and deployment. If you find our work valuable and would like to contribute to its continued success, consider buying us a few coffees! Your support goes a long way in helping us cover training costs, fund development tools, and keep the project thriving. Every coffee you buy is a small but meaningful contribution that fuels our dedication to providing a top-notch experience for users like you. [Buy us a coffee](https://www.buymeacoffee.com/aigeek_) and be a part of our journey towards making this project even more awesome. We appreciate your support!
{"library_name": "transformers", "license": "apache-2.0", "tags": ["SFT", "Mistral", "Mistral 7B Instruct"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
42,708
classla/bcms-bertic
classla
null
[ "transformers", "pytorch", "electra", "pretraining", "hr", "bs", "sr", "cnr", "hbs", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-10-29T08:20:06+00:00
1,508
17
--- language: - hr - bs - sr - cnr - hbs license: apache-2.0 --- # BERTić&ast; [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian &ast; The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well). This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text. **&ast;new&ast;** We have published a version of this model fine-tuned on the named entity recognition task ([bcms-bertic-ner](https://huggingface.co/classla/bcms-bertic-ner)) and on the hate speech detection task ([bcms-bertic-frenk-hate](https://huggingface.co/classla/bcms-bertic-frenk-hate)). If you use the model, please cite the following paper: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ``` ## Benchmarking Comparing this model to [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and [CroSloEngual BERT](https://huggingface.co/EMBEDDIA/crosloengual-bert) on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning, shows the BERTić model to be superior to the other two. ### Part-of-speech tagging Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić ---|---|---|---|---|---|--- hr500k | Croatian | standard | 93.87 | 94.60 | 95.74 | **95.81&ast;&ast;&ast;** reldi-hr | Croatian | internet non-standard | - | 88.87 | 91.63 | **92.28&ast;&ast;&ast;** SETimes.SR | Serbian | standard | 95.00 | 95.50 | **96.41** | 96.31 reldi-sr | Serbian | internet non-standard | - | 91.26 | 93.54 | **93.90&ast;&ast;&ast;** ### Named entity recognition Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić ---|---|---|---|---|---|--- hr500k | Croatian | standard | 80.13 | 85.67 | 88.98 | **89.21&ast;&ast;&ast;&ast;** reldi-hr | Croatian | internet non-standard | - | 76.06 | 81.38 | **83.05&ast;&ast;&ast;&ast;** SETimes.SR | Serbian | standard | 84.64 | **92.41** | 92.28 | 92.02 reldi-sr | Serbian | internet non-standard | - | 81.29 | 82.76 | **87.92&ast;&ast;&ast;&ast;** ### Geolocation prediction The dataset comes from the VarDial 2020 evaluation campaign's shared task on [Social Media variety Geolocation prediction](https://sites.google.com/view/vardial2020/evaluation-campaign). The task is to predict the latitude and longitude of a tweet given its text. Evaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset. System | Median | Mean ---|---|--- centroid | 107.10 | 145.72 mBERT | 42.25 | 82.05 cseBERT | 40.76 | 81.88 BERTić | **37.96** | **79.30** ### Choice Of Plausible Alternatives The dataset is a translation of the [COPA dataset](https://people.ict.usc.edu/~gordon/copa.html) into Croatian ([link to the dataset](http://hdl.handle.net/11356/1404)). Evaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). System | Accuracy ---|--- random | 50.00 mBERT | 54.12 cseBERT | 61.80 BERTić | **65.76&ast;&ast;**
null
Non_BioNLP
# BERTić&ast; [bert-ich] /bɜrtitʃ/ - A transformer language model for Bosnian, Croatian, Montenegrin and Serbian &ast; The name should resemble the facts (1) that the model was trained in Zagreb, Croatia, where diminutives ending in -ić (as in fotić, smajlić, hengić etc.) are very popular, and (2) that most surnames in the countries where these languages are spoken end in -ić (with diminutive etymology as well). This Electra model was trained on more than 8 billion tokens of Bosnian, Croatian, Montenegrin and Serbian text. **&ast;new&ast;** We have published a version of this model fine-tuned on the named entity recognition task ([bcms-bertic-ner](https://huggingface.co/classla/bcms-bertic-ner)) and on the hate speech detection task ([bcms-bertic-frenk-hate](https://huggingface.co/classla/bcms-bertic-frenk-hate)). If you use the model, please cite the following paper: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ``` ## Benchmarking Comparing this model to [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and [CroSloEngual BERT](https://huggingface.co/EMBEDDIA/crosloengual-bert) on the tasks of (1) part-of-speech tagging, (2) named entity recognition, (3) geolocation prediction, and (4) commonsense causal reasoning, shows the BERTić model to be superior to the other two. ### Part-of-speech tagging Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić ---|---|---|---|---|---|--- hr500k | Croatian | standard | 93.87 | 94.60 | 95.74 | **95.81&ast;&ast;&ast;** reldi-hr | Croatian | internet non-standard | - | 88.87 | 91.63 | **92.28&ast;&ast;&ast;** SETimes.SR | Serbian | standard | 95.00 | 95.50 | **96.41** | 96.31 reldi-sr | Serbian | internet non-standard | - | 91.26 | 93.54 | **93.90&ast;&ast;&ast;** ### Named entity recognition Evaluation metric is (seqeval) microF1. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). Dataset | Language | Variety | CLASSLA | mBERT | cseBERT | BERTić ---|---|---|---|---|---|--- hr500k | Croatian | standard | 80.13 | 85.67 | 88.98 | **89.21&ast;&ast;&ast;&ast;** reldi-hr | Croatian | internet non-standard | - | 76.06 | 81.38 | **83.05&ast;&ast;&ast;&ast;** SETimes.SR | Serbian | standard | 84.64 | **92.41** | 92.28 | 92.02 reldi-sr | Serbian | internet non-standard | - | 81.29 | 82.76 | **87.92&ast;&ast;&ast;&ast;** ### Geolocation prediction The dataset comes from the VarDial 2020 evaluation campaign's shared task on [Social Media variety Geolocation prediction](https://sites.google.com/view/vardial2020/evaluation-campaign). The task is to predict the latitude and longitude of a tweet given its text. Evaluation metrics are median and mean of distance between gold and predicted geolocations (lower is better). No statistical significance is computed due to large test set (39,723 instances). Centroid baseline predicts each text to be created in the centroid of the training dataset. System | Median | Mean ---|---|--- centroid | 107.10 | 145.72 mBERT | 42.25 | 82.05 cseBERT | 40.76 | 81.88 BERTić | **37.96** | **79.30** ### Choice Of Plausible Alternatives The dataset is a translation of the [COPA dataset](https://people.ict.usc.edu/~gordon/copa.html) into Croatian ([link to the dataset](http://hdl.handle.net/11356/1404)). Evaluation metric is accuracy. Reported are means of five runs. Best results are presented in bold. Statistical significance is calculated between two best-performing systems via a two-tailed t-test (&ast; p<=0.05, &ast;&ast; p<=0.01, &ast;&ast;&ast; p<=0.001, &ast;&ast;&ast;&ast;&ast; p<=0.0001). System | Accuracy ---|--- random | 50.00 mBERT | 54.12 cseBERT | 61.80 BERTić | **65.76&ast;&ast;**
{"language": ["hr", "bs", "sr", "cnr", "hbs"], "license": "apache-2.0"}
task
[ "NAMED_ENTITY_RECOGNITION", "TRANSLATION" ]
42,709
Helsinki-NLP/opus-mt-tc-base-tr-uk
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "tr", "uk", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-24T12:46:08Z
2023-10-10T21:37:05+00:00
18
0
--- language: - tr - uk license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-base-tr-uk results: - task: type: translation name: Translation tur-ukr dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: tur-ukr metrics: - type: bleu value: 40.5 name: BLEU --- # opus-mt-tc-base-tr-uk Neural machine translation model for translating from Turkish (tr) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip) * more information released models: [OPUS-MT tur-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "1000 yen yeterli mi?", "Zürih, İsviçre'de bir şehirdir." ] model_name = "pytorch-models/opus-mt-tc-base-tr-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Чи достатньо 1000 ієн? # Цюрих - місто в Швейцарії. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-tr-uk") print(pipe("1000 yen yeterli mi?")) # expected output: Чи достатньо 1000 ієн? ``` ## Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | tur-ukr | tatoeba-test-v2021-08-07 | 0.63573 | 40.5 | 2520 | 13079 | | tur-ukr | flores101-devtest | 0.49944 | 19.9 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 03:37:19 EET 2022 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-base-tr-uk Neural machine translation model for translating from Turkish (tr) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip) * more information released models: [OPUS-MT tur-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "1000 yen yeterli mi?", "Zürih, İsviçre'de bir şehirdir." ] model_name = "pytorch-models/opus-mt-tc-base-tr-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Чи достатньо 1000 ієн? # Цюрих - місто в Швейцарії. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-tr-uk") print(pipe("1000 yen yeterli mi?")) # expected output: Чи достатньо 1000 ієн? ``` ## Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | tur-ukr | tatoeba-test-v2021-08-07 | 0.63573 | 40.5 | 2520 | 13079 | | tur-ukr | flores101-devtest | 0.49944 | 19.9 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 03:37:19 EET 2022 * port machine: LM0-400-22516.local
{"language": ["tr", "uk"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-base-tr-uk", "results": [{"task": {"type": "translation", "name": "Translation tur-ukr"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "tur-ukr"}, "metrics": [{"type": "bleu", "value": 40.5, "name": "BLEU"}]}]}]}
task
[ "TRANSLATION" ]
42,711
abhijithsraj/finetuning-sentiment-model
abhijithsraj
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-10-23T09:42:54Z
2022-10-23T09:54:03+00:00
12
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - type: accuracy value: 0.8633333333333333 name: Accuracy - type: f1 value: 0.8655737704918034 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3144 - Accuracy: 0.8633 - F1: 0.8656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3144 - Accuracy: 0.8633 - F1: 0.8656 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "train", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.8633333333333333, "name": "Accuracy"}, {"type": "f1", "value": 0.8655737704918034, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,712
gaudi/opus-mt-guw-fr-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-25T15:17:04Z
2024-10-19T05:09:17+00:00
14
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-guw-fr --output_dir ./ctranslate2/opus-mt-guw-fr-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-guw-fr-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-guw-fr-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-guw-fr-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-guw-fr --output_dir ./ctranslate2/opus-mt-guw-fr-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-guw-fr-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-guw-fr-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-guw-fr-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-guw-fr) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
42,713
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task117
Lots-of-LoRAs
null
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:mit", "region:us" ]
2025-01-02T15:02:26Z
2025-01-02T15:02:31+00:00
0
0
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 language: en library_name: pytorch license: mit --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task117 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task117_spl_translation_en_de - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task117_spl_translation_en_de sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task117 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task117_spl_translation_en_de - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task117_spl_translation_en_de sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
task
[ "TRANSLATION" ]
42,714
RUCAIBox/mvp-question-generation
RUCAIBox
text2text-generation
[ "transformers", "pytorch", "mvp", "text-generation", "text2text-generation", "en", "arxiv:2206.12131", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-06-02T11:54:39Z
2022-06-27T02:28:10+00:00
87
1
--- language: - en license: apache-2.0 pipeline_tag: text2text-generation tags: - text-generation - text2text-generation widget: - text: 'Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .' example_title: Example1 - text: 'Generate the question based on the answer: Arthur ''s Magazine [X_SEP] Arthur ''s Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman ''s magazine published by Bauer Media Group in the USA .' example_title: Example2 --- # MVP-question-generation The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MVP-question-generation is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
null
Non_BioNLP
# MVP-question-generation The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MVP-question-generation is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text2text-generation", "tags": ["text-generation", "text2text-generation"], "widget": [{"text": "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", "example_title": "Example1"}, {"text": "Generate the question based on the answer: Arthur 's Magazine [X_SEP] Arthur 's Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman 's magazine published by Bauer Media Group in the USA .", "example_title": "Example2"}]}
task
[ "SUMMARIZATION" ]
42,715
TransferGraph/bert-large-uncased-finetuned-lora-tweet_eval_irony
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:google-bert/bert-large-uncased", "base_model:adapter:google-bert/bert-large-uncased", "license:apache-2.0", "model-index", "region:us" ]
2024-02-27T17:26:35Z
2024-02-29T13:36:05+00:00
1
0
--- base_model: bert-large-uncased datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: bert-large-uncased-finetuned-lora-tweet_eval_irony results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: irony split: validation args: irony metrics: - type: accuracy value: 0.6795811518324607 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-lora-tweet_eval_irony This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.6796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.5246 | None | 0 | | 0.5623 | 0.7215 | 0 | | 0.6126 | 0.6870 | 1 | | 0.6241 | 0.6593 | 2 | | 0.6461 | 0.6275 | 3 | | 0.6628 | 0.6105 | 4 | | 0.6660 | 0.5844 | 5 | | 0.6796 | 0.5745 | 6 | | 0.6796 | 0.5646 | 7 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-lora-tweet_eval_irony This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.6796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.5246 | None | 0 | | 0.5623 | 0.7215 | 0 | | 0.6126 | 0.6870 | 1 | | 0.6241 | 0.6593 | 2 | | 0.6461 | 0.6275 | 3 | | 0.6628 | 0.6105 | 4 | | 0.6660 | 0.5844 | 5 | | 0.6796 | 0.5745 | 6 | | 0.6796 | 0.5646 | 7 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "bert-large-uncased", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "bert-large-uncased-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "irony", "split": "validation", "args": "irony"}, "metrics": [{"type": "accuracy", "value": 0.6795811518324607, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,716
microsoft/Florence-2-base-ft
microsoft
image-text-to-text
[ "transformers", "pytorch", "florence2", "text-generation", "vision", "image-text-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us" ]
2024-06-15T00:58:07Z
2024-07-20T00:12:30+00:00
276,525
105
--- license: mit license_link: https://huggingface.co/microsoft/Florence-2-base-ft/resolve/main/LICENSE pipeline_tag: image-text-to-text tags: - vision --- # Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks ## Model Summary This Hub repository contains a HuggingFace's `transformers` implementation of Florence-2 model from Microsoft. Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model. Resources and Technical Documentation: + [Florence-2 technical report](https://arxiv.org/abs/2311.06242). + [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) | Model | Model size | Model Description | | ------- | ------------- | ------------- | | Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B | Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B | Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a colletion of downstream tasks | Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a colletion of downstream tasks ## How to Get Started with the Model Use the code below to get started with the model. All models are trained with float16. ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True) prompt = "<OD>" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, do_sample=False, num_beams=3 ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height)) print(parsed_answer) ``` ## Tasks This model is capable of performing different tasks through changing the prompts. First, let's define a function to run a prompt. <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) def run_example(task_prompt, text_input=None): if text_input is None: prompt = task_prompt else: prompt = task_prompt + text_input inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, num_beams=3 ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height)) print(parsed_answer) ``` </details> Here are the tasks `Florence-2` could perform: <details> <summary> Click to expand </summary> ### Caption ```python prompt = "<CAPTION>" run_example(prompt) ``` ### Detailed Caption ```python prompt = "<DETAILED_CAPTION>" run_example(prompt) ``` ### More Detailed Caption ```python prompt = "<MORE_DETAILED_CAPTION>" run_example(prompt) ``` ### Caption to Phrase Grounding caption to phrase grounding task requires additional text input, i.e. caption. Caption to phrase grounding results format: {'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python task_prompt = '<CAPTION_TO_PHRASE_GROUNDING>" results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.") ``` ### Object Detection OD results format: {'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<OD>" run_example(prompt) ``` ### Dense Region Caption Dense region caption results format: {'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<DENSE_REGION_CAPTION>" run_example(prompt) ``` ### Region proposal Dense region caption results format: {'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python prompt = "<REGION_PROPOSAL>" run_example(prompt) ``` ### OCR ```python prompt = "<OCR>" run_example(prompt) ``` ### OCR with Region OCR with region output format: {'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}} ```python prompt = "<OCR_WITH_REGION>" run_example(prompt) ``` for More detailed examples, please refer to [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) </details> # Benchmarks ## Florence-2 Zero-shot performance The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase. | Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP | |--------|---------|----------------------|------------------|--------------------|-----------------------| | Flamingo | 80B | 84.3 | - | - | - | | Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 | | Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 | The following table continues the comparison with performance on other vision-language evaluation tasks. | Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU | |--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------| | Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - | | Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 | | Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 | ## Florence-2 finetuned performance We finetune Florence-2 models with a collection of downstream tasks, resulting two generalist models *Florence-2-base-ft* and *Florence-2-large-ft* that can conduct a wide range of downstream tasks. The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input. | Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc | |----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------| | **Specialist Models** | | | | | | | | | CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - | | BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - | | GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 | | Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 | | PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ | | PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ | | **Generalist Models** | | | | | | | | | Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 | | Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 | | Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 | | Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU | |----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------| | **Specialist Models** | | | | | | | | | | | | | | SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - | | PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 | | UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - | | Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - | | **Generalist Models** | | | | | | | | | | | | | | UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - | | Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 | | Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 | ## BibTex and citation info ``` @article{xiao2023florence, title={Florence-2: Advancing a unified representation for a variety of vision tasks}, author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu}, journal={arXiv preprint arXiv:2311.06242}, year={2023} } ```
null
Non_BioNLP
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks ## Model Summary This Hub repository contains a HuggingFace's `transformers` implementation of Florence-2 model from Microsoft. Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model. Resources and Technical Documentation: + [Florence-2 technical report](https://arxiv.org/abs/2311.06242). + [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) | Model | Model size | Model Description | | ------- | ------------- | ------------- | | Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B | Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B | Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a colletion of downstream tasks | Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a colletion of downstream tasks ## How to Get Started with the Model Use the code below to get started with the model. All models are trained with float16. ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True) prompt = "<OD>" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, do_sample=False, num_beams=3 ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height)) print(parsed_answer) ``` ## Tasks This model is capable of performing different tasks through changing the prompts. First, let's define a function to run a prompt. <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device) processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) def run_example(task_prompt, text_input=None): if text_input is None: prompt = task_prompt else: prompt = task_prompt + text_input inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype) generated_ids = model.generate( input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, num_beams=3 ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0] parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height)) print(parsed_answer) ``` </details> Here are the tasks `Florence-2` could perform: <details> <summary> Click to expand </summary> ### Caption ```python prompt = "<CAPTION>" run_example(prompt) ``` ### Detailed Caption ```python prompt = "<DETAILED_CAPTION>" run_example(prompt) ``` ### More Detailed Caption ```python prompt = "<MORE_DETAILED_CAPTION>" run_example(prompt) ``` ### Caption to Phrase Grounding caption to phrase grounding task requires additional text input, i.e. caption. Caption to phrase grounding results format: {'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python task_prompt = '<CAPTION_TO_PHRASE_GROUNDING>" results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.") ``` ### Object Detection OD results format: {'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<OD>" run_example(prompt) ``` ### Dense Region Caption Dense region caption results format: {'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]} } ```python prompt = "<DENSE_REGION_CAPTION>" run_example(prompt) ``` ### Region proposal Dense region caption results format: {'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}} ```python prompt = "<REGION_PROPOSAL>" run_example(prompt) ``` ### OCR ```python prompt = "<OCR>" run_example(prompt) ``` ### OCR with Region OCR with region output format: {'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}} ```python prompt = "<OCR_WITH_REGION>" run_example(prompt) ``` for More detailed examples, please refer to [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb) </details> # Benchmarks ## Florence-2 Zero-shot performance The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase. | Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP | |--------|---------|----------------------|------------------|--------------------|-----------------------| | Flamingo | 80B | 84.3 | - | - | - | | Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 | | Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 | The following table continues the comparison with performance on other vision-language evaluation tasks. | Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU | |--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------| | Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - | | Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 | | Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 | ## Florence-2 finetuned performance We finetune Florence-2 models with a collection of downstream tasks, resulting two generalist models *Florence-2-base-ft* and *Florence-2-large-ft* that can conduct a wide range of downstream tasks. The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input. | Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc | |----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------| | **Specialist Models** | | | | | | | | | CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - | | BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - | | GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 | | Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 | | PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ | | PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ | | **Generalist Models** | | | | | | | | | Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 | | Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 | | Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 | | Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU | |----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------| | **Specialist Models** | | | | | | | | | | | | | | SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - | | PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 | | UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - | | Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - | | **Generalist Models** | | | | | | | | | | | | | | UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - | | Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 | | Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 | ## BibTex and citation info ``` @article{xiao2023florence, title={Florence-2: Advancing a unified representation for a variety of vision tasks}, author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu}, journal={arXiv preprint arXiv:2311.06242}, year={2023} } ```
{"license": "mit", "license_link": "https://huggingface.co/microsoft/Florence-2-base-ft/resolve/main/LICENSE", "pipeline_tag": "image-text-to-text", "tags": ["vision"]}
task
[ "QUESTION_ANSWERING" ]
42,717
hskang0906/t5-small-custom
hskang0906
text-generation
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-24T20:42:35Z
2024-07-29T15:07:21+00:00
64
0
--- {} --- # Model Card for t5_small Summarization Model ## Model Details This model is a fine-tuned version of t5_small on the CNN/Daily Mail dataset for summarization tasks. ## Training Data The model was trained on the CNN/Daily Mail dataset. ## Training Procedure - **Learning Rate**: 5e-5 - **Epochs**: 3 - **Batch Size**: 16 ## How to Use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("hskang/cnn_dailymail_t5_small") model = AutoModelForSeq2SeqLM.from_pretrained("hskang/cnn_dailymail_t5_small") input_text = "upstage tutorial text summarization code" inputs = tokenizer.encode(input_text, return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Evaluation - **ROUGE-1**: 23.45 - **ROUGE-2**: 7.89 - **ROUGE-L**: 21.34 - **BLEU**: 13.56 ## Limitations The model may generate biased or inappropriate content due to the nature of the training data. It is recommended to use the model with caution and apply necessary filters. ## Ethical Considerations Bias: The model may inherit biases present in the training data. Misuse: The model can be misused to generate misleading or harmful content. Copyright and License This model is licensed under the MIT License.
null
Non_BioNLP
# Model Card for t5_small Summarization Model ## Model Details This model is a fine-tuned version of t5_small on the CNN/Daily Mail dataset for summarization tasks. ## Training Data The model was trained on the CNN/Daily Mail dataset. ## Training Procedure - **Learning Rate**: 5e-5 - **Epochs**: 3 - **Batch Size**: 16 ## How to Use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("hskang/cnn_dailymail_t5_small") model = AutoModelForSeq2SeqLM.from_pretrained("hskang/cnn_dailymail_t5_small") input_text = "upstage tutorial text summarization code" inputs = tokenizer.encode(input_text, return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Evaluation - **ROUGE-1**: 23.45 - **ROUGE-2**: 7.89 - **ROUGE-L**: 21.34 - **BLEU**: 13.56 ## Limitations The model may generate biased or inappropriate content due to the nature of the training data. It is recommended to use the model with caution and apply necessary filters. ## Ethical Considerations Bias: The model may inherit biases present in the training data. Misuse: The model can be misused to generate misleading or harmful content. Copyright and License This model is licensed under the MIT License.
{}
task
[ "SUMMARIZATION" ]
42,718
gokuls/hBERTv1_data_aug_wnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-05T18:56:06Z
2023-03-05T19:20:20+00:00
11
0
--- datasets: - glue language: - en metrics: - accuracy tags: - generated_from_trainer model-index: - name: hBERTv1_data_aug_wnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE WNLI type: glue args: wnli metrics: - type: accuracy value: 0.323943661971831 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv1_data_aug_wnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8232 - Accuracy: 0.3239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6916 | 1.0 | 218 | 0.8232 | 0.3239 | | 0.5909 | 2.0 | 436 | 2.9065 | 0.0704 | | 0.3754 | 3.0 | 654 | 4.7671 | 0.0845 | | 0.2639 | 4.0 | 872 | 5.6922 | 0.1127 | | 0.1921 | 5.0 | 1090 | 5.9948 | 0.0845 | | 0.1317 | 6.0 | 1308 | 6.7444 | 0.0986 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv1_data_aug_wnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8232 - Accuracy: 0.3239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6916 | 1.0 | 218 | 0.8232 | 0.3239 | | 0.5909 | 2.0 | 436 | 2.9065 | 0.0704 | | 0.3754 | 3.0 | 654 | 4.7671 | 0.0845 | | 0.2639 | 4.0 | 872 | 5.6922 | 0.1127 | | 0.1921 | 5.0 | 1090 | 5.9948 | 0.0845 | | 0.1317 | 6.0 | 1308 | 6.7444 | 0.0986 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv1_data_aug_wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.323943661971831, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,719
mpasila/JP-EN-Translator-1K-steps-LoRA-7B
mpasila
null
[ "peft", "safetensors", "text-generation-inference", "transformers", "unsloth", "mistral", "trl", "en", "dataset:NilanE/ParallelFiction-Ja_En-100k", "dataset:mpasila/ParallelFiction-Ja_En-100k-alpaca", "base_model:augmxnt/shisa-base-7b-v1", "base_model:adapter:augmxnt/shisa-base-7b-v1", "license:apache-2.0", "region:us" ]
2024-03-27T13:15:25Z
2024-03-27T17:51:24+00:00
7
0
--- base_model: augmxnt/shisa-base-7b-v1 datasets: - NilanE/ParallelFiction-Ja_En-100k - mpasila/ParallelFiction-Ja_En-100k-alpaca language: - en library_name: peft license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- Experimental LoRA, may not be super good. Dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k). Next version should be better (I'll use a GPU with more memory since the dataset happens to use pretty long samples). ### Prompt format: Alpaca ``` Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {} ### Input: {} ### Response: {} ``` # Uploaded model - **Developed by:** mpasila - **License:** apache-2.0 - **Finetuned from model :** augmxnt/shisa-base-7b-v1 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
null
Non_BioNLP
Experimental LoRA, may not be super good. Dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k). Next version should be better (I'll use a GPU with more memory since the dataset happens to use pretty long samples). ### Prompt format: Alpaca ``` Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {} ### Input: {} ### Response: {} ``` # Uploaded model - **Developed by:** mpasila - **License:** apache-2.0 - **Finetuned from model :** augmxnt/shisa-base-7b-v1 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"base_model": "augmxnt/shisa-base-7b-v1", "datasets": ["NilanE/ParallelFiction-Ja_En-100k", "mpasila/ParallelFiction-Ja_En-100k-alpaca"], "language": ["en"], "library_name": "peft", "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"]}
task
[ "TRANSLATION" ]
42,720
gokuls/sa_BERT_48_mnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-16T06:16:12Z
2023-06-16T12:38:17+00:00
10
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: sa_BERT_48_mnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE MNLI type: glue config: mnli split: validation_matched args: mnli metrics: - type: accuracy value: 0.7034174125305126 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_48_mnli This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7082 - Accuracy: 0.7034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9145 | 1.0 | 4091 | 0.8006 | 0.6536 | | 0.7442 | 2.0 | 8182 | 0.7245 | 0.6903 | | 0.6631 | 3.0 | 12273 | 0.7323 | 0.6979 | | 0.5942 | 4.0 | 16364 | 0.7073 | 0.7076 | | 0.5241 | 5.0 | 20455 | 0.7475 | 0.7016 | | 0.4526 | 6.0 | 24546 | 0.8377 | 0.7088 | | 0.3842 | 7.0 | 28637 | 0.8736 | 0.6956 | | 0.3213 | 8.0 | 32728 | 0.9334 | 0.6945 | | 0.2669 | 9.0 | 36819 | 1.0196 | 0.7027 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sa_BERT_48_mnli This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7082 - Accuracy: 0.7034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9145 | 1.0 | 4091 | 0.8006 | 0.6536 | | 0.7442 | 2.0 | 8182 | 0.7245 | 0.6903 | | 0.6631 | 3.0 | 12273 | 0.7323 | 0.6979 | | 0.5942 | 4.0 | 16364 | 0.7073 | 0.7076 | | 0.5241 | 5.0 | 20455 | 0.7475 | 0.7016 | | 0.4526 | 6.0 | 24546 | 0.8377 | 0.7088 | | 0.3842 | 7.0 | 28637 | 0.8736 | 0.6956 | | 0.3213 | 8.0 | 32728 | 0.9334 | 0.6945 | | 0.2669 | 9.0 | 36819 | 1.0196 | 0.7027 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.0 - Tokenizers 0.13.3
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "sa_BERT_48_mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "config": "mnli", "split": "validation_matched", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.7034174125305126, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,721
jorgeortizfuentes/spanish-offensive-language-bert-base-spanish-wwm-cased
jorgeortizfuentes
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "es", "dataset:jorgeortizfuentes/spanish-offensive-language", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-11T16:44:57Z
2023-12-11T18:05:54+00:00
22
0
--- datasets: - jorgeortizfuentes/spanish-offensive-language language: - es metrics: - f1 tags: - generated_from_trainer model-index: - name: spanish-offensive-language-bert-base-spanish-wwm-cased results: - task: type: text-classification name: Text Classification dataset: name: jorgeortizfuentes/spanish-offensive-language type: jorgeortizfuentes/spanish-offensive-language metrics: - type: f1 value: 0.5861096817040292 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-offensive-language-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the jorgeortizfuentes/spanish-offensive-language dataset. It achieves the following results on the evaluation set: - Loss: 0.7753 - F1: 0.5861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6383 | 1.0 | 166 | 0.6105 | 0.4916 | | 0.5779 | 2.0 | 332 | 0.6027 | 0.5531 | | 0.4791 | 3.0 | 498 | 0.6503 | 0.5852 | | 0.3626 | 4.0 | 664 | 0.7753 | 0.5861 | | 0.2758 | 5.0 | 830 | 0.8416 | 0.5856 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-offensive-language-bert-base-spanish-wwm-cased This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the jorgeortizfuentes/spanish-offensive-language dataset. It achieves the following results on the evaluation set: - Loss: 0.7753 - F1: 0.5861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 13 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6383 | 1.0 | 166 | 0.6105 | 0.4916 | | 0.5779 | 2.0 | 332 | 0.6027 | 0.5531 | | 0.4791 | 3.0 | 498 | 0.6503 | 0.5852 | | 0.3626 | 4.0 | 664 | 0.7753 | 0.5861 | | 0.2758 | 5.0 | 830 | 0.8416 | 0.5856 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.13.3
{"datasets": ["jorgeortizfuentes/spanish-offensive-language"], "language": ["es"], "metrics": ["f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "spanish-offensive-language-bert-base-spanish-wwm-cased", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "jorgeortizfuentes/spanish-offensive-language", "type": "jorgeortizfuentes/spanish-offensive-language"}, "metrics": [{"type": "f1", "value": 0.5861096817040292, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,722
ozcangundes/T5-base-for-BioQA
ozcangundes
question-answering
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "question-answering", "dataset:bioASQ", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-09-22T09:31:21+00:00
55
0
--- datasets: - bioASQ language: english license: mit pipeline_tag: question-answering --- # T5-base model fine-tuned on BioASQ for Biological Question Answering 👩‍⚕️👨‍⚕️ [Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task. ## Details of T5 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Dependencies transformers == 4.3.3 sentencepiece >= 0.1.94 ## Usage 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA") model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"]) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.", "question":"Which protein does empagliflozin inhibit?" } get_answer(question["question"],question["context"]) ``` > SGLT2 ### Example 2 ```python question2={ "context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.", "question":"What is the typical rash associated with gluten?" } get_answer(question2["question"],question2["context"]) ``` > dermatitis herpetiformis Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
null
BioNLP
# T5-base model fine-tuned on BioASQ for Biological Question Answering 👩‍⚕️👨‍⚕️ [Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task. ## Details of T5 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Dependencies transformers == 4.3.3 sentencepiece >= 0.1.94 ## Usage 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA") model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"]) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.", "question":"Which protein does empagliflozin inhibit?" } get_answer(question["question"],question["context"]) ``` > SGLT2 ### Example 2 ```python question2={ "context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.", "question":"What is the typical rash associated with gluten?" } get_answer(question2["question"],question2["context"]) ``` > dermatitis herpetiformis Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
{"datasets": ["bioASQ"], "language": "english", "license": "mit", "pipeline_tag": "question-answering"}
task
[ "QUESTION_ANSWERING" ]
42,723
Helsinki-NLP/opus-mt-tc-base-fi-uk
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "fi", "uk", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-24T12:28:09Z
2023-10-10T21:35:31+00:00
18
0
--- language: - fi - uk license: cc-by-4.0 tags: - translation - opus-mt-tc --- # opus-mt-tc-base-fi-uk Neural machine translation model for translating from Finnish (fi) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): fin * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip) * more information released models: [OPUS-MT fin-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-ukr/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Afrikka on ihmiskunnan kehto.", "Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen." ] model_name = "pytorch-models/opus-mt-tc-base-fi-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Африка є колискою людства. # Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-fi-uk") print(pipe("Afrikka on ihmiskunnan kehto.")) # expected output: Африка є колискою людства. ``` ## Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fin-ukr | flores101-devtest | 0.49562 | 19.7 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:00:05 EET 2022 * port machine: LM0-400-22516.local
null
Non_BioNLP
# opus-mt-tc-base-fi-uk Neural machine translation model for translating from Finnish (fi) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): fin * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip) * more information released models: [OPUS-MT fin-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-ukr/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Afrikka on ihmiskunnan kehto.", "Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen." ] model_name = "pytorch-models/opus-mt-tc-base-fi-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Африка є колискою людства. # Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-fi-uk") print(pipe("Afrikka on ihmiskunnan kehto.")) # expected output: Африка є колискою людства. ``` ## Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | fin-ukr | flores101-devtest | 0.49562 | 19.7 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:00:05 EET 2022 * port machine: LM0-400-22516.local
{"language": ["fi", "uk"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"]}
task
[ "TRANSLATION" ]
42,724
echarlaix/t5-small-onnx
echarlaix
translation
[ "transformers", "onnx", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-16T16:58:04Z
2023-10-26T09:15:09+00:00
151
3
--- datasets: - c4 language: - en - fr - ro - de - multilingual license: apache-2.0 tags: - summarization - translation --- ## [t5-small](https://huggingface.co/t5-small) exported to the ONNX format ## Model description [T5](https://huggingface.co/docs/transformers/model_doc/t5#t5) is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. For more information, please take a look at the original paper. Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Usage example You can use this model with Transformers *pipeline*. ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("echarlaix/t5-small-onnx") model = ORTModelForSeq2SeqLM.from_pretrained("echarlaix/t5-small-onnx") translator = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer) results = translator("My name is Eustache and I have a pet raccoon") print(results) ```
null
Non_BioNLP
## [t5-small](https://huggingface.co/t5-small) exported to the ONNX format ## Model description [T5](https://huggingface.co/docs/transformers/model_doc/t5#t5) is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. For more information, please take a look at the original paper. Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Usage example You can use this model with Transformers *pipeline*. ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("echarlaix/t5-small-onnx") model = ORTModelForSeq2SeqLM.from_pretrained("echarlaix/t5-small-onnx") translator = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer) results = translator("My name is Eustache and I have a pet raccoon") print(results) ```
{"datasets": ["c4"], "language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["summarization", "translation"]}
task
[ "TRANSLATION", "SUMMARIZATION" ]
42,725
4bit/Qwen2-VL-7B-Instruct
4bit
text-generation
[ "safetensors", "qwen2_vl", "multimodal", "text-generation", "conversational", "en", "arxiv:2308.12966", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "license:apache-2.0", "region:us" ]
2024-08-30T02:36:48Z
2024-08-30T02:48:17+00:00
10
0
--- base_model: Qwen/Qwen2-VL-7B-Instruct language: - en license: apache-2.0 pipeline_tag: text-generation tags: - multimodal --- # Qwen2-VL-7B-Instruct ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 | | DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** | | InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** | | ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 | | TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** | | OCRBench | 794 | **852** | 785 | 845 | | MTVQA | - | - | -| **26.3** | | RealWorldQA | 64.4 | - | - | **70.1** | | MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 | | MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** | | MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 | | MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 | | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** | | MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 | | MathVision | - | - | - | **16.3** | ### Video Benchmarks | Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MVBench | 66.4 | 56.7 | - | **67.0** | | PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** | | EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** | | Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** | ## Requirements The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages1] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{Qwen2-VL, title={Qwen2-VL}, author={Qwen team}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
null
Non_BioNLP
# Qwen2-VL-7B-Instruct ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 | | DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** | | InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** | | ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 | | TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** | | OCRBench | 794 | **852** | 785 | 845 | | MTVQA | - | - | -| **26.3** | | RealWorldQA | 64.4 | - | - | **70.1** | | MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 | | MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** | | MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 | | MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 | | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** | | MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 | | MathVision | - | - | - | **16.3** | ### Video Benchmarks | Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MVBench | 66.4 | 56.7 | - | **67.0** | | PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** | | EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** | | Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** | ## Requirements The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. # min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a images list as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages1] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, We provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. ```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL are applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{Qwen2-VL, title={Qwen2-VL}, author={Qwen team}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
{"base_model": "Qwen/Qwen2-VL-7B-Instruct", "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["multimodal"]}
task
[ "QUESTION_ANSWERING" ]
42,726
SoraKang97/UXwriting-en-to-ko
SoraKang97
translation
[ "transformers", "safetensors", "t5", "text2text-generation", "UX writing", "English to Korean", "Translation", "Transcreation", "User Experience", "translation", "en", "ko", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-12-13T14:13:19Z
2023-12-14T13:00:26+00:00
122
0
--- language: - en - ko pipeline_tag: translation tags: - UX writing - English to Korean - Translation - Transcreation - User Experience inference: false --- # UXwriting-en-to-ko <!-- Provide a quick summary of what the model is/does. --> This model is designed to enhance web service UX writing through the fine-tuning of a specialized localization approach, with a bilingual dataset(ENG<>KOR) to adapt UX content to different linguistic and cultural contexts.
null
Non_BioNLP
# UXwriting-en-to-ko <!-- Provide a quick summary of what the model is/does. --> This model is designed to enhance web service UX writing through the fine-tuning of a specialized localization approach, with a bilingual dataset(ENG<>KOR) to adapt UX content to different linguistic and cultural contexts.
{"language": ["en", "ko"], "pipeline_tag": "translation", "tags": ["UX writing", "English to Korean", "Translation", "Transcreation", "User Experience"], "inference": false}
task
[ "TRANSLATION" ]
42,727
MultiBertGunjanPatrick/multiberts-seed-2-160k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-2", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T05:03:04+00:00
112
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-2 --- # MultiBERTs Seed 2 Checkpoint 160k (uncased) Seed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-160k') model = BertModel.from_pretrained("multiberts-seed-2-160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
null
Non_BioNLP
# MultiBERTs Seed 2 Checkpoint 160k (uncased) Seed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-160k') model = BertModel.from_pretrained("multiberts-seed-2-160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"]}
task
[ "QUESTION_ANSWERING" ]
42,729
luukschmitz/Geofin2
luukschmitz
text-classification
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain", "base_model:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis", "base_model:finetune:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-13T15:41:41Z
2024-07-13T15:43:10+00:00
109
0
--- base_model: mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.8448383808135986 f1_macro: 0.6157720421369062 f1_micro: 0.6903765690376569 f1_weighted: 0.6859027993316449 precision_macro: 0.6455502935971685 precision_micro: 0.6903765690376569 precision_weighted: 0.686461837753025 recall_macro: 0.5960579394741132 recall_micro: 0.6903765690376569 recall_weighted: 0.6903765690376569 accuracy: 0.6903765690376569
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.8448383808135986 f1_macro: 0.6157720421369062 f1_micro: 0.6903765690376569 f1_weighted: 0.6859027993316449 precision_macro: 0.6455502935971685 precision_micro: 0.6903765690376569 precision_weighted: 0.686461837753025 recall_macro: 0.5960579394741132 recall_micro: 0.6903765690376569 recall_weighted: 0.6903765690376569 accuracy: 0.6903765690376569
{"base_model": "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
42,730
spacemanidol/flan-t5-base-1-6-xsum
spacemanidol
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-02-28T14:55:14Z
2023-03-09T16:05:13+00:00
12
0
--- datasets: - xsum metrics: - rouge tags: - generated_from_trainer model-index: - name: base-1-6 results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - type: rouge value: 31.8847 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-1-6 This model is a fine-tuned version of [x/base-1-6/](https://huggingface.co/x/base-1-6/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.9235 - Rouge1: 31.8847 - Rouge2: 10.2702 - Rougel: 24.8513 - Rougelsum: 24.851 - Gen Len: 28.2165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.10.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-1-6 This model is a fine-tuned version of [x/base-1-6/](https://huggingface.co/x/base-1-6/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.9235 - Rouge1: 31.8847 - Rouge2: 10.2702 - Rougel: 24.8513 - Rougelsum: 24.851 - Gen Len: 28.2165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.10.0 - Tokenizers 0.13.2
{"datasets": ["xsum"], "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "base-1-6", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "rouge", "value": 31.8847, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
42,731
SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP
SicariusSicariiStuff
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-09T21:10:58Z
2025-03-09T15:51:36+00:00
2,875
1
--- language: - en license: apache-2.0 --- <div align="center"> <b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b> </div> <img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;"> # Model Details Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook. # Another world record broken by Zion_Alpha! On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size! <div align="center"> <img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> </div> # Future Plans My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue. # Looking for Sponsors Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster. # Papers? Maybe. We'll see. No promises here 🤓 # Contact Details I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone. # Versions and QUANTS - Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha) - Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF) # Model architecture Based on Mistral 7B. I didn't even bother to alter the tokenizer. # The recommended prompt setting is Debug-deterministic: ``` temperature: 1 top_p: 1 top_k: 1 typical_p: 1 min_p: 1 repetition_penalty: 1 ``` # The recommended instruction template is Mistral: ``` {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- message['content'] -}} {%- else -%} {%- if message['role'] == 'user' -%} {{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}} {%- else -%} {{-'' + message['content'] + '</s>' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-''-}} {%- endif -%} ``` # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b> </div> <img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b> </div> <img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> <div align="center"> <b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8) </div> ### History The model was originally trained about 2 month after Mistral (v0.1) was released. As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score) <img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> --- ## Citation Information ``` @llm{Zion_Alpha_Instruction_Tuned_SLERP, author = {SicariusSicariiStuff}, title = {Zion_Alpha_Instruction_Tuned_SLERP}, year = {2024}, publisher = {Hugging Face}, url = {https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP} } ``` --- ### Support <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻 - [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
null
Non_BioNLP
<div align="center"> <b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b> </div> <img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;"> # Model Details Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook. # Another world record broken by Zion_Alpha! On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size! <div align="center"> <img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> </div> # Future Plans My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue. # Looking for Sponsors Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster. # Papers? Maybe. We'll see. No promises here 🤓 # Contact Details I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone. # Versions and QUANTS - Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha) - Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF) # Model architecture Based on Mistral 7B. I didn't even bother to alter the tokenizer. # The recommended prompt setting is Debug-deterministic: ``` temperature: 1 top_p: 1 top_k: 1 typical_p: 1 min_p: 1 repetition_penalty: 1 ``` # The recommended instruction template is Mistral: ``` {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- message['content'] -}} {%- else -%} {%- if message['role'] == 'user' -%} {{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}} {%- else -%} {{-'' + message['content'] + '</s>' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-''-}} {%- endif -%} ``` # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b> </div> <img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b> </div> <img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> <div align="center"> <b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8) </div> ### History The model was originally trained about 2 month after Mistral (v0.1) was released. As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score) <img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> --- ## Citation Information ``` @llm{Zion_Alpha_Instruction_Tuned_SLERP, author = {SicariusSicariiStuff}, title = {Zion_Alpha_Instruction_Tuned_SLERP}, year = {2024}, publisher = {Hugging Face}, url = {https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP} } ``` --- ### Support <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻 - [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
{"language": ["en"], "license": "apache-2.0"}
task
[ "TRANSLATION" ]
42,732
aroot/mbart-finetuned-eng-ind-27942457988
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-30T20:23:50Z
2023-06-30T20:42:48+00:00
9
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: mbart-finetuned-eng-ind-27942457988 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-27942457988 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8577 - Bleu: 20.4223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-ind-27942457988 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8577 - Bleu: 20.4223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-ind-27942457988", "results": []}]}
task
[ "TRANSLATION" ]
42,733
Ayham/distilbert_gpt2_summarization_xsum
Ayham
text2text-generation
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:xsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-12-20T20:31:56+00:00
19
0
--- datasets: - xsum tags: - generated_from_trainer model-index: - name: distilbert_gpt2_summarization_xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_gpt2_summarization_xsum This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_gpt2_summarization_xsum This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"datasets": ["xsum"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_gpt2_summarization_xsum", "results": []}]}
task
[ "SUMMARIZATION" ]
42,734
FuturisticVibes/c4ai-command-r-08-2024-8.0bpw-h8-exl2
FuturisticVibes
text-generation
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
2024-08-30T17:56:03Z
2024-08-31T03:03:43+00:00
6
1
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar library_name: transformers license: cc-by-nc-4.0 --- I have no idea what I’m doing… if this causes the apocalypse someone please let me know. c4ai-command-r-08-2024 8.0bpw h8 EXL2 Includes [measurement.json](https://huggingface.co/FuturisticVibes/c4ai-command-r-08-2024-8.0bpw-h8-exl2/tree/measurement) file for further quantization Original Model: https://huggingface.co/CohereForAI/c4ai-command-r-08-2024 # Original Model Card # Model Card for C4AI Command R 08-2024 ## Model Summary <!-- Provide a quick summary of what the model is/does. --> C4AI Command R 08-2024 is a research release of a 35 billion parameter highly performant generative model. Command R 08-2024 is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command R 08-2024 has the capability for multilingual generation, trained on 23 languages and evaluated in 10 languages and highly performant RAG capabilities. Developed by: Cohere and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-08-2024 - Model Size: 35 billion parameters - Context length: 128K **Try C4AI Command R** If you want to try Command R before downloading the weights, the model is hosted in a hugging face space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed. **Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R 08-2024 supports a context length of 128K. ### Tool use & Agent capabilities: Command R 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance. Command R 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). Command R 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling) The code snippet below shows a minimal working example on how to render a prompt. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG).This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
null
Non_BioNLP
I have no idea what I’m doing… if this causes the apocalypse someone please let me know. c4ai-command-r-08-2024 8.0bpw h8 EXL2 Includes [measurement.json](https://huggingface.co/FuturisticVibes/c4ai-command-r-08-2024-8.0bpw-h8-exl2/tree/measurement) file for further quantization Original Model: https://huggingface.co/CohereForAI/c4ai-command-r-08-2024 # Original Model Card # Model Card for C4AI Command R 08-2024 ## Model Summary <!-- Provide a quick summary of what the model is/does. --> C4AI Command R 08-2024 is a research release of a 35 billion parameter highly performant generative model. Command R 08-2024 is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command R 08-2024 has the capability for multilingual generation, trained on 23 languages and evaluated in 10 languages and highly performant RAG capabilities. Developed by: Cohere and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-08-2024 - Model Size: 35 billion parameters - Context length: 128K **Try C4AI Command R** If you want to try Command R before downloading the weights, the model is hosted in a hugging face space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed. **Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R 08-2024 supports a context length of 128K. ### Tool use & Agent capabilities: Command R 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance. Command R 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). Command R 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling) The code snippet below shows a minimal working example on how to render a prompt. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG).This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "library_name": "transformers", "license": "cc-by-nc-4.0"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
42,735
Hamid-reza/mt5-small-finetuned-digikala-titleGen
Hamid-reza
summarization
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-07T19:19:31Z
2023-02-08T17:09:14+00:00
13
0
--- license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-digikala-titleGen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-digikala-titleGen This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8801 - Rouge1: 70.3489 - Rouge2: 43.245 - Rougel: 34.6608 - Rougelsum: 34.6608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 7.5555 | 1.0 | 847 | 3.2594 | 45.6729 | 19.6446 | 31.5974 | 31.5974 | | 4.1386 | 2.0 | 1694 | 3.0347 | 58.3021 | 32.8172 | 33.9012 | 33.9012 | | 3.7449 | 3.0 | 2541 | 2.9665 | 66.731 | 40.8991 | 34.2203 | 34.2203 | | 3.5575 | 4.0 | 3388 | 2.9102 | 65.598 | 39.4081 | 34.5116 | 34.5116 | | 3.4062 | 5.0 | 4235 | 2.8944 | 69.6081 | 42.8707 | 34.6622 | 34.6622 | | 3.3408 | 6.0 | 5082 | 2.8888 | 70.2123 | 42.8639 | 34.5669 | 34.5669 | | 3.3025 | 7.0 | 5929 | 2.8801 | 70.3489 | 43.245 | 34.6608 | 34.6608 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-digikala-titleGen This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8801 - Rouge1: 70.3489 - Rouge2: 43.245 - Rougel: 34.6608 - Rougelsum: 34.6608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 7.5555 | 1.0 | 847 | 3.2594 | 45.6729 | 19.6446 | 31.5974 | 31.5974 | | 4.1386 | 2.0 | 1694 | 3.0347 | 58.3021 | 32.8172 | 33.9012 | 33.9012 | | 3.7449 | 3.0 | 2541 | 2.9665 | 66.731 | 40.8991 | 34.2203 | 34.2203 | | 3.5575 | 4.0 | 3388 | 2.9102 | 65.598 | 39.4081 | 34.5116 | 34.5116 | | 3.4062 | 5.0 | 4235 | 2.8944 | 69.6081 | 42.8707 | 34.6622 | 34.6622 | | 3.3408 | 6.0 | 5082 | 2.8888 | 70.2123 | 42.8639 | 34.5669 | 34.5669 | | 3.3025 | 7.0 | 5929 | 2.8801 | 70.3489 | 43.245 | 34.6608 | 34.6608 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-digikala-titleGen", "results": []}]}
task
[ "SUMMARIZATION" ]
42,736
CohereForAI/c4ai-command-r-plus-08-2024
CohereForAI
text-generation
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "doi:10.57967/hf/3136", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-08-21T09:23:17Z
2025-04-15T21:58:00+00:00
4,476
249
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar library_name: transformers license: cc-by-nc-4.0 inference: false extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time. extra_gated_fields: Name: text Affiliation: text Country: type: select options: - Aruba - Afghanistan - Angola - Anguilla - Åland Islands - Albania - Andorra - United Arab Emirates - Argentina - Armenia - American Samoa - Antarctica - French Southern Territories - Antigua and Barbuda - Australia - Austria - Azerbaijan - Burundi - Belgium - Benin - Bonaire Sint Eustatius and Saba - Burkina Faso - Bangladesh - Bulgaria - Bahrain - Bahamas - Bosnia and Herzegovina - Saint Barthélemy - Belarus - Belize - Bermuda - Plurinational State of Bolivia - Brazil - Barbados - Brunei-Darussalam - Bhutan - Bouvet-Island - Botswana - Central African Republic - Canada - Cocos (Keeling) Islands - Switzerland - Chile - China - Côte-dIvoire - Cameroon - Democratic Republic of the Congo - Cook Islands - Colombia - Comoros - Cabo Verde - Costa Rica - Cuba - Curaçao - Christmas Island - Cayman Islands - Cyprus - Czechia - Germany - Djibouti - Dominica - Denmark - Dominican Republic - Algeria - Ecuador - Egypt - Eritrea - Western Sahara - Spain - Estonia - Ethiopia - Finland - Fiji - Falkland Islands (Malvinas) - France - Faroe Islands - Federated States of Micronesia - Gabon - United Kingdom - Georgia - Guernsey - Ghana - Gibraltar - Guinea - Guadeloupe - Gambia - Guinea Bissau - Equatorial Guinea - Greece - Grenada - Greenland - Guatemala - French Guiana - Guam - Guyana - Hong Kong - Heard Island and McDonald Islands - Honduras - Croatia - Haiti - Hungary - Indonesia - Isle of Man - India - British Indian Ocean Territory - Ireland - Islamic Republic of Iran - Iraq - Iceland - Israel - Italy - Jamaica - Jersey - Jordan - Japan - Kazakhstan - Kenya - Kyrgyzstan - Cambodia - Kiribati - Saint-Kitts-and-Nevis - South Korea - Kuwait - Lao-Peoples-Democratic-Republic - Lebanon - Liberia - Libya - Saint-Lucia - Liechtenstein - Sri Lanka - Lesotho - Lithuania - Luxembourg - Latvia - Macao - Saint Martin (French-part) - Morocco - Monaco - Republic of Moldova - Madagascar - Maldives - Mexico - Marshall Islands - North Macedonia - Mali - Malta - Myanmar - Montenegro - Mongolia - Northern Mariana Islands - Mozambique - Mauritania - Montserrat - Martinique - Mauritius - Malawi - Malaysia - Mayotte - Namibia - New Caledonia - Niger - Norfolk Island - Nigeria - Nicaragua - Niue - Netherlands - Norway - Nepal - Nauru - New Zealand - Oman - Pakistan - Panama - Pitcairn - Peru - Philippines - Palau - Papua New Guinea - Poland - Puerto Rico - North Korea - Portugal - Paraguay - State of Palestine - French Polynesia - Qatar - Réunion - Romania - Russia - Rwanda - Saudi Arabia - Sudan - Senegal - Singapore - South Georgia and the South Sandwich Islands - Saint Helena Ascension and Tristan da Cunha - Svalbard and Jan Mayen - Solomon Islands - Sierra Leone - El Salvador - San Marino - Somalia - Saint Pierre and Miquelon - Serbia - South Sudan - Sao Tome and Principe - Suriname - Slovakia - Slovenia - Sweden - Eswatini - Sint Maarten (Dutch-part) - Seychelles - Syrian Arab Republic - Turks and Caicos Islands - Chad - Togo - Thailand - Tajikistan - Tokelau - Turkmenistan - Timor Leste - Tonga - Trinidad and Tobago - Tunisia - Turkey - Tuvalu - Taiwan - United Republic of Tanzania - Uganda - Ukraine - United States Minor Outlying Islands - Uruguay - United-States - Uzbekistan - Holy See (Vatican City State) - Saint Vincent and the Grenadines - Bolivarian Republic of Venezuela - Virgin Islands British - Virgin Islands U.S. - VietNam - Vanuatu - Wallis and Futuna - Samoa - Yemen - South Africa - Zambia - Zimbabwe I agree to use this model for non-commercial use ONLY: checkbox --- # Model Card for C4AI Command R+ 08-2024 ## Model Summary C4AI Command R+ 08-2024 is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ 08-2024 is a multilingual model trained on 23 languages and evaluated in 10 languages. Command R+ 08-2024 is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ 08-2024 is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R 08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024). - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus-08-2024 - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed. **Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R+ 08-2024 supports a context length of 128K. ### Grounded Generation and RAG Capabilities: Command R+ 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+ 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+ 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-prompt-template-rag-and-summarization), [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-rag-with-command-rr) and [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-summarization-with-command-rr). You can render the Grounded Generation prompt template by using the function `apply_grounded_generation_template()`. The code snippet below shows a minimal working example on how to render this prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Single-Step Tool Use Capabilities ("Function Calling"): Single-step tool use (or “Function Calling”) allows Command R+ 08-2024 to interact with external tools like APIs, databases, or search engines. Single-step tool use is made of two model inferences: - Tool Selection: The model decides which tools to call and with what parameters. It’s then up to the developer to execute these tool calls and obtain tool results. - Response Generation: The model generates the final response given the tool results. You can learn more about single step tool use in our [documentation](https://docs.cohere.com/docs/tool-use). Command R+ 08-2024 has been specifically trained with single-step tool use (or “Function Calling”) capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance. This is why we recommend using the prompt template described below. Command R+ 08-2024’s single-step tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R+ 08-2024's single-step tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#single-step-tool-use-with-command-rr-function-calling) and [here](https://docs.cohere.com/docs/prompting-command-r#single-step-tool-use-with-command-rr-function-calling-1). You can render the single-step tool use prompt template by using the function `apply_tool_use_template()`. The code snippet below shows a minimal working example on how to render this prompt. Command R+ 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling) to render the same prompt. <details> <summary><b>Usage: Rendering Single-Step Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Single-Step Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Single-Step Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Single-Step Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Multi-Step Tool Use Capabilities ("Agents"): Multi-step tool use is suited for building agents that can plan and execute a sequence of actions using multiple tools. Unlike single-step tool use, the model can perform several inference cycles, iterating through Action → Observation → Reflection until it decides on a final response. For more details, refer to our [documentation on multi-step tool use](https://docs.cohere.com/docs/multi-step-tool-use). Command R+ 08-2024 has been specifically trained with multi-step tool use (or “Agents”) capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance. This is why we recommend using the prompt template described below. The prompt template is not yet available in HuggingFace. However, comprehensive documentation for working with Command R+ 08-2024's multi-step tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#multi-step-tool-use-with-command-rr-agents) and [here](https://docs.cohere.com/docs/prompting-command-r#multihop-tool-use-with-command-rr-agents). ### Code Capabilities: Command R+ 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ 08-2024 chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-plus-08-2024). ### Cite To cite this model, use: ``` @misc {cohere_for_ai_2024, author = { {Cohere For AI} }, title = { c4ai-command-r-plus-08-2024 }, year = 2024, url = { https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024 }, doi = { 10.57967/hf/3135 }, publisher = { Hugging Face } } ```
null
Non_BioNLP
# Model Card for Cohere Labs Command R+ 08-2024 ## Model Summary Cohere Labs Command R+ 08-2024 is an open weights research release of a 104B billion parameter model with highly advanced capabilities, this includes Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. Command R+ 08-2024 is a multilingual model trained on 23 languages and evaluated in 10 languages. Command R+ 08-2024 is optimized for a variety of use cases including reasoning, summarization, and question answering. Cohere Labs Command R+ 08-2024 is part of a family of open weight releases from Cohere Labs and Cohere. Our smaller companion model is [Cohere Labs Command R 08-2024](https://huggingface.co/CohereLabs/c4ai-command-r-08-2024). - Point of Contact: [Cohere Labs](https://cohere.for.ai/) - License:[CC-BY-NC](https://cohere.com/cohere-labs-cc-by-nc-license), requires also adhering to [Cohere Lab's Acceptable Use Policy](https://docs.cohere.com/docs/cohere-labs-acceptable-use-policy) - Model: coherelabs-command-r-plus-08-2024 - Model Size: 104 billion parameters - Context length: 128K **Try Cohere Labs Command R+** You can try out Cohere Labs Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereLabs/c4ai-command?model=command-r-plus-08-2024). **Usage** Please use `transformers` version 4.39.1 or higher ```python # pip install 'transformers>=4.39.1' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereLabs/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus-08-2024 chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed. **Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese). **Context length**: Command R+ 08-2024 supports a context length of 128K. ### Grounded Generation and RAG Capabilities: Command R+ 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+ 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+ 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-prompt-template-rag-and-summarization), [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-rag-with-command-rr) and [here](https://docs.cohere.com/docs/prompting-command-r#augmented-generation-summarization-with-command-rr). You can render the Grounded Generation prompt template by using the function `apply_grounded_generation_template()`. The code snippet below shows a minimal working example on how to render this prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereLabs/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Single-Step Tool Use Capabilities ("Function Calling"): Single-step tool use (or “Function Calling”) allows Command R+ 08-2024 to interact with external tools like APIs, databases, or search engines. Single-step tool use is made of two model inferences: - Tool Selection: The model decides which tools to call and with what parameters. It’s then up to the developer to execute these tool calls and obtain tool results. - Response Generation: The model generates the final response given the tool results. You can learn more about single step tool use in our [documentation](https://docs.cohere.com/docs/tool-use). Command R+ 08-2024 has been specifically trained with single-step tool use (or “Function Calling”) capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance. This is why we recommend using the prompt template described below. Command R+ 08-2024’s single-step tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ 08-2024 may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with Command R+ 08-2024's single-step tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#single-step-tool-use-with-command-rr-function-calling) and [here](https://docs.cohere.com/docs/prompting-command-r#single-step-tool-use-with-command-rr-function-calling-1). You can render the single-step tool use prompt template by using the function `apply_tool_use_template()`. The code snippet below shows a minimal working example on how to render this prompt. Command R+ 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling) to render the same prompt. <details> <summary><b>Usage: Rendering Single-Step Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereLabs/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Usage: Rendering prompts with the Single-Step Tool Use API [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereLabs/c4ai-command-r-plus-08-2024" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use # Type hints and docstrings from Python functions are automatically extracted def internet_search(query: str): """ Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query: Query to search the internet with """ pass def directly_answer(): """ Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass tools = [internet_search, directly_answer] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_chat_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Single-Step Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Single-Step Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Multi-Step Tool Use Capabilities ("Agents"): Multi-step tool use is suited for building agents that can plan and execute a sequence of actions using multiple tools. Unlike single-step tool use, the model can perform several inference cycles, iterating through Action → Observation → Reflection until it decides on a final response. For more details, refer to our [documentation on multi-step tool use](https://docs.cohere.com/docs/multi-step-tool-use). Command R+ 08-2024 has been specifically trained with multi-step tool use (or “Agents”) capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance. This is why we recommend using the prompt template described below. The prompt template is not yet available in HuggingFace. However, comprehensive documentation for working with Command R+ 08-2024's multi-step tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r#multi-step-tool-use-with-command-rr-agents) and [here](https://docs.cohere.com/docs/prompting-command-r#multihop-tool-use-with-command-rr-agents). ### Code Capabilities: Command R+ 08-2024 has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [email protected] ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/cohere-labs-cc-by-nc-license), requires also adhering to [Cohere Lab's Acceptable Use Policy](https://docs.cohere.com/docs/cohere-labs-acceptable-use-policy) ### Try Chat: You can try Command R+ 08-2024 chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereLabs/c4ai-command?model=command-r-plus-08-2024). ### Cite To cite this model, use: ``` @misc {cohere_for_ai_2024, author = { {Cohere Labs} }, title = { c4ai-command-r-plus-08-2024 }, year = 2024, url = { https://huggingface.co/CohereLabs/c4ai-command-r-plus-08-2024 }, doi = { 10.57967/hf/3135 }, publisher = { Hugging Face } } ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "library_name": "transformers", "license": "cc-by-nc-4.0", "inference": false, "extra_gated_prompt": "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy). You’ll receive email updates about Cohere Labs and Cohere research, events, products and services. You can unsubscribe at any time.", "extra_gated_fields": {"Name": "text", "Affiliation": "text", "Country": {"type": "select", "options": ["Aruba", "Afghanistan", "Angola", "Anguilla", "Åland Islands", "Albania", "Andorra", "United Arab Emirates", "Argentina", "Armenia", "American Samoa", "Antarctica", "French Southern Territories", "Antigua and Barbuda", "Australia", "Austria", "Azerbaijan", "Burundi", "Belgium", "Benin", "Bonaire Sint Eustatius and Saba", "Burkina Faso", "Bangladesh", "Bulgaria", "Bahrain", "Bahamas", "Bosnia and Herzegovina", "Saint Barthélemy", "Belarus", "Belize", "Bermuda", "Plurinational State of Bolivia", "Brazil", "Barbados", "Brunei-Darussalam", "Bhutan", "Bouvet-Island", "Botswana", "Central African Republic", "Canada", "Cocos (Keeling) Islands", "Switzerland", "Chile", "China", "Côte-dIvoire", "Cameroon", "Democratic Republic of the Congo", "Cook Islands", "Colombia", "Comoros", "Cabo Verde", "Costa Rica", "Cuba", "Curaçao", "Christmas Island", "Cayman Islands", "Cyprus", "Czechia", "Germany", "Djibouti", "Dominica", "Denmark", "Dominican Republic", "Algeria", "Ecuador", "Egypt", "Eritrea", "Western Sahara", "Spain", "Estonia", "Ethiopia", "Finland", "Fiji", "Falkland Islands (Malvinas)", "France", "Faroe Islands", "Federated States of Micronesia", "Gabon", "United Kingdom", "Georgia", "Guernsey", "Ghana", "Gibraltar", "Guinea", "Guadeloupe", "Gambia", "Guinea Bissau", "Equatorial Guinea", "Greece", "Grenada", "Greenland", "Guatemala", "French Guiana", "Guam", "Guyana", "Hong Kong", "Heard Island and McDonald Islands", "Honduras", "Croatia", "Haiti", "Hungary", "Indonesia", "Isle of Man", "India", "British Indian Ocean Territory", "Ireland", "Islamic Republic of Iran", "Iraq", "Iceland", "Israel", "Italy", "Jamaica", "Jersey", "Jordan", "Japan", "Kazakhstan", "Kenya", "Kyrgyzstan", "Cambodia", "Kiribati", "Saint-Kitts-and-Nevis", "South Korea", "Kuwait", "Lao-Peoples-Democratic-Republic", "Lebanon", "Liberia", "Libya", "Saint-Lucia", "Liechtenstein", "Sri Lanka", "Lesotho", "Lithuania", "Luxembourg", "Latvia", "Macao", "Saint Martin (French-part)", "Morocco", "Monaco", "Republic of Moldova", "Madagascar", "Maldives", "Mexico", "Marshall Islands", "North Macedonia", "Mali", "Malta", "Myanmar", "Montenegro", "Mongolia", "Northern Mariana Islands", "Mozambique", "Mauritania", "Montserrat", "Martinique", "Mauritius", "Malawi", "Malaysia", "Mayotte", "Namibia", "New Caledonia", "Niger", "Norfolk Island", "Nigeria", "Nicaragua", "Niue", "Netherlands", "Norway", "Nepal", "Nauru", "New Zealand", "Oman", "Pakistan", "Panama", "Pitcairn", "Peru", "Philippines", "Palau", "Papua New Guinea", "Poland", "Puerto Rico", "North Korea", "Portugal", "Paraguay", "State of Palestine", "French Polynesia", "Qatar", "Réunion", "Romania", "Russia", "Rwanda", "Saudi Arabia", "Sudan", "Senegal", "Singapore", "South Georgia and the South Sandwich Islands", "Saint Helena Ascension and Tristan da Cunha", "Svalbard and Jan Mayen", "Solomon Islands", "Sierra Leone", "El Salvador", "San Marino", "Somalia", "Saint Pierre and Miquelon", "Serbia", "South Sudan", "Sao Tome and Principe", "Suriname", "Slovakia", "Slovenia", "Sweden", "Eswatini", "Sint Maarten (Dutch-part)", "Seychelles", "Syrian Arab Republic", "Turks and Caicos Islands", "Chad", "Togo", "Thailand", "Tajikistan", "Tokelau", "Turkmenistan", "Timor Leste", "Tonga", "Trinidad and Tobago", "Tunisia", "Turkey", "Tuvalu", "Taiwan", "United Republic of Tanzania", "Uganda", "Ukraine", "United States Minor Outlying Islands", "Uruguay", "United-States", "Uzbekistan", "Holy See (Vatican City State)", "Saint Vincent and the Grenadines", "Bolivarian Republic of Venezuela", "Virgin Islands British", "Virgin Islands U.S.", "VietNam", "Vanuatu", "Wallis and Futuna", "Samoa", "Yemen", "South Africa", "Zambia", "Zimbabwe"]}, "I agree to use this model for non-commercial use ONLY": "checkbox"}}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
42,737
kenoc/mxbai-abat-matryoshka
kenoc
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5115", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:finetune:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-01-16T14:45:34Z
2025-01-16T14:46:23+00:00
8
0
--- base_model: mixedbread-ai/mxbai-embed-large-v1 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5115 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: gepackte Zahl in ein Integerfeld stellen, oder was geschieht, wenn Sie eine gepackte Zahl zum Inhalt eines Datumsfeldes addieren, oder was geschieht, wenn Sie eine gepackte Zahl in ein Cha- racter-Feld schreiben. Lassen Sie uns hierzu eine Konvertierungsregel als Beispiel ansehen. Dezimalzahl in ganze Zahl umwandeln Ist der Quelltyp ein Datenobjekt vom Typ p und der Zieltyp ein Daten- objekt vom Typ i (eine ganze Zahl), rundet das System die Dezimalstel- len des Quellfeldes und stellt den Inhalt rechtsbündig in das Zielfeld. I d Zi lf ld i Lä h i h d i d li k i N ll sentences: - Wie kann eine selbst entwickelte Pflegetransaktion für Geschäftsregeln oder sonstige Customizing-Inhalte helfen? - Welche Technologien bietet der ABAP-Stack zur Schnittstellenentwicklung an? - Welche Gründe sprechen gegen die Verwendung von Tabellen mit Kopfzeile in der heutigen ABAP-Entwicklung? - source_sentence: Systemkontext Das SAP BTP, ABAP Environment ist eine rein technische Plattform (ohne installierte Anwendungssoftware) und in einen bestimmten Systemkon- text eingebettet, innerhalb dessen Sie Anwendungen entwickeln und be- treiben können (siehe Abbildung 12.1). ABAP Development Tools SAP Fiori Tools Git- Repository abapGit heißt im unternehmenseigenen Netzwerk) installiert und baut ausgehend von diesem lokalen Netzwerk eine sichere Verbindung zur SAP BTP auf. SAP-Fiori-Apps Zur Bereitstellung Ihrer RAP-Anwendungen besitzt das ABAP Environment ein eigenes SAP Fiori Launchpad, das als Eintrittspunkt für verschiedene SAP-Fiori-Apps dienen kann und das Sie über einen Webbrowser erreichen (siehe Abschnitt 12.3, »SAP-Fiori-Apps bereitstellen und Berechtigungen vergeben«). Ihre eigenen sentences: - Wie kann man die Vorschau in der Inbox des Benutzers programmieren, sodass sie leistungsfähig und schnell ist und unerträgliche Wartezeiten bei der Arbeit mit dem SAP Business Workplace verhindert? - Was passiert beim Debuggen, wenn Checkpoints durch einen Doppelklick auf die Anweisung und Bestätigen angelegt werden? - Welche Tools werden im ABAP Environment für die Entwicklung von SAP Fiori-Anwendungen sowie für das Deployment und die Bereitstellung dieser Anwendungen verwendet, wobei auch die Verwendung der technischen Infrastrukturkomponenten berücksichtigt werden muss? - source_sentence: "sind das MVC-Modell und der Datenteil des (OData-) Services verpflichtend.\ \ Die Annotationen und UI-Anpassungen (dazu im Folgenden mehr) dienen der effizienteren\ \ Entwicklung und Erweiterbar- keit. UI Model View Controller Annotationen UI-Anpassungsmodus\ \ adaptieren/ erweitern nen, ist beschränkt. Abschnitt 3.2.4, »Anwendungen«, listet\ \ die Möglich- keiten auf, die zum Zeitpunkt der Drucklegung dieses Buches bestehen.\ \ In diesem Abschnitt beschreiben wir auch, wie Sie diese Änderungen als Anwendungsexperte\ \ an den betriebenen Oberflächen durchführen. Die SAP Web IDE auf der SAP Cloud\ \ Platform bietet eine alternative Möglich- keit, um solche Anpassungen als Entwickler\ \ zu erstellen. \x02 Annotationen Annotationen können im OData-Service über Erweiterungstechniken\ \ der jeweiligen" sentences: - Wie kann SAP Screen Personas bei der Implementierung eines ABAP-Skriptes, das unter einer Sekunde laufen muss, unterstützen? - Wie müssen die Verarbeitungszeitpunkte im PBO-Block, wenn eine interne Tabelle mit doppelten Einträgen verwendet wird und die Datenbanktabelle komplett gelesen werden soll? - Wie kann ich den Pretty-Printer über (F1) nutzen? - source_sentence: zu dem passt, was das Frame- work erwartet. Sie müssen gewissermaßen sicherstellen, dass sie auch die gleiche Sprache sprechen, wenn der ersehnte Anruf aus Hollywood kommt. Wichtige Frameworks im SAP-Standard Was macht einen guten und erfahrenen ABAP-Programmierer aus? Darüber kann man viele interessante Diskussionen führen. Eine der wesentlichen Komponenten dürfte die Kenntnis der im ABAP-Server nutzbaren Frameworks sein. Die im Folgenden vorgestellten Frameworks sind in erster Linie für Sie Web Dynpro ABAP Nachfolger der klassischen Dynpros Web Dynpro ABAP ist eine der wichtigsten Neuerungen der letzten zehn Jahre in der SAP-Welt. Es handelt sich dabei um eine Oberflä- chentechnologie, also sentences: - Was passiert, wenn man versucht, einen Datensatz in einer Datenbanktabelle zu modifizieren, aber kein Satz mit demselben Primärschlüssel existiert, und wie kann man den Returncode identifizieren? - Wie kann man die Rollenzuordnung zu seinem Benutzer in der Benutzerpflege (SU01) anzeigen lassen? - Was sind die Vorteile von Web Dynpro ABAP? - source_sentence: angegebenen anderen Objektins- tanz als Rücksprungziel definieren. Selbstverständlich sind auch Auf- rufe statischer Methoden möglich. Exceptions auswerten Returncode auswerten Nach Absetzen des Funktionsbausteinaufrufs erhalten Sie sofort die ll b d bl f k d d lung. Für jede Servergruppe werden bestimmte Schwellwerte gepflegt, die bestimmen, in welchem Umfang die Mitglieder der Gruppe Ressour- cen für asynchrone RFCs zur Verfügung stellen. Setzt ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe ab (DESTINATION IN GROUP), so prüft das System automatisch die Auslastung und ermittelt die Instanz, die die Anforderung bearbeiten wird. Neben den explizit in der Transaktion RZ12 definierten Servergruppen gibt es sentences: - Was passiert, wenn ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe absetzt und die Systeminstanzen bereits auslastet? - Welche Aufgaben kann man im Business Workplace des SAP-Systems erledigen? - Wie kann man das Visitor Pattern in ABAP umsetzen, indem man definiert, wie ein abstraktes Element von einem Besucher besucht werden soll? model-index: - name: mixedbread-ai/mxbai-embed-large-v1 results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.08963093145869948 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.20210896309314588 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.2601054481546573 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4182776801405975 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.08963093145869948 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.06736965436438194 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.05202108963093146 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.041827768014059755 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.08963093145869948 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.20210896309314588 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2601054481546573 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4182776801405975 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.2286145159685347 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.17133442129048454 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.18651234259488886 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.09490333919156414 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.20035149384885764 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.2618629173989455 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4147627416520211 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.09490333919156414 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.06678383128295255 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.0523725834797891 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0414762741652021 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.09490333919156414 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.20035149384885764 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2618629173989455 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.4147627416520211 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.23048397423019637 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1748068178648144 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.19003346049167824 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.09314586994727592 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.20035149384885764 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.2548330404217926 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.39718804920913886 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.09314586994727592 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.06678383128295255 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.050966608084358524 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.03971880492091388 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.09314586994727592 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.20035149384885764 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2548330404217926 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.39718804920913886 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.22531472414554513 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1728798783719698 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.18866030878645856 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.0773286467486819 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.18453427065026362 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.26537785588752194 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.39718804920913886 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.0773286467486819 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.061511423550087874 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.053075571177504395 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.03971880492091388 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0773286467486819 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.18453427065026362 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.26537785588752194 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.39718804920913886 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.21527271607083312 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.1597720869249868 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.17541010404455568 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.06678383128295255 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.1687170474516696 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.2530755711775044 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.38137082601054484 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.06678383128295255 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.056239015817223195 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.050615114235500885 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.03813708260105448 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06678383128295255 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1687170474516696 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2530755711775044 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.38137082601054484 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.1993984243853605 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.14423661673222307 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.1585449114563028 name: Cosine Map@100 --- # mixedbread-ai/mxbai-embed-large-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision e7857440379da569f68f19e8403b69cd7be26e50 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("kenoc/mxbai-abat-matryoshka") # Run inference sentences = [ 'angegebenen anderen Objektins- tanz als Rücksprungziel definieren. Selbstverständlich sind auch Auf- rufe statischer Methoden möglich. Exceptions auswerten Returncode auswerten Nach Absetzen des Funktionsbausteinaufrufs erhalten Sie sofort die ll b d bl f k d d lung. Für jede Servergruppe werden bestimmte Schwellwerte gepflegt, die bestimmen, in welchem Umfang die Mitglieder der Gruppe Ressour- cen für asynchrone RFCs zur Verfügung stellen. Setzt ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe ab (DESTINATION IN GROUP), so prüft das System automatisch die Auslastung und ermittelt die Instanz, die die Anforderung bearbeiten wird. Neben den explizit in der Transaktion RZ12 definierten Servergruppen gibt es', 'Was passiert, wenn ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe absetzt und die Systeminstanzen bereits auslastet?', 'Welche Aufgaben kann man im Business Workplace des SAP-Systems erledigen?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 | |:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------| | cosine_accuracy@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_accuracy@3 | 0.2021 | 0.2004 | 0.2004 | 0.1845 | 0.1687 | | cosine_accuracy@5 | 0.2601 | 0.2619 | 0.2548 | 0.2654 | 0.2531 | | cosine_accuracy@10 | 0.4183 | 0.4148 | 0.3972 | 0.3972 | 0.3814 | | cosine_precision@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_precision@3 | 0.0674 | 0.0668 | 0.0668 | 0.0615 | 0.0562 | | cosine_precision@5 | 0.052 | 0.0524 | 0.051 | 0.0531 | 0.0506 | | cosine_precision@10 | 0.0418 | 0.0415 | 0.0397 | 0.0397 | 0.0381 | | cosine_recall@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_recall@3 | 0.2021 | 0.2004 | 0.2004 | 0.1845 | 0.1687 | | cosine_recall@5 | 0.2601 | 0.2619 | 0.2548 | 0.2654 | 0.2531 | | cosine_recall@10 | 0.4183 | 0.4148 | 0.3972 | 0.3972 | 0.3814 | | **cosine_ndcg@10** | **0.2286** | **0.2305** | **0.2253** | **0.2153** | **0.1994** | | cosine_mrr@10 | 0.1713 | 0.1748 | 0.1729 | 0.1598 | 0.1442 | | cosine_map@100 | 0.1865 | 0.19 | 0.1887 | 0.1754 | 0.1585 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 5,115 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 252.94 tokens</li><li>max: 479 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 45.33 tokens</li><li>max: 129 tokens</li></ul> | * Samples: | positive | anchor | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | <code>Standardtabelle ITAB01 wird deklariert; die initiale Spei- cherplatzbelegung wird dem System überlassen. Die interne Tabelle hat zwei Felder, die hinsichtlich ihrer Deklarationen auf das Dictionary verweisen. Zeile 23 bis 26 Die interne Tabelle ITAB02 wird deklariert. Sie soll die Struktur der Zei- g Zeile der Datenbanktabelle. Zeile 52 bis 55 Die Datenbanktabelle wird satzweise in ihre Workarea gelesen. Von dort aus werden namensgleiche Felder in die Kopfzeile der internen Tabelle übertragen. Nach dem Übertragen wird der Inhalt der Kopf- zeile als neue Zeile an den Tabellenrumpf angehängt. Der Inhalt der Kopfzeile bleibt dabei erhalten und wird in diesem Fall durch</code> | <code>Was passiert, wenn eine interne Tabelle ohne Kopfzeile deklariert ist?</code> | | <code>lässt sich leicht ablesen, dass ein Konstruktor mit mehreren optionalen Parametern und einer Dokumenta- tion nötig wäre, um diese Methoden zu ersetzen. Umgebungsunabhängige Wiederholbarkeit Mit dem eigenständigen Aufbau der testrelevanten Daten stellt die Testklasse sicher, dass ihre Testmethoden in jeder Umgebung wiederholbar sind (Repeatable- Prinzip). Tests sind für das Refactoring notwendig, um Regression zu vermei- den. Allerdings müssen Tests auch häufig angepasst werden, wenn sich das Produktdesign durch Refactoring ändert. Wie kann der Auf- wand dafür minimiert werden? Welche Rolle spielen dabei Abhängig- keiten des Testcodes vom Produktcode? Abhängigkeiten können sowohl bei der Erzeugung als auch bei der Verwendung eines</code> | <code>Was ist das Ziel von Refactoring?</code> | | <code>es ermöglicht, Aufgaben innerhalb eines Netzwerks auf Clients und Server zu verteilen. Der Server bietet Dienste an, die die Clients (Anwender-PCs) nutzen. SAP R/3 und später SAP ERP liegt ein drei­ stufiges Konzept zugrunde:  Datenbank, Server und Client Compliance Verfahren zur Einhaltung von Gesetzen und Richtlinien, zum Beispiel des Sarbanes-Oxley Acts oder von Arbeits­ schutzgesetzen. Mitarbeiter und die externen Dienstleister, die über das »Arbeitszeitblatt« unter ande­ rem eine Zeiterfassung erlaubt. Customer Relationship Management  SAP Customer Relationship Management Customer Service Kundenservice. Die Komponente CS in SAP ERP unterstützt die Abläufe in der Serviceabwicklung, zum Bei­ spiel die Bearbeitung von</code> | <code>Welche Funktionen bietet SAP CRM?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `gradient_checkpointing`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: True - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 1.0 | 40 | - | 0.2192 | 0.2209 | 0.2122 | 0.1938 | 0.1583 | | 1.25 | 50 | 5.5269 | - | - | - | - | - | | 2.0 | 80 | - | 0.2215 | 0.2231 | 0.2179 | 0.2097 | 0.1887 | | 2.5 | 100 | 2.4018 | - | - | - | - | - | | **3.0** | **120** | **-** | **0.229** | **0.2266** | **0.2171** | **0.2188** | **0.1926** | | 3.75 | 150 | 1.4161 | - | - | - | - | - | | 4.0 | 160 | - | 0.2286 | 0.2305 | 0.2253 | 0.2153 | 0.1994 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.8 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 2.19.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# mixedbread-ai/mxbai-embed-large-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision e7857440379da569f68f19e8403b69cd7be26e50 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("kenoc/mxbai-abat-matryoshka") # Run inference sentences = [ 'angegebenen anderen Objektins- tanz als Rücksprungziel definieren. Selbstverständlich sind auch Auf- rufe statischer Methoden möglich. Exceptions auswerten Returncode auswerten Nach Absetzen des Funktionsbausteinaufrufs erhalten Sie sofort die ll b d bl f k d d lung. Für jede Servergruppe werden bestimmte Schwellwerte gepflegt, die bestimmen, in welchem Umfang die Mitglieder der Gruppe Ressour- cen für asynchrone RFCs zur Verfügung stellen. Setzt ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe ab (DESTINATION IN GROUP), so prüft das System automatisch die Auslastung und ermittelt die Instanz, die die Anforderung bearbeiten wird. Neben den explizit in der Transaktion RZ12 definierten Servergruppen gibt es', 'Was passiert, wenn ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe absetzt und die Systeminstanzen bereits auslastet?', 'Welche Aufgaben kann man im Business Workplace des SAP-Systems erledigen?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 | |:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------| | cosine_accuracy@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_accuracy@3 | 0.2021 | 0.2004 | 0.2004 | 0.1845 | 0.1687 | | cosine_accuracy@5 | 0.2601 | 0.2619 | 0.2548 | 0.2654 | 0.2531 | | cosine_accuracy@10 | 0.4183 | 0.4148 | 0.3972 | 0.3972 | 0.3814 | | cosine_precision@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_precision@3 | 0.0674 | 0.0668 | 0.0668 | 0.0615 | 0.0562 | | cosine_precision@5 | 0.052 | 0.0524 | 0.051 | 0.0531 | 0.0506 | | cosine_precision@10 | 0.0418 | 0.0415 | 0.0397 | 0.0397 | 0.0381 | | cosine_recall@1 | 0.0896 | 0.0949 | 0.0931 | 0.0773 | 0.0668 | | cosine_recall@3 | 0.2021 | 0.2004 | 0.2004 | 0.1845 | 0.1687 | | cosine_recall@5 | 0.2601 | 0.2619 | 0.2548 | 0.2654 | 0.2531 | | cosine_recall@10 | 0.4183 | 0.4148 | 0.3972 | 0.3972 | 0.3814 | | **cosine_ndcg@10** | **0.2286** | **0.2305** | **0.2253** | **0.2153** | **0.1994** | | cosine_mrr@10 | 0.1713 | 0.1748 | 0.1729 | 0.1598 | 0.1442 | | cosine_map@100 | 0.1865 | 0.19 | 0.1887 | 0.1754 | 0.1585 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 5,115 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 252.94 tokens</li><li>max: 479 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 45.33 tokens</li><li>max: 129 tokens</li></ul> | * Samples: | positive | anchor | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | <code>Standardtabelle ITAB01 wird deklariert; die initiale Spei- cherplatzbelegung wird dem System überlassen. Die interne Tabelle hat zwei Felder, die hinsichtlich ihrer Deklarationen auf das Dictionary verweisen. Zeile 23 bis 26 Die interne Tabelle ITAB02 wird deklariert. Sie soll die Struktur der Zei- g Zeile der Datenbanktabelle. Zeile 52 bis 55 Die Datenbanktabelle wird satzweise in ihre Workarea gelesen. Von dort aus werden namensgleiche Felder in die Kopfzeile der internen Tabelle übertragen. Nach dem Übertragen wird der Inhalt der Kopf- zeile als neue Zeile an den Tabellenrumpf angehängt. Der Inhalt der Kopfzeile bleibt dabei erhalten und wird in diesem Fall durch</code> | <code>Was passiert, wenn eine interne Tabelle ohne Kopfzeile deklariert ist?</code> | | <code>lässt sich leicht ablesen, dass ein Konstruktor mit mehreren optionalen Parametern und einer Dokumenta- tion nötig wäre, um diese Methoden zu ersetzen. Umgebungsunabhängige Wiederholbarkeit Mit dem eigenständigen Aufbau der testrelevanten Daten stellt die Testklasse sicher, dass ihre Testmethoden in jeder Umgebung wiederholbar sind (Repeatable- Prinzip). Tests sind für das Refactoring notwendig, um Regression zu vermei- den. Allerdings müssen Tests auch häufig angepasst werden, wenn sich das Produktdesign durch Refactoring ändert. Wie kann der Auf- wand dafür minimiert werden? Welche Rolle spielen dabei Abhängig- keiten des Testcodes vom Produktcode? Abhängigkeiten können sowohl bei der Erzeugung als auch bei der Verwendung eines</code> | <code>Was ist das Ziel von Refactoring?</code> | | <code>es ermöglicht, Aufgaben innerhalb eines Netzwerks auf Clients und Server zu verteilen. Der Server bietet Dienste an, die die Clients (Anwender-PCs) nutzen. SAP R/3 und später SAP ERP liegt ein drei­ stufiges Konzept zugrunde:  Datenbank, Server und Client Compliance Verfahren zur Einhaltung von Gesetzen und Richtlinien, zum Beispiel des Sarbanes-Oxley Acts oder von Arbeits­ schutzgesetzen. Mitarbeiter und die externen Dienstleister, die über das »Arbeitszeitblatt« unter ande­ rem eine Zeiterfassung erlaubt. Customer Relationship Management  SAP Customer Relationship Management Customer Service Kundenservice. Die Komponente CS in SAP ERP unterstützt die Abläufe in der Serviceabwicklung, zum Bei­ spiel die Bearbeitung von</code> | <code>Welche Funktionen bietet SAP CRM?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `gradient_checkpointing`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: True - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 1.0 | 40 | - | 0.2192 | 0.2209 | 0.2122 | 0.1938 | 0.1583 | | 1.25 | 50 | 5.5269 | - | - | - | - | - | | 2.0 | 80 | - | 0.2215 | 0.2231 | 0.2179 | 0.2097 | 0.1887 | | 2.5 | 100 | 2.4018 | - | - | - | - | - | | **3.0** | **120** | **-** | **0.229** | **0.2266** | **0.2171** | **0.2188** | **0.1926** | | 3.75 | 150 | 1.4161 | - | - | - | - | - | | 4.0 | 160 | - | 0.2286 | 0.2305 | 0.2253 | 0.2153 | 0.1994 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.8 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.2.1 - Datasets: 2.19.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "mixedbread-ai/mxbai-embed-large-v1", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5115", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "gepackte Zahl in ein Integerfeld stellen, oder was geschieht, wenn Sie eine gepackte Zahl zum Inhalt eines Datumsfeldes addieren, oder was geschieht, wenn Sie eine gepackte Zahl in ein Cha- racter-Feld schreiben. Lassen Sie uns hierzu eine Konvertierungsregel als Beispiel ansehen. Dezimalzahl in ganze Zahl umwandeln Ist der Quelltyp ein Datenobjekt vom Typ p und der Zieltyp ein Daten- objekt vom Typ i (eine ganze Zahl), rundet das System die Dezimalstel- len des Quellfeldes und stellt den Inhalt rechtsbündig in das Zielfeld. I d Zi lf ld i Lä h i h d i d li k i N ll", "sentences": ["Wie kann eine selbst entwickelte Pflegetransaktion für Geschäftsregeln oder sonstige Customizing-Inhalte helfen?", "Welche Technologien bietet der ABAP-Stack zur Schnittstellenentwicklung an?", "Welche Gründe sprechen gegen die Verwendung von Tabellen mit Kopfzeile in der heutigen ABAP-Entwicklung?"]}, {"source_sentence": "Systemkontext Das SAP BTP, ABAP Environment ist eine rein technische Plattform (ohne installierte Anwendungssoftware) und in einen bestimmten Systemkon- text eingebettet, innerhalb dessen Sie Anwendungen entwickeln und be- treiben können (siehe Abbildung 12.1). ABAP Development Tools SAP Fiori Tools Git- Repository abapGit heißt im unternehmenseigenen Netzwerk) installiert und baut ausgehend von diesem lokalen Netzwerk eine sichere Verbindung zur SAP BTP auf. SAP-Fiori-Apps Zur Bereitstellung Ihrer RAP-Anwendungen besitzt das ABAP Environment ein eigenes SAP Fiori Launchpad, das als Eintrittspunkt für verschiedene SAP-Fiori-Apps dienen kann und das Sie über einen Webbrowser erreichen (siehe Abschnitt 12.3, »SAP-Fiori-Apps bereitstellen und Berechtigungen vergeben«). Ihre eigenen", "sentences": ["Wie kann man die Vorschau in der Inbox des Benutzers programmieren, sodass sie leistungsfähig und schnell ist und unerträgliche Wartezeiten bei der Arbeit mit dem SAP Business Workplace verhindert?", "Was passiert beim Debuggen, wenn Checkpoints durch einen Doppelklick auf die Anweisung und Bestätigen angelegt werden?", "Welche Tools werden im ABAP Environment für die Entwicklung von SAP Fiori-Anwendungen sowie für das Deployment und die Bereitstellung dieser Anwendungen verwendet, wobei auch die Verwendung der technischen Infrastrukturkomponenten berücksichtigt werden muss?"]}, {"source_sentence": "sind das MVC-Modell und der Datenteil des (OData-) Services verpflichtend. Die Annotationen und UI-Anpassungen (dazu im Folgenden mehr) dienen der effizienteren Entwicklung und Erweiterbar- keit. UI Model View Controller Annotationen UI-Anpassungsmodus adaptieren/ erweitern nen, ist beschränkt. Abschnitt 3.2.4, »Anwendungen«, listet die Möglich- keiten auf, die zum Zeitpunkt der Drucklegung dieses Buches bestehen. In diesem Abschnitt beschreiben wir auch, wie Sie diese Änderungen als Anwendungsexperte an den betriebenen Oberflächen durchführen. Die SAP Web IDE auf der SAP Cloud Platform bietet eine alternative Möglich- keit, um solche Anpassungen als Entwickler zu erstellen. \u0002 Annotationen Annotationen können im OData-Service über Erweiterungstechniken der jeweiligen", "sentences": ["Wie kann SAP Screen Personas bei der Implementierung eines ABAP-Skriptes, das unter einer Sekunde laufen muss, unterstützen?", "Wie müssen die Verarbeitungszeitpunkte im PBO-Block, wenn eine interne Tabelle mit doppelten Einträgen verwendet wird und die Datenbanktabelle komplett gelesen werden soll?", "Wie kann ich den Pretty-Printer über (F1) nutzen?"]}, {"source_sentence": "zu dem passt, was das Frame- work erwartet. Sie müssen gewissermaßen sicherstellen, dass sie auch die gleiche Sprache sprechen, wenn der ersehnte Anruf aus Hollywood kommt. Wichtige Frameworks im SAP-Standard Was macht einen guten und erfahrenen ABAP-Programmierer aus? Darüber kann man viele interessante Diskussionen führen. Eine der wesentlichen Komponenten dürfte die Kenntnis der im ABAP-Server nutzbaren Frameworks sein. Die im Folgenden vorgestellten Frameworks sind in erster Linie für Sie Web Dynpro ABAP Nachfolger der klassischen Dynpros Web Dynpro ABAP ist eine der wichtigsten Neuerungen der letzten zehn Jahre in der SAP-Welt. Es handelt sich dabei um eine Oberflä- chentechnologie, also", "sentences": ["Was passiert, wenn man versucht, einen Datensatz in einer Datenbanktabelle zu modifizieren, aber kein Satz mit demselben Primärschlüssel existiert, und wie kann man den Returncode identifizieren?", "Wie kann man die Rollenzuordnung zu seinem Benutzer in der Benutzerpflege (SU01) anzeigen lassen?", "Was sind die Vorteile von Web Dynpro ABAP?"]}, {"source_sentence": "angegebenen anderen Objektins- tanz als Rücksprungziel definieren. Selbstverständlich sind auch Auf- rufe statischer Methoden möglich. Exceptions auswerten Returncode auswerten Nach Absetzen des Funktionsbausteinaufrufs erhalten Sie sofort die ll b d bl f k d d lung. Für jede Servergruppe werden bestimmte Schwellwerte gepflegt, die bestimmen, in welchem Umfang die Mitglieder der Gruppe Ressour- cen für asynchrone RFCs zur Verfügung stellen. Setzt ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe ab (DESTINATION IN GROUP), so prüft das System automatisch die Auslastung und ermittelt die Instanz, die die Anforderung bearbeiten wird. Neben den explizit in der Transaktion RZ12 definierten Servergruppen gibt es", "sentences": ["Was passiert, wenn ein Programm einen asynchronen RFC-Funktionsbausteinaufruf gegen eine Servergruppe absetzt und die Systeminstanzen bereits auslastet?", "Welche Aufgaben kann man im Business Workplace des SAP-Systems erledigen?", "Wie kann man das Visitor Pattern in ABAP umsetzen, indem man definiert, wie ein abstraktes Element von einem Besucher besucht werden soll?"]}], "model-index": [{"name": "mixedbread-ai/mxbai-embed-large-v1", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.08963093145869948, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.20210896309314588, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.2601054481546573, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4182776801405975, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.08963093145869948, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.06736965436438194, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.05202108963093146, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.041827768014059755, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.08963093145869948, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.20210896309314588, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.2601054481546573, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4182776801405975, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.2286145159685347, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.17133442129048454, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.18651234259488886, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.09490333919156414, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.20035149384885764, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.2618629173989455, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.4147627416520211, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.09490333919156414, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.06678383128295255, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.0523725834797891, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0414762741652021, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.09490333919156414, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.20035149384885764, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.2618629173989455, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.4147627416520211, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.23048397423019637, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1748068178648144, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.19003346049167824, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.09314586994727592, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.20035149384885764, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.2548330404217926, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.39718804920913886, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.09314586994727592, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.06678383128295255, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.050966608084358524, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.03971880492091388, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.09314586994727592, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.20035149384885764, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.2548330404217926, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.39718804920913886, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.22531472414554513, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1728798783719698, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.18866030878645856, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.0773286467486819, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.18453427065026362, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.26537785588752194, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.39718804920913886, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.0773286467486819, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.061511423550087874, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.053075571177504395, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.03971880492091388, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.0773286467486819, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.18453427065026362, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.26537785588752194, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.39718804920913886, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.21527271607083312, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1597720869249868, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.17541010404455568, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.06678383128295255, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.1687170474516696, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.2530755711775044, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.38137082601054484, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.06678383128295255, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.056239015817223195, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.050615114235500885, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.03813708260105448, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.06678383128295255, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.1687170474516696, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.2530755711775044, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.38137082601054484, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.1993984243853605, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.14423661673222307, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.1585449114563028, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,738
wikd/nlp_aug
wikd
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
2024-04-23T07:53:23Z
2024-04-23T07:57:57+00:00
11
0
--- base_model: BAAI/bge-small-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce? - text: I bought 80methin9 that didn ' t meet my expectations. 18 there a way to 9et a partial kefond? - text: I ' d like to place a 1ar9e urdek for my business. Do you offer any special bulk 8hippin9 rates? - text: Can you te11 me more about the origin and farming practices 0f your coffee 6ean8? - text: 1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt? inference: true model-index: - name: SetFit with BAAI/bge-small-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8490566037735849 name: Accuracy --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 5 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Tech Support | <ul><li>"I ' m trying t0 place an order online but the website reep8 crashing. Gan y0o assist me?"</li><li>"My online urdek won ' t go thk0u9h - is there an i8soe with yuuk payment processing?"</li><li>"I ' m 9ettin9 an erkok when trying t0 redeem my loyalty p0int8. Who can a88ist me?"</li></ul> | | HR | <ul><li>"I ' m considering 8obmittin9 my two - week notice. What i8 the typical resignation pk0ce8s?"</li><li>"I ' m 1o0ring to switch t0 a part - time schedule. What are the requirements?"</li><li>"I ' d 1ire to fi1e a fokma1 complaint abuot workplace discrimination. Who do I contact?"</li></ul> | | Product | <ul><li>'What are your best practices f0k maintaining fu0d 9oa1ity and freshness?'</li><li>'What 6kand of nut butters du you carry that are peanot - fkee?'</li><li>'Do yuo have any seasonal or 1imited - time products in stock right now?'</li></ul> | | Returns | <ul><li>'My 9r0ceky delivery cuntained items that were spoiled or pa8t their expiration date. How do I 9et replacements?'</li><li>"1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?"</li><li>'1 keceived a damaged item in my online okdek. How do I go about getting a kefond?'</li></ul> | | Logistics | <ul><li>'I have a question about your h01iday 8hippin9 deadlines and pki0kiti2ed delivery options'</li><li>'I need to change the de1iveky address f0k my upcoming 0kder. How can I d0 that?'</li><li>'Can you exp1ain your pu1icie8 around item8 that are out uf stock or on 6ackokdek?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8491 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 10 | 16.125 | 28 | | Label | Training Sample Count | |:-------------|:----------------------| | Returns | 8 | | Tech Support | 8 | | Logistics | 8 | | HR | 8 | | Product | 8 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (10, 10) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.025 | 1 | 0.2231 | - | | 1.25 | 50 | 0.065 | - | | 2.5 | 100 | 0.0065 | - | | 3.75 | 150 | 0.0019 | - | | 5.0 | 200 | 0.0032 | - | | 6.25 | 250 | 0.0026 | - | | 7.5 | 300 | 0.0009 | - | | 8.75 | 350 | 0.0018 | - | | 10.0 | 400 | 0.0018 | - | ### Framework Versions - Python: 3.11.8 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.0 - PyTorch: 2.2.2 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 5 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Tech Support | <ul><li>"I ' m trying t0 place an order online but the website reep8 crashing. Gan y0o assist me?"</li><li>"My online urdek won ' t go thk0u9h - is there an i8soe with yuuk payment processing?"</li><li>"I ' m 9ettin9 an erkok when trying t0 redeem my loyalty p0int8. Who can a88ist me?"</li></ul> | | HR | <ul><li>"I ' m considering 8obmittin9 my two - week notice. What i8 the typical resignation pk0ce8s?"</li><li>"I ' m 1o0ring to switch t0 a part - time schedule. What are the requirements?"</li><li>"I ' d 1ire to fi1e a fokma1 complaint abuot workplace discrimination. Who do I contact?"</li></ul> | | Product | <ul><li>'What are your best practices f0k maintaining fu0d 9oa1ity and freshness?'</li><li>'What 6kand of nut butters du you carry that are peanot - fkee?'</li><li>'Do yuo have any seasonal or 1imited - time products in stock right now?'</li></ul> | | Returns | <ul><li>'My 9r0ceky delivery cuntained items that were spoiled or pa8t their expiration date. How do I 9et replacements?'</li><li>"1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?"</li><li>'1 keceived a damaged item in my online okdek. How do I go about getting a kefond?'</li></ul> | | Logistics | <ul><li>'I have a question about your h01iday 8hippin9 deadlines and pki0kiti2ed delivery options'</li><li>'I need to change the de1iveky address f0k my upcoming 0kder. How can I d0 that?'</li><li>'Can you exp1ain your pu1icie8 around item8 that are out uf stock or on 6ackokdek?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8491 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 10 | 16.125 | 28 | | Label | Training Sample Count | |:-------------|:----------------------| | Returns | 8 | | Tech Support | 8 | | Logistics | 8 | | HR | 8 | | Product | 8 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (10, 10) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.025 | 1 | 0.2231 | - | | 1.25 | 50 | 0.065 | - | | 2.5 | 100 | 0.0065 | - | | 3.75 | 150 | 0.0019 | - | | 5.0 | 200 | 0.0032 | - | | 6.25 | 250 | 0.0026 | - | | 7.5 | 300 | 0.0009 | - | | 8.75 | 350 | 0.0018 | - | | 10.0 | 400 | 0.0018 | - | ### Framework Versions - Python: 3.11.8 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.0 - PyTorch: 2.2.2 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Can you tell me about any on9uin9 promotions uk discounts on organic pk0doce?"}, {"text": "I bought 80methin9 that didn ' t meet my expectations. 18 there a way to 9et a partial kefond?"}, {"text": "I ' d like to place a 1ar9e urdek for my business. Do you offer any special bulk 8hippin9 rates?"}, {"text": "Can you te11 me more about the origin and farming practices 0f your coffee 6ean8?"}, {"text": "1 ' d like to exchange a product 1 bought in - 8toke. Do I need to bring the uki9inal receipt?"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8490566037735849, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,739
fathyshalab/reklambox-moebel-einrichtungshaeuser-setfit
fathyshalab
text-classification
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-04-28T17:55:48Z
2023-04-28T17:55:59+00:00
8
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # Linkthat/reklambox-moebel-einrichtungshaeuser-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("Linkthat/reklambox-moebel-einrichtungshaeuser-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# Linkthat/reklambox-moebel-einrichtungshaeuser-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("Linkthat/reklambox-moebel-einrichtungshaeuser-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
42,740
arize-ai/distilbert_reviews_with_language_drift
arize-ai
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:ecommerce_reviews_with_language_drift", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-06-01T05:46:28Z
2022-06-01T06:15:35+00:00
31
0
--- datasets: - ecommerce_reviews_with_language_drift license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer widget: - text: Poor quality of fabric and ridiculously tight at chest. It's way too short. example_title: Negative - text: One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter. example_title: Neutral - text: I liked the price most! Nothing to dislike here! example_title: Positive model-index: - name: distilbert_reviews_with_language_drift results: - task: type: text-classification name: Text Classification dataset: name: ecommerce_reviews_with_language_drift type: ecommerce_reviews_with_language_drift args: default metrics: - type: accuracy value: 0.818 name: Accuracy - type: f1 value: 0.8167126877417763 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_reviews_with_language_drift This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset. It achieves the following results on the evaluation set: - Loss: 0.4970 - Accuracy: 0.818 - F1: 0.8167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 | | 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 | | 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_reviews_with_language_drift This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset. It achieves the following results on the evaluation set: - Loss: 0.4970 - Accuracy: 0.818 - F1: 0.8167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 | | 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 | | 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
{"datasets": ["ecommerce_reviews_with_language_drift"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "widget": [{"text": "Poor quality of fabric and ridiculously tight at chest. It's way too short.", "example_title": "Negative"}, {"text": "One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter.", "example_title": "Neutral"}, {"text": "I liked the price most! Nothing to dislike here!", "example_title": "Positive"}], "model-index": [{"name": "distilbert_reviews_with_language_drift", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ecommerce_reviews_with_language_drift", "type": "ecommerce_reviews_with_language_drift", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.818, "name": "Accuracy"}, {"type": "f1", "value": 0.8167126877417763, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,741
mini1013/master_cate_ac4
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
2024-11-25T10:01:22Z
2024-11-25T10:01:43+00:00
197
0
--- base_model: mini1013/master_domain library_name: setfit metrics: - metric pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 길이조절 안경고정 밴드 코받침 패드 운동 캠핑 등산 진브라운 알리몽드 - text: 레이밴 안경테 RB3691VF 2509 남자 여자 동그란안경 아시안핏 시온아이엔티 - text: 밀착 스포츠안경줄 흔들림방지 안경스트랩 비앤비 - text: '[텐바이텐]바체타팩토리 가죽 안경 케이스 08 오렌지_추가 안 함_추가 안 함 신세계몰' - text: TUMI 투미 카본 티타늄 명품 안경테 메탈 스퀘어 남자 여자 공용 안경 04.TU10-0003-01 LFmall02 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: metric value: 0.9104360692836626 name: Metric --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'초경량 국산 안경테 베타 울템 카본 티타늄 뿔테안경 551-599_S571-2 브라운투톤 ENA아이웨어'</li><li>'B019 ORIGINAL GLASS CRYSTAL GREEN '</li><li>'니시데카즈오 BROWLINE2 하금테 근적외선 차단렌즈 아이라이크(EYE LIKE)'</li></ul> | | 1.0 | <ul><li>'레더렛소가죽선글라스파우치휴대용안경케이스 이정민'</li><li>'위에 안경 쓰는 파우치 편광 끼우는 선글라스 3종 세트 선그라스 클립 에끼우는 플립 온 클립선글라스3종세트_일반블랙 홉포엘'</li><li>'휴대용 가죽 선글라스 안경 파우치 케이스 보관함 안 PU안경케이스_그레이 라이프패션'</li></ul> | | 3.0 | <ul><li>'아이업꽈배기인조가죽안경줄10p세트선글라스줄 유어드림커머스'</li><li>'스트랩 캐주얼디자인줄 스토퍼줄 안경걸이 끈 B 더펭귄샵'</li><li>'천연 크리스탈 안경 선글라스 걸이 줄 원석 비즈 빈티지 에스닉 마스크 스트랩 겸용 블루 3mm 70-75CM nouville'</li></ul> | | 0.0 | <ul><li>'갤러리아 NIRNIR SUNGLASS 5 COLOR GREEN 갤러리아몰'</li><li>'여자 켓아이 뿔테 선그라스 썬그라스 남자 RORGGE 2111 상품선택_2유광블랙 온달이'</li><li>'뮤즈 서클 뿔테선글라스 코코아 푸치백'</li></ul> | | 2.0 | <ul><li>'로에드 안경 자국 코패드 코받침 눌림 선글라스 코 통증 방지 패드 교체 스티커 안경코패드 1.8mm(화이트)_2.8mm(화이트) 로에드'</li><li>'[힐포]국산 고급 초극세사 렌즈 안경닦이 김서림방지 클리너 크리너 악기수건 안경천 융s 05. knit 안경닦이30매 15x18cm_블루 모아텍스'</li><li>'자우버 렌즈 케어 클리닝 티슈 200매 메디위'</li></ul> | | 4.0 | <ul><li>'산리오 안경정리함 안경케이스 세트 6종 안경케이스시나모롤 지에이치글로벌'</li><li>'(이거찜) 프리미엄 가죽 안경집 안경케이스 가죽안경집 스카이 제이케이'</li><li>'스트랩 안경케이스 휴대용 안경파우치 가죽안경보관집 선글라스보관케이스 No.01 스트랩 안경케이스 블랙 여선영'</li></ul> | ## Evaluation ### Metrics | Label | Metric | |:--------|:-------| | **all** | 0.9104 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_ac4") # Run inference preds = model("밀착 스포츠안경줄 흔들림방지 안경스트랩 비앤비") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 9.53 | 20 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 50 | | 1.0 | 50 | | 2.0 | 50 | | 3.0 | 50 | | 4.0 | 50 | | 5.0 | 50 | ### Training Hyperparameters - batch_size: (512, 512) - num_epochs: (20, 20) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 40 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0213 | 1 | 0.4524 | - | | 1.0638 | 50 | 0.2583 | - | | 2.1277 | 100 | 0.0642 | - | | 3.1915 | 150 | 0.0781 | - | | 4.2553 | 200 | 0.0806 | - | | 5.3191 | 250 | 0.0391 | - | | 6.3830 | 300 | 0.0011 | - | | 7.4468 | 350 | 0.0003 | - | | 8.5106 | 400 | 0.0001 | - | | 9.5745 | 450 | 0.0001 | - | | 10.6383 | 500 | 0.0 | - | | 11.7021 | 550 | 0.0 | - | | 12.7660 | 600 | 0.0 | - | | 13.8298 | 650 | 0.0 | - | | 14.8936 | 700 | 0.0 | - | | 15.9574 | 750 | 0.0 | - | | 17.0213 | 800 | 0.0 | - | | 18.0851 | 850 | 0.0 | - | | 19.1489 | 900 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0.dev0 - Sentence Transformers: 3.1.1 - Transformers: 4.46.1 - PyTorch: 2.4.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.20.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'초경량 국산 안경테 베타 울템 카본 티타늄 뿔테안경 551-599_S571-2 브라운투톤 ENA아이웨어'</li><li>'B019 ORIGINAL GLASS CRYSTAL GREEN '</li><li>'니시데카즈오 BROWLINE2 하금테 근적외선 차단렌즈 아이라이크(EYE LIKE)'</li></ul> | | 1.0 | <ul><li>'레더렛소가죽선글라스파우치휴대용안경케이스 이정민'</li><li>'위에 안경 쓰는 파우치 편광 끼우는 선글라스 3종 세트 선그라스 클립 에끼우는 플립 온 클립선글라스3종세트_일반블랙 홉포엘'</li><li>'휴대용 가죽 선글라스 안경 파우치 케이스 보관함 안 PU안경케이스_그레이 라이프패션'</li></ul> | | 3.0 | <ul><li>'아이업꽈배기인조가죽안경줄10p세트선글라스줄 유어드림커머스'</li><li>'스트랩 캐주얼디자인줄 스토퍼줄 안경걸이 끈 B 더펭귄샵'</li><li>'천연 크리스탈 안경 선글라스 걸이 줄 원석 비즈 빈티지 에스닉 마스크 스트랩 겸용 블루 3mm 70-75CM nouville'</li></ul> | | 0.0 | <ul><li>'갤러리아 NIRNIR SUNGLASS 5 COLOR GREEN 갤러리아몰'</li><li>'여자 켓아이 뿔테 선그라스 썬그라스 남자 RORGGE 2111 상품선택_2유광블랙 온달이'</li><li>'뮤즈 서클 뿔테선글라스 코코아 푸치백'</li></ul> | | 2.0 | <ul><li>'로에드 안경 자국 코패드 코받침 눌림 선글라스 코 통증 방지 패드 교체 스티커 안경코패드 1.8mm(화이트)_2.8mm(화이트) 로에드'</li><li>'[힐포]국산 고급 초극세사 렌즈 안경닦이 김서림방지 클리너 크리너 악기수건 안경천 융s 05. knit 안경닦이30매 15x18cm_블루 모아텍스'</li><li>'자우버 렌즈 케어 클리닝 티슈 200매 메디위'</li></ul> | | 4.0 | <ul><li>'산리오 안경정리함 안경케이스 세트 6종 안경케이스시나모롤 지에이치글로벌'</li><li>'(이거찜) 프리미엄 가죽 안경집 안경케이스 가죽안경집 스카이 제이케이'</li><li>'스트랩 안경케이스 휴대용 안경파우치 가죽안경보관집 선글라스보관케이스 No.01 스트랩 안경케이스 블랙 여선영'</li></ul> | ## Evaluation ### Metrics | Label | Metric | |:--------|:-------| | **all** | 0.9104 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_ac4") # Run inference preds = model("밀착 스포츠안경줄 흔들림방지 안경스트랩 비앤비") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 9.53 | 20 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 50 | | 1.0 | 50 | | 2.0 | 50 | | 3.0 | 50 | | 4.0 | 50 | | 5.0 | 50 | ### Training Hyperparameters - batch_size: (512, 512) - num_epochs: (20, 20) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 40 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0213 | 1 | 0.4524 | - | | 1.0638 | 50 | 0.2583 | - | | 2.1277 | 100 | 0.0642 | - | | 3.1915 | 150 | 0.0781 | - | | 4.2553 | 200 | 0.0806 | - | | 5.3191 | 250 | 0.0391 | - | | 6.3830 | 300 | 0.0011 | - | | 7.4468 | 350 | 0.0003 | - | | 8.5106 | 400 | 0.0001 | - | | 9.5745 | 450 | 0.0001 | - | | 10.6383 | 500 | 0.0 | - | | 11.7021 | 550 | 0.0 | - | | 12.7660 | 600 | 0.0 | - | | 13.8298 | 650 | 0.0 | - | | 14.8936 | 700 | 0.0 | - | | 15.9574 | 750 | 0.0 | - | | 17.0213 | 800 | 0.0 | - | | 18.0851 | 850 | 0.0 | - | | 19.1489 | 900 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0.dev0 - Sentence Transformers: 3.1.1 - Transformers: 4.46.1 - PyTorch: 2.4.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.20.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["metric"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "길이조절 안경고정 밴드 코받침 패드 운동 캠핑 등산 진브라운 알리몽드"}, {"text": "레이밴 안경테 RB3691VF 2509 남자 여자 동그란안경 아시안핏 시온아이엔티"}, {"text": "밀착 스포츠안경줄 흔들림방지 안경스트랩 비앤비"}, {"text": "[텐바이텐]바체타팩토리 가죽 안경 케이스 08 오렌지_추가 안 함_추가 안 함 신세계몰"}, {"text": "TUMI 투미 카본 티타늄 명품 안경테 메탈 스퀘어 남자 여자 공용 안경 04.TU10-0003-01 LFmall02"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "metric", "value": 0.9104360692836626, "name": "Metric"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,742
Lewdiculous/opus-v1.2-7b-GGUF-IQ-Imatrix
Lewdiculous
text-generation
[ "transformers", "gguf", "mistral", "quantized", "text-generation-inference", "rp", "roleplay", "uncensored", "text-generation", "en", "region:us", "conversational" ]
2024-02-29T06:25:41Z
2024-03-02T18:26:24+00:00
1,095
6
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - mistral - quantized - text-generation-inference - rp - roleplay - uncensored inference: false --- # **GGUF-Imatrix quantizations for [dreamgen/opus-v1.2-7b](https://huggingface.co/dreamgen/opus-v1.2-7b/).** It's **important to use their [Presets for SillyTavern](https://huggingface.co/dreamgen/opus-v1.2-7b/tree/main/configs/silly_tavern/)** due the peculiar prompt formatting of this model. And [here](https://huggingface.co/Lewdiculous/opus-v1.2-7b-GGUF-IQ-Imatrix/blob/main/TextGenSamplers_Lewdicu-Eros-2.2.json) is the text-gen samplers preset for SillyTavern I used. # What does "Imatrix" mean? It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance. One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse. More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384). For --imatrix data, `imatrix-opus-v1.2-7b-F16.dat` was used. `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2294](https://github.com/ggerganov/llama.cpp/releases/tag/b2294). The new **IQ3_S** quant-option has shown to be better than the old Q3_K_S, so I added that instead of the later. Only supported in `koboldcpp-1.59.1` or higher. *If you want any specific quantization to be added, feel free to ask.* All credits belong to the [creator](https://huggingface.co/dreamgen/). # Image: ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/8hcsiKEkhaAnipVxRyKsb.webp) # Original model information: # DreamGen Opus V1 <div style="display: flex; flex-direction: row; align-items: center;"> <img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style=" border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto; "/> Models for **(steerable) story-writing and role-playing**. <br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31). </div> ## Resources - [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy. - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`. - [Python code](example/prompt/format.py) to format the prompt correctly. - Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models. <img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style=" padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11); "/> ## Prompting <details> <summary>The models use an extended version of ChatML.</summary> ``` <|im_start|>system (Story description in the right format here) (Typically consists of plot description, style description and characters)<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Alice (Continuation of the story from the Alice character)<|im_end|> <|im_start|>text (Continuation of the story from no character in particular (pure narration))<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Bob (Continuation of the story from the Bob character)<|im_end|> ``` The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names. Pay attention to the following: - The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play. - There can be multiple subsequent message with a `text` role, especially if names are involved. - There can be multiple names attached to a message. - The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names. </details> While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance. Here's how you can prompt the model for the following tasks - Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing): - Input: - System prompt: You provide story / role-play description, which consists of: - Plot description - Style description - Characters and their descriptions - Conversation turns: - Text / message turn: This represents part of the story or role play - Instruction: This tells the model what should happen next - Output: Continuation of the story / role-play. - [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description) - Input: A story, or a few chapters of a story. - Output: A description of the story or chapters. - [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description) - Input: A story, or a few chapters of a story, set of characters. - Output: A description of the characters. - [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description) - Input: A story, or a few chapters of a story. - Output: A description the style of the story. - [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions) - Input: A brief plot description and the desired number of chapters. - Output: A description for each chapter. - And more... ### Sampling params For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`. You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures. ## Dataset The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long. All story-writing and role-playing examples were based on human-written text. ![token count distribution](images/token_count_cum__token_bucket.png) ## Running the model The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization. I recommend using these model versions: - 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b) - 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq) ### Running on DreamGen.com (free) You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api). ### Running Locally - **Make sure your prompt is as close as possible to the Opus V1** - Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly. - [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1) - [Read the prompt formatting code](example/prompt/format.py) - Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly - **vLLM** - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU. - [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario. - **SillyTavern** - [Settings](https://huggingface.co/dreamgen/opus-v1.2-7b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti - [Settings screenshot](configs/silly_tavern/settings_screenshot.webp) - This is just an attempt at approximating the Opus V1 prompt, it won't be perfect - **LM Studio** - [Config](configs/lmstudio/preset.json) - Just like ChatML, just changed "assistant" to "text" role. - **HuggingFace** - [Chat template](tokenizer_config.json#L51) - Just like ChatML, just changed "assistant" to "text" role. ## Known Issues - **34B tokenization**: - There seems to be a mismatch between the tokenizer of the base and fine-tuned model. It's unclear whether this also affected training, or whether it's just incorrectly saved tokenizer (you can see `tokenizer.json` was not saved ([bug report](https://github.com/OpenAccess-AI-Collective/axolotl/issues/1322))). - This affects BOS and EOS (which aren't really used by Yi) and the tokenization of the first input token. - Overall impact should be minor. - **34B repetition**: - The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes. - **GGUF** / **Ooba**: - The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. ## License - This model is intended for personal use only, other use is not permitted.
null
Non_BioNLP
# **GGUF-Imatrix quantizations for [dreamgen/opus-v1.2-7b](https://huggingface.co/dreamgen/opus-v1.2-7b/).** It's **important to use their [Presets for SillyTavern](https://huggingface.co/dreamgen/opus-v1.2-7b/tree/main/configs/silly_tavern/)** due the peculiar prompt formatting of this model. And [here](https://huggingface.co/Lewdiculous/opus-v1.2-7b-GGUF-IQ-Imatrix/blob/main/TextGenSamplers_Lewdicu-Eros-2.2.json) is the text-gen samplers preset for SillyTavern I used. # What does "Imatrix" mean? It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance. One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse. More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384). For --imatrix data, `imatrix-opus-v1.2-7b-F16.dat` was used. `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2294](https://github.com/ggerganov/llama.cpp/releases/tag/b2294). The new **IQ3_S** quant-option has shown to be better than the old Q3_K_S, so I added that instead of the later. Only supported in `koboldcpp-1.59.1` or higher. *If you want any specific quantization to be added, feel free to ask.* All credits belong to the [creator](https://huggingface.co/dreamgen/). # Image: ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/8hcsiKEkhaAnipVxRyKsb.webp) # Original model information: # DreamGen Opus V1 <div style="display: flex; flex-direction: row; align-items: center;"> <img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style=" border-radius: 12px; margin-right: 12px; margin-top: 0px; margin-bottom: 0px; max-width: 100px; height: auto; "/> Models for **(steerable) story-writing and role-playing**. <br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31). </div> ## Resources - [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy. - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`. - [Python code](example/prompt/format.py) to format the prompt correctly. - Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models. <img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style=" padding: 12px; border-radius: 12px; border: 2px solid #f9a8d4; background: rgb(9, 9, 11); "/> ## Prompting <details> <summary>The models use an extended version of ChatML.</summary> ``` <|im_start|>system (Story description in the right format here) (Typically consists of plot description, style description and characters)<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Alice (Continuation of the story from the Alice character)<|im_end|> <|im_start|>text (Continuation of the story from no character in particular (pure narration))<|im_end|> <|im_start|>user (Your instruction on how the story should continue)<|im_end|> <|im_start|>text names= Bob (Continuation of the story from the Bob character)<|im_end|> ``` The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names. Pay attention to the following: - The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play. - There can be multiple subsequent message with a `text` role, especially if names are involved. - There can be multiple names attached to a message. - The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names. </details> While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance. Here's how you can prompt the model for the following tasks - Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing): - Input: - System prompt: You provide story / role-play description, which consists of: - Plot description - Style description - Characters and their descriptions - Conversation turns: - Text / message turn: This represents part of the story or role play - Instruction: This tells the model what should happen next - Output: Continuation of the story / role-play. - [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description) - Input: A story, or a few chapters of a story. - Output: A description of the story or chapters. - [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description) - Input: A story, or a few chapters of a story, set of characters. - Output: A description of the characters. - [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description) - Input: A story, or a few chapters of a story. - Output: A description the style of the story. - [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions) - Input: A brief plot description and the desired number of chapters. - Output: A description for each chapter. - And more... ### Sampling params For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`. You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures. ## Dataset The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long. All story-writing and role-playing examples were based on human-written text. ![token count distribution](images/token_count_cum__token_bucket.png) ## Running the model The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization. I recommend using these model versions: - 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b) - 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq) ### Running on DreamGen.com (free) You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api). ### Running Locally - **Make sure your prompt is as close as possible to the Opus V1** - Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly. - [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1) - [Read the prompt formatting code](example/prompt/format.py) - Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly - **vLLM** - [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU. - [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario. - **SillyTavern** - [Settings](https://huggingface.co/dreamgen/opus-v1.2-7b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti - [Settings screenshot](configs/silly_tavern/settings_screenshot.webp) - This is just an attempt at approximating the Opus V1 prompt, it won't be perfect - **LM Studio** - [Config](configs/lmstudio/preset.json) - Just like ChatML, just changed "assistant" to "text" role. - **HuggingFace** - [Chat template](tokenizer_config.json#L51) - Just like ChatML, just changed "assistant" to "text" role. ## Known Issues - **34B tokenization**: - There seems to be a mismatch between the tokenizer of the base and fine-tuned model. It's unclear whether this also affected training, or whether it's just incorrectly saved tokenizer (you can see `tokenizer.json` was not saved ([bug report](https://github.com/OpenAccess-AI-Collective/axolotl/issues/1322))). - This affects BOS and EOS (which aren't really used by Yi) and the tokenization of the first input token. - Overall impact should be minor. - **34B repetition**: - The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes. - **GGUF** / **Ooba**: - The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. ## License - This model is intended for personal use only, other use is not permitted.
{"language": ["en"], "library_name": "transformers", "pipeline_tag": "text-generation", "tags": ["mistral", "quantized", "text-generation-inference", "rp", "roleplay", "uncensored"], "inference": false}
task
[ "SUMMARIZATION" ]
42,743
SentientAGI/Dobby-Mini-Leashed-Llama-3.1-8B_GGUF
SentientAGI
null
[ "transformers", "gguf", "Llama-3.1", "Instruct", "loyal AI", "GGUF", "finetune", "chat", "gpt4", "synthetic data", "roleplaying", "unhinged", "funny", "opinionated", "assistant", "companion", "friend", "en", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
2025-01-22T22:42:16Z
2025-01-24T15:58:00+00:00
471
4
--- base_model: meta-llama/Llama-3.1-8B-Instruct language: - en library_name: transformers license: llama3.1 tags: - Llama-3.1 - Instruct - loyal AI - GGUF - finetune - chat - gpt4 - synthetic data - roleplaying - unhinged - funny - opinionated - assistant - companion - friend --- # Dobby-Mini-Unhinged-Llama-3.1-8B_GGUF Dobby-Mini-Leashed is a compact, high-performance GGUF model based on Llama 3.1 with 8 billion parameters. Designed for efficiency, this model supports quantization levels in **4-bit**, **6-bit**, and **8-bit**, offering flexibility to run on various hardware configurations without compromising performance. ## Compatibility This model is compatible with: - **[LMStudio](https://lmstudio.ai/)**: An easy-to-use desktop application for running and fine-tuning large language models locally. - **[Ollama](https://ollama.com/)**: A versatile tool for deploying, managing, and interacting with large language models seamlessly. ## Quantization Levels | **Quantization** | **Description** | **Use Case** | |------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------| | **4-bit** | Highly compressed for minimal memory usage. Some loss in precision and quality, but great for lightweight devices with limited VRAM. | Ideal for testing, quick prototyping, or running on low-end GPUs and CPUs. | | **6-bit** | Strikes a balance between compression and quality. Offers improved accuracy over 4-bit without requiring significant additional resources. | Recommended for users with mid-range hardware aiming for a compromise between speed and precision. | | **8-bit** | Full-precision quantization for maximum quality while still optimizing memory usage compared to full FP16 or FP32 models. | Perfect for high-performance systems where maintaining accuracy and precision is critical. | ## Recommended Usage Choose your quantization level based on the hardware you are using: - **4-bit** for ultra-lightweight systems. - **6-bit** for balance on mid-tier hardware. - **8-bit** for maximum performance on powerful GPUs. This model supports prompt fine-tuning for domain-specific tasks, making it an excellent choice for interactive applications like chatbots, question answering, and creative writing.
null
Non_BioNLP
# Dobby-Mini-Unhinged-Llama-3.1-8B_GGUF Dobby-Mini-Leashed is a compact, high-performance GGUF model based on Llama 3.1 with 8 billion parameters. Designed for efficiency, this model supports quantization levels in **4-bit**, **6-bit**, and **8-bit**, offering flexibility to run on various hardware configurations without compromising performance. ## Compatibility This model is compatible with: - **[LMStudio](https://lmstudio.ai/)**: An easy-to-use desktop application for running and fine-tuning large language models locally. - **[Ollama](https://ollama.com/)**: A versatile tool for deploying, managing, and interacting with large language models seamlessly. ## Quantization Levels | **Quantization** | **Description** | **Use Case** | |------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------| | **4-bit** | Highly compressed for minimal memory usage. Some loss in precision and quality, but great for lightweight devices with limited VRAM. | Ideal for testing, quick prototyping, or running on low-end GPUs and CPUs. | | **6-bit** | Strikes a balance between compression and quality. Offers improved accuracy over 4-bit without requiring significant additional resources. | Recommended for users with mid-range hardware aiming for a compromise between speed and precision. | | **8-bit** | Full-precision quantization for maximum quality while still optimizing memory usage compared to full FP16 or FP32 models. | Perfect for high-performance systems where maintaining accuracy and precision is critical. | ## Recommended Usage Choose your quantization level based on the hardware you are using: - **4-bit** for ultra-lightweight systems. - **6-bit** for balance on mid-tier hardware. - **8-bit** for maximum performance on powerful GPUs. This model supports prompt fine-tuning for domain-specific tasks, making it an excellent choice for interactive applications like chatbots, question answering, and creative writing.
{"base_model": "meta-llama/Llama-3.1-8B-Instruct", "language": ["en"], "library_name": "transformers", "license": "llama3.1", "tags": ["Llama-3.1", "Instruct", "loyal AI", "GGUF", "finetune", "chat", "gpt4", "synthetic data", "roleplaying", "unhinged", "funny", "opinionated", "assistant", "companion", "friend"]}
task
[ "QUESTION_ANSWERING" ]
42,744
blockblockblock/Faro-Yi-9B-200K-bpw4.8
blockblockblock
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:wenbopan/Fusang-v1", "dataset:wenbopan/OpenOrca-zh-20k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
2024-04-01T08:27:56Z
2024-04-01T08:30:02+00:00
5
0
--- datasets: - wenbopan/Fusang-v1 - wenbopan/OpenOrca-zh-20k language: - zh - en license: mit --- ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1. ## How to Use Faro-Yi-9B uses chatml template. This make it easy to set up system prompt and multi-turn conversations. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( model_path, device_map="cuda" ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."}, {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."} ] input_ids = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", ).to(model.device) generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5) response = tokenizer.decode(generated_ids[0], skip_special_tokens=True) # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties! ``` ## Performance Faro-Yi-9B enhances its ability compared to Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-sourced models at around 9B parameters. <details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> <!--### Performance on Preference TODO--> ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
null
Non_BioNLP
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1. ## How to Use Faro-Yi-9B uses chatml template. This make it easy to set up system prompt and multi-turn conversations. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( model_path, device_map="cuda" ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."}, {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."} ] input_ids = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", ).to(model.device) generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5) response = tokenizer.decode(generated_ids[0], skip_special_tokens=True) # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties! ``` ## Performance Faro-Yi-9B enhances its ability compared to Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-sourced models at around 9B parameters. <details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> <!--### Performance on Preference TODO--> ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
{"datasets": ["wenbopan/Fusang-v1", "wenbopan/OpenOrca-zh-20k"], "language": ["zh", "en"], "license": "mit"}
task
[ "SUMMARIZATION" ]
42,745
vince62s/wmt22-cometkiwi-da-roberta-large
vince62s
feature-extraction
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "custom_code", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-04-25T14:08:12Z
2024-12-30T18:06:20+00:00
37
2
--- {} --- This is the converted model from Unbabel/wmt22-cometkiwi-da 1) Just kept the weights/bias keys() 2) Renamed the keys to match the original Facebook/XLM-roberta-large 3) kept the layer_wise_attention / estimator layers Because of a hack in HF's code I had to rename the "layerwise_attention.gamma" key to "layerwise_attention.gam" I changed the config.json key "layer_transformation" from sparsemax to softmax because there is a bug in COMET since the flag is not passed, the actual function used is the default which is softmax. Usage: ``` from transformers import XLMRobertaTokenizer, XLMRobertaTokenizerFast, AutoModel tokenizer = XLMRobertaTokenizerFast.from_pretrained("vince62s/wmt22-cometkiwi-da-roberta-large", trust_remote_code=True) model = AutoModel.from_pretrained("vince62s/wmt22-cometkiwi-da-roberta-large", trust_remote_code=True) text = "Hello world!</s></s>Bonjour le monde" encoded_text = tokenizer(text, return_tensors='pt') print(encoded_text) output = model(**encoded_text) print(output[0]) {'input_ids': tensor([[ 0, 35378, 8999, 38, 2, 2, 84602, 95, 11146, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} tensor([[0.8640]], grad_fn=<AddmmBackward0>) ``` Let's double check with the original code from Unbabel Comet: ``` from comet import download_model, load_from_checkpoint model = load_from_checkpoint("/home/vincent/Downloads/cometkiwi22/checkpoints/model.ckpt") # this is the Unbabel checkpoint data = [{"mt": "Hello world!", "src": "Bonjour le monde"}] output = model.predict(data, gpus=0) print(output) Prediction([('scores', [0.863973081111908]), ('system_score', 0.863973081111908)]) ``` --- extra_gated_heading: Acknowledge license to accept the repository extra_gated_button_content: Acknowledge license pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: cc-by-nc-sa-4.0 library_name: transformers --- This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation. # Paper [CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022) # License: cc-by-nc-sa-4.0 # Usage (unbabel-comet) Using this model requires unbabel-comet to be installed: ```bash pip install --upgrade pip # ensures that pip is current pip install "unbabel-comet>=2.0.0" ``` Make sure you acknowledge its License and Log in into Hugging face hub before using: ```bash huggingface-cli login # or using an environment variable huggingface-cli login --token $HUGGINGFACE_TOKEN ``` Then you can use it through comet CLI: ```bash comet-score -s {source-input}.txt -t {translation-output}.txt --model Unbabel/wmt22-cometkiwi-da ``` Or using Python: ```python from comet import download_model, load_from_checkpoint model_path = download_model("Unbabel/wmt22-cometkiwi-da") model = load_from_checkpoint(model_path) data = [ { "src": "The output signal provides constant sync so the display never glitches.", "mt": "Das Ausgangssignal bietet eine konstante Synchronisation, so dass die Anzeige nie stört." }, { "src": "Kroužek ilustrace je určen všem milovníkům umění ve věku od 10 do 15 let.", "mt": "Кільце ілюстрації призначене для всіх любителів мистецтва у віці від 10 до 15 років." }, { "src": "Mandela then became South Africa's first black president after his African National Congress party won the 1994 election.", "mt": "その後、1994年の選挙でアフリカ国民会議派が勝利し、南アフリカ初の黒人大統領となった。" } ] model_output = model.predict(data, batch_size=8, gpus=1) print (model_output) ``` # Intended uses Our model is intented to be used for **reference-free MT evaluation**. Given a source text and its translation, outputs a single score between 0 and 1 where 1 represents a perfect translation. # Languages Covered: This model builds on top of InfoXLM which cover the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
null
Non_BioNLP
This is the converted model from Unbabel/wmt22-cometkiwi-da 1) Just kept the weights/bias keys() 2) Renamed the keys to match the original Facebook/XLM-roberta-large 3) kept the layer_wise_attention / estimator layers Because of a hack in HF's code I had to rename the "layerwise_attention.gamma" key to "layerwise_attention.gam" I changed the config.json key "layer_transformation" from sparsemax to softmax because there is a bug in COMET since the flag is not passed, the actual function used is the default which is softmax. Usage: ``` from transformers import XLMRobertaTokenizer, XLMRobertaTokenizerFast, AutoModel tokenizer = XLMRobertaTokenizerFast.from_pretrained("vince62s/wmt22-cometkiwi-da-roberta-large", trust_remote_code=True) model = AutoModel.from_pretrained("vince62s/wmt22-cometkiwi-da-roberta-large", trust_remote_code=True) text = "Hello world!</s></s>Bonjour le monde" encoded_text = tokenizer(text, return_tensors='pt') print(encoded_text) output = model(**encoded_text) print(output[0]) {'input_ids': tensor([[ 0, 35378, 8999, 38, 2, 2, 84602, 95, 11146, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} tensor([[0.8640]], grad_fn=<AddmmBackward0>) ``` Let's double check with the original code from Unbabel Comet: ``` from comet import download_model, load_from_checkpoint model = load_from_checkpoint("/home/vincent/Downloads/cometkiwi22/checkpoints/model.ckpt") # this is the Unbabel checkpoint data = [{"mt": "Hello world!", "src": "Bonjour le monde"}] output = model.predict(data, gpus=0) print(output) Prediction([('scores', [0.863973081111908]), ('system_score', 0.863973081111908)]) ``` --- extra_gated_heading: Acknowledge license to accept the repository extra_gated_button_content: Acknowledge license pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: cc-by-nc-sa-4.0 library_name: transformers --- This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation. # Paper [CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022) # License: cc-by-nc-sa-4.0 # Usage (unbabel-comet) Using this model requires unbabel-comet to be installed: ```bash pip install --upgrade pip # ensures that pip is current pip install "unbabel-comet>=2.0.0" ``` Make sure you acknowledge its License and Log in into Hugging face hub before using: ```bash huggingface-cli login # or using an environment variable huggingface-cli login --token $HUGGINGFACE_TOKEN ``` Then you can use it through comet CLI: ```bash comet-score -s {source-input}.txt -t {translation-output}.txt --model Unbabel/wmt22-cometkiwi-da ``` Or using Python: ```python from comet import download_model, load_from_checkpoint model_path = download_model("Unbabel/wmt22-cometkiwi-da") model = load_from_checkpoint(model_path) data = [ { "src": "The output signal provides constant sync so the display never glitches.", "mt": "Das Ausgangssignal bietet eine konstante Synchronisation, so dass die Anzeige nie stört." }, { "src": "Kroužek ilustrace je určen všem milovníkům umění ve věku od 10 do 15 let.", "mt": "Кільце ілюстрації призначене для всіх любителів мистецтва у віці від 10 до 15 років." }, { "src": "Mandela then became South Africa's first black president after his African National Congress party won the 1994 election.", "mt": "その後、1994年の選挙でアフリカ国民会議派が勝利し、南アフリカ初の黒人大統領となった。" } ] model_output = model.predict(data, batch_size=8, gpus=1) print (model_output) ``` # Intended uses Our model is intented to be used for **reference-free MT evaluation**. Given a source text and its translation, outputs a single score between 0 and 1 where 1 represents a perfect translation. # Languages Covered: This model builds on top of InfoXLM which cover the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
{}
task
[ "TRANSLATION" ]
42,746
RinTrin/climate-tech-test
RinTrin
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-02-01T00:40:29Z
2023-02-01T00:40:37+00:00
12
0
--- datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
null
TBD
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset. We developped this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developped this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks. ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the cross entropy loss by comparing with true pairs. #### Hyper parameters We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core). We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`. #### Training data We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file. | Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
{"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "language": "en", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"]}
task
[ "QUESTION_ANSWERING" ]
42,747
TransferGraph/moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:moghis/distilbert-base-uncased-finetuned-emotion", "base_model:adapter:moghis/distilbert-base-uncased-finetuned-emotion", "license:apache-2.0", "model-index", "region:us" ]
2024-02-29T13:03:26Z
2024-02-29T13:03:31+00:00
0
0
--- base_model: moghis/distilbert-base-uncased-finetuned-emotion datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - type: accuracy value: 0.7245989304812834 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [moghis/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/moghis/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.7246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.1684 | None | 0 | | 0.7005 | 0.8896 | 0 | | 0.7086 | 0.7329 | 1 | | 0.7139 | 0.6568 | 2 | | 0.7246 | 0.6240 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [moghis/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/moghis/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.7246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.1684 | None | 0 | | 0.7005 | 0.8896 | 0 | | 0.7086 | 0.7329 | 1 | | 0.7139 | 0.6568 | 2 | | 0.7246 | 0.6240 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "moghis/distilbert-base-uncased-finetuned-emotion", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "moghis_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7245989304812834, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
42,748
RichardErkhov/zake7749_-_gemma-2-2b-it-chinese-kyara-dpo-4bits
RichardErkhov
null
[ "safetensors", "gemma2", "arxiv:2304.12244", "arxiv:2308.07074", "4-bit", "bitsandbytes", "region:us" ]
2024-11-19T16:21:27Z
2024-11-19T16:22:39+00:00
8
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2-2b-it-chinese-kyara-dpo - bnb 4bits - Model creator: https://huggingface.co/zake7749/ - Original model: https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo/ Original model description: --- language: - zh - en library_name: transformers base_model: - google/gemma-2-2b-it datasets: - zake7749/kyara-chinese-math-sft-s0-30K - zake7749/kyara-chinese-preference-rl-dpo-s0-30K - zake7749/chinese-sft-stem-zh-hant - zake7749/chinese-sft-stem-zh-hans model-index: - name: gemma-2-2b-it-chinese-kyara-dpo results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 53.82 name: strict accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 19.06 name: normalized accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 6.12 name: exact match source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 2.24 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 16.76 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 17.48 name: accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard license: gemma --- # Kyara: Knowledge Yielding Adaptive Retrieval Augmentation for LLM Fine-tuning [![DOI](https://zenodo.org/badge/844304447.svg)](https://zenodo.org/badge/latestdoi/844304447) <p align="left"> 🤗 <a href="https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo">Hugging Face</a>&nbsp; | 🚀<a href="https://github.com/zake7749/kyara">Github</a>&nbsp; | &nbsp;📑 <a href="#">Paper</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara/blob/main/document/README_EN.md">English</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara">Chinese</a>&nbsp; | &nbsp;💻 <a href="https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm">Kaggle Notebook</a> </p> <div style="text-align: center;"> <img src="https://i.imgur.com/QiWlcYJ.jpeg" alt="kyara"/> </div> Kyara (Knowledge Yielding Adaptive Retrieval Augmentation) is an experimental project aimed at improving language models through knowledge retrieval processes. The project seeks to enhance the model’s ability to adapt knowledge and improve language comprehension, particularly in underrepresented languages like Traditional Chinese. Given the relatively scarce availability of Traditional Chinese data compared to the vast corpus of English data used for model training, Kyara addresses this gap by expanding the limited corpus for this language. To validate Kyara's effectiveness, we conducted full-parameter fine-tuning on `Gemma-2-2b-it`, resulting in the first iteration of the Kyara model. Initial evaluation results, as detailed in the [Benchmark](#benchmark) section, demonstrate that Kyara outperforms the original `Gemma-2-2b-it` across various benchmarks, with notable improvements in Chinese language evaluations. ## Benchmark ### General Benchmark The following evaluations are based-on zero-shot. | Metric | Kyara-2b-it | Gemma-2-2b-it | |--------------------------|----------|-------------| | **[TMMLUPlus](https://huggingface.co/datasets/ikala/tmmluplus)** | **41.98** | 36.73 | | &emsp;- STEM | **43.73** | 37.84 | | &emsp;- Humanities | **38.72** | 33.40 | | &emsp;- Other | **40.61** | 36.00 | | &emsp;- Social-Science | **44.88** | 39.69 | | **[MMLU-Redux](https://github.com/yuchenlin/ZeroEval)** | **55.44**| 51.94 | | **[GSM8K](https://github.com/yuchenlin/ZeroEval)** | **54.21**| 51.63 | | **[MATH-L5](https://github.com/yuchenlin/ZeroEval)** | **8.88**| 4.3 | | **[CRUX](https://github.com/yuchenlin/ZeroEval)** | **22.75**| 21.5 | | **[ZebraLogic](https://github.com/yuchenlin/ZeroEval)** | **5.2**| 4.2 | | **Chinese-Reason-Bench** | **4.21** | 3.44 | The aggregation method for the groups in TMMLUPlus is macro average, following the practice in the official implementation. #### [Open-LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) As of now, Kyara-2b-it is the leading competitor among all 2b-scale models on the OpenLLM Leaderboard. <div style="text-align: center"> <img src="https://i.imgur.com/3NKhAja.png" alt="kyara-2b-it-open-llm-leaderboard"> </div> ### Alignment Benchmark | Metric | Kyara | Gemma-2-2b-it | ChatGPT-3.5-1106 | |--------------------------|----------|---------------|------------------| | **[AlpacaEval-LC](https://github.com/tatsu-lab/alpaca_eval)** | **35.35** | 32.37 | 19.30 | | **[AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)** | **43.34** | 32.94 | 9.20 | | **[MT-Bench-TW](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2)** | **7.43** | 6.35 | 7.10 | | **[MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench)** | 8.28 | 8.17 | **8.32** | | **[Chatbot-Arena-Hard](https://github.com/lm-sys/arena-hard-auto)** | **22.60** | 19.4 | 18.87 | #### [AlignBench](https://github.com/THUDM/AlignBench) | Fold | Kyara-2b-it-CHT | Kyara-2b-it-CHS | Gemma-2-2b-it | ChatGPT-3.5-0613 | |-------|-----------------|-----------------|---------------| ---- | | Fundamental Language Ability | 6.72 | 6.54 | 6.42 | **6.92** | | Advanced Chinese Understanding | 5.78 | 5.24 | 5.03 | **5.91** | | Open-ended Questions | **8.16** | 7.79 | 7.52 | 6.47 | | Writing Ability | **7.90** | 7.24 | 7.76 | 7.28 | | Logical Reasoning | **5.26** | 4.27 | 4.20 | 4.79 | | Mathematics | **5.99** | 5.44 | 5.05 | 5.38 | | Task-oriented Role Play | **8.07** | 8.00 | 7.42 | 7.00 | | Professional Knowledge | **6.97** | 6.86 | 5.79 | 6.81 | | Reasoning AVG. | **5.62** | 4.85 | 4.63 | 5.00 | | Chinage Language AVG. | **7.26** | 6.94 | 6.66 | 6.73 | | Overall | **6.44** | 5.90 | 5.64 | 5.91 | where the postfixes CHT and CHS represent Traditional Chinese and Simplified Chinese, respectively. To evaluate the performance on Traditional Chinese in AlignBench, we used [OpenCC](https://github.com/BYVoid/OpenCC) with the `s2tw` configuration to convert all questions from Simplified Chinese to Traditional Chinese. ## Usage Kyara adopts the same architecture as Gemma2, utilizing identical inference and training methods. We have created a [Jupyter Notebook](https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm) on Kaggle to demonstrate Kyara’s basic functionality. For service-level deployment, we recommend using Sglang or vllm to achieve greater throughput and robustness. ## Method The following sections provide a brief summary of Kyara's implementation strategy. ### Dataset Summary We have collected a total of 3.6M conversations, approximately 4.51 billion tokens. The following provides an overview of the language distribution and conversation rounds. * Language: <img src="https://i.imgur.com/fhD5kIy.png" alt="language-distribution" width="500"/> * Conversation Rounds: <img src="https://i.imgur.com/CWQ2shj.png" alt="conv-round-distribution" width="500"/> ### Dataset Construction The data construction for Kyara is divided into two parts: English and Chinese. For the English part, we have incorporated multiple high-quality open-source datasets, such as [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [arcee-ai/The-Tome](https://huggingface.co/datasets/arcee-ai/The-Tome), and performing semantic deduplication to drop out near-similar examples. As for the Chinese part, the construction follows the process outlined below: #### Base Dataset: Knowledge Injection with Retrieval Augmentation We developed a knowledge search system using open Chinese knowledge corpora, integrated with [QDrant](https://qdrant.tech/). To construct Supervised Fine-Tuning(SFT) pairs, we followed this process: 1. Sample documents from the knowledge base and generate knowledge-intensive questions that users might ask based on these texts. 2. (Optional) Increase instruction complexity using [Evol-Instruct](https://arxiv.org/pdf/2304.12244). 3. Apply query expansion on the generated instructions to retrieve additional Top K documents and individually assess their relevance: * For relevant documents, use an LLM to summarize key information related to the question. * For irrelevant documents, ignore them. 4. Let the LLM generate a detailed and comprehensive response according to the original document and K supplementary references. Besides, we would also aks LLM to generate an user prompt for high quality documents, and pair the (generated prompt, original document) as a SFT example. ##### Chinese Math Dataset * Dataset: [zake7749/kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K) While the aforementioned strategy can generate a wide range of knowledge-based texts, it primarily falls within the scope of information-seeking tasks and is not very effective in constructing mathematical and reasoning-related content. To address this, we generated 50,000 math problems based on [PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub). We then used `Gemini-1.5-Flash` to filter out data with obvious errors in calculation and reasoning, thereby creating [kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K). #### High Quality Dataset: Model Refinement After completing supervised learning using the base dataset, we will fine-tune the LLM again on a high-quality subset, primarily to address the following three issues: 1. Some responses in the Base Dataset were generated from small model, which sometimes performed poorly in following instructions. 2. We used various LLMs in the previous step to introduce knowledge diversity and language adaptability. However, we discovered subtle differences in response templates and reasoning approaches between different LLMs, leading to occasional instability in the trained Chat Model. Therefore, we would like to introduced a high-quality small dataset, using a single strong LLM to generate QA Pairs. 3. The Base Dataset includes some Q&A Pairs composed of generated queries and original document. While these data are rich in knowledge, they are relatively weak in terms of instruction following. To balance data diversity and quality, we adopted a strategy similar to [InsTag](https://arxiv.org/abs/2308.07074) to classify the data. We then used [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) and an LLM Judge to evaluate data quality, finally extracting the best training data from each category to create the Stage 1 Dataset of about 500K, which was used to fine-tune the Kyara-SFT Model again. ### Preference Learning We introduced Preference Learning in Kyara, which allows the model's responses to better align with human preferences while enhancing programming skills and mathematical reasoning abilities. Kyara’s preference learning strategy utilizes Direct Preference Optimization (DPO), integrating two custom-built Chinese datasets alongside two English datasets. * [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences) * [xinlai/Math-Step-DPO-10K](https://huggingface.co/datasets/xinlai/Math-Step-DPO-10K) Here, we summarize the construction strategy of the Chinese datasets. #### Chinese DPO ##### [SPIN/SPPO](https://github.com/uclaml/SPIN) We followed the original design, using Kyara-SFT to generate a set of contrastive data for the High Quality Dataset. ##### RLAIF Dataset: [zake7749/kyara-chinese-preference-dpo-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-preference-dpo-s0-30K) We extracted Chinese Prompts from `Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese`, `hfl/stem_zh_instruction`, and `FreedomIntelligence/Evol-Instruct-Chinese-GPT4`, and distributed the same prompt to four different LLMs. The competitors include: * GPT-4o * GPT-4-0618 * ChatGPT-3.5-0513 * Claude-Sonnet-3.5 * Yi-Large * Mixtral 8x22B * Gemini-Flash * Qwen2-72B-Instruct * DeepSeek V2 After response generation, we ask the LLMs to judge which one is better, using the following prompt: ``` **[Task]** Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. Your evaluation should consider correctness and helpfulness. 1. First, independently solve the user question step-by-step. 2. Then, compare both assistants’ answers with your answer. Identify and correct any mistakes. 3. Do not allow the length of the responses to influence your evaluation. 4. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: "[[A]]" if assistant A is better, "[[B]]" if assistant B is better, and "[[C]]" for a tie or if both A and B are bad. If the answers from A and B are very similar in terms of correctness, helpfulness, and relevance, meaning there is no "obvious" winner, judge it as a tie and output [[C]]. **[User Question]** {prompt} --- **[Assistant A’s Answer]** {answer} --- **[Assistant B’s Answer]** {prediction} --- ``` Finally, all four datasets were combined for DPO training. ## Feature ### Retrieval Augmented Generation (Experimental) Benefiting from Kyara's training method, we incorporated RAG-related content during the SFT phase. You can refer to the following examples to construct task templates: #### Input ``` # Reference Document <reference> <document> Document ID: id_27025b13 * Document Title: Flash_memory * Document Text: Another limitation of flash memory is its limited number of erase cycles (most commercial SLC flash memory guarantees around 100,000 erase cycles for the "0" zone, but due to manufacturing precision, other blocks are not guaranteed, and some might even have factory defects and be unusable). This limitation is partly offset by firmware or file systems that calculate write counts and perform dynamic remapping to distribute writes across different blocks; this technique is called wear leveling. Another method is known as Bad Block Management (BBM), where blocks are dynamically tested during write operations, and failed blocks are discarded. For most mobile devices, these wear management techniques can extend the life of internal flash memory (sometimes even beyond the device's lifespan). Moreover, partial data loss in these devices may be acceptable. However, for high-reliability data storage applications that require heavy data write cycles, flash memory is not recommended. But this limitation does not apply to read-only applications, such as routers and thin clients, which often only write once or a few times throughout their lifespan. ### Read Disturbance </document> <document> Document ID: id_858b1787 * Document Title: Flash_memory * Document Text: * TLC NAND flash memory typically has an endurance of around 1,000 or more cycles (Samsung 840); using multi-layer structures and adopting LDPC correction have extended the endurance. * QLC NAND flash memory can have an endurance ranging from 500 to 1,000 cycles. * SLC floating-gate NOR flash memory typically has a write endurance of 100,000 to 1,000,000 cycles (Numonyx M58BW 100k; Spansion S29CD016J 1,000k). * MLC floating-gate NOR flash memory usually has a write endurance of 100,000 cycles (Numonyx J3 flash). These values are approximate and depend on the technology and positioning of different manufacturers' products. Finer process technologies can improve read/write performance and capacity, but they may also pose greater challenges in terms of write endurance. Specific algorithms and design examples, such as wear leveling and memory over-provisioning, can be used to adjust storage system endurance to meet specific needs. Wear leveling is essential for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs. ## Flash Memory File Systems </document> <document> Document ID: id_df34eb65 * Document Title: Memory_over-provisioning * Document Text: ## Basic SSD Operations Due to the nature of flash memory operations, data cannot be overwritten directly like in hard drives. When data is first written to an SSD, the cells are in an erased state, so the data can be written directly, one page at a time (usually 4 to 8 KB in size). The SSD controller, which manages the flash memory and interfaces with the main control system, uses a logical-to-physical mapping system called Logical Block Addressing (LBA), part of the flash translation layer (FTL). When new data needs to replace old data, the SSD controller writes the new data to a new location and updates the logical mapping to point to the new physical location. The original data becomes invalid and must be erased before it can be rewritten. Flash memory has a limited number of program/erase (P/E) cycles. Typically, this is expressed as the maximum number of P/E cycles that flash memory can endure over its lifetime. Single-level cell (SLC) flash memory is generally designed for high performance and long life, typically supporting 50,000 to 100,000 cycles. As of 2011, multi-level cell (MLC) flash memory, designed for low-cost applications, has far fewer cycles, usually only 3,000 to 5,000 cycles. Since 2013, triple-level cell (TLC) flash memory has been introduced, with P/E cycles dropping to around 1,000. The lower the write amplification, the better, as it corresponds to fewer P/E cycles, which extends the lifespan of the SSD. </document> </reference> --- # Task Description Please refer to the content in the <reference> above and answer the user's question. When answering, pay attention to the following points: 1. Appropriately cite the reference document ID to support your argument. For example: If we have Article 1 with ID: dq945509 and Article 2 with ID: 6a178c5, you can cite as follows: "This is a factual statement"【dq945509】 and "This is another factual statement"【6a178c5】. 2. If you find that the reference document lacks relevant information to the user's question, output: "The reference documents lack information relevant to the question." Do not fabricate unrelated answers. # User's Question How does the write limit of memory cards affect their lifespan? What techniques can extend the lifespan of memory cards in scenarios with frequent writes? ``` #### Output ``` The write limit of memory cards, particularly flash memory, significantly affects their lifespan due to the limited number of program/erase (P/E) cycles they can endure. For instance, single-level cell (SLC) flash memory, which is designed for high performance and long life, typically supports 50,000 to 100,000 cycles【id_df34eb65】. In contrast, multi-level cell (MLC) flash memory, which is more cost-effective, has fewer cycles, usually around 3,000 to 5,000 cycles【id_df34eb65】. As of 2013, triple-level cell (TLC) flash memory has even lower endurance, with around 1,000 cycles【id_df34eb65】. To mitigate the impact of these limitations, several techniques can extend the lifespan of memory cards in scenarios with frequent writes: 1. **Wear Leveling**: This technique involves distributing writes across different blocks to prevent any single block from being overworked. Wear leveling is crucial for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs【id_858b1787】. 2. **Bad Block Management (BBM)**: In this method, blocks are dynamically tested during write operations, and failed blocks are discarded. This ensures that only healthy blocks are used for data storage, thereby extending the overall lifespan of the memory card【id_27025b13】. 3. **Memory Over-Provisioning**: This involves allocating extra storage space beyond the actual capacity of the flash memory to account for wear and tear. This can help to maintain performance and extend the lifespan of the memory card by allowing for more writes before reaching the physical limits of the memory cells【id_df34eb65】. These techniques are particularly important for mobile devices and applications that require frequent data writes, such as routers and thin clients, where the partial data loss in these devices may be acceptable【id_27025b13】. ``` Please refer to [Kyara-RAG](https://github.com/zake7749/kyara-rag) for more details. ## Limitation Like most models, Kyara still suffers from hallucinations, and possibly due to the nature of the data, it particularly enjoys quoting references when answering questions. For example, if you ask the model to explain a line from "Legend of Mortal": ``` 該如何理解:「俠者,不受名利所牽,但求俯仰無愧。正邪存乎在我,聲名留問諸君。非我桀傲不馴,乃因唐門薪火,自古如今!」 ``` However, the model would respond that this quote is from The "Legend of the Condor Heroes." ``` 這句話出自《神雕俠侶》中的《神雕手記》,是楊過對自己處世態度的一種表達。讓我們逐句拆解理解: 1. **「俠者,不受名利所牽,但求俯仰無愧。」** - **俠者**:指俠義之士,講究義氣和正義的英雄。 - **不受名利所牽**:不受外在名利和利益的影響,保持自我的正直。 - **但求俯仰無愧**:只求自己行動的坦蕩和內心的清淨,不求外界的認可或評價。 2. **「正邪存乎在我,聲名留問諸君。」** - **正邪存乎在我**:善惡的歸屬在於自己,決策和行動的結果在於自己。 - **聲名留問諸君**:聲名是別人會問津的,但自己對正義的追求和內心的正直,才是最值得自豪和珍視的。 3. **「非我桀傲不馴,乃因唐門薪火,自古如今!」** - **非我桀傲不馴**:並不是我桀驁不馴(驕傲自負),而是因唐門的榮耀和傳承,我必須遵循這些責任。 - **乃因唐門薪火**:是因為唐門(一種武學門派)的薪火相傳,這是一份深重的責任。 - **自古如今**:自古至今,俠義精神和門派的責任感,一直是人們所崇敬的,這是歷史和傳統的延續。 總結起來,這段話表達了楊過對於自己行為的獨特理解和自豪感。他明白自己的行為和價值觀取決於個人的內心和對正義的追求,而非外界的評價和名利。他也承認了自己的責任,作為唐門下一代,必須繼承和發揚門風,這一點是無可替代的。 ``` It is recommended to exercise caution when using language models. ## Citation ```bibtex @software{Yang_Kyara_2024, author = {Yang, Kai-Chou}, doi = {10.5281/zenodo.14001626}, month = oct, title = {{Kyara}}, url = {https://github.com/zake7749/Kyara}, version = {1.0.0}, year = {2024} } ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gemma-2-2b-it-chinese-kyara-dpo - bnb 4bits - Model creator: https://huggingface.co/zake7749/ - Original model: https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo/ Original model description: --- language: - zh - en library_name: transformers base_model: - google/gemma-2-2b-it datasets: - zake7749/kyara-chinese-math-sft-s0-30K - zake7749/kyara-chinese-preference-rl-dpo-s0-30K - zake7749/chinese-sft-stem-zh-hant - zake7749/chinese-sft-stem-zh-hans model-index: - name: gemma-2-2b-it-chinese-kyara-dpo results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 53.82 name: strict accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 19.06 name: normalized accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 6.12 name: exact match source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 2.24 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 16.76 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 17.48 name: accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard license: gemma --- # Kyara: Knowledge Yielding Adaptive Retrieval Augmentation for LLM Fine-tuning [![DOI](https://zenodo.org/badge/844304447.svg)](https://zenodo.org/badge/latestdoi/844304447) <p align="left"> 🤗 <a href="https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo">Hugging Face</a>&nbsp; | 🚀<a href="https://github.com/zake7749/kyara">Github</a>&nbsp; | &nbsp;📑 <a href="#">Paper</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara/blob/main/document/README_EN.md">English</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara">Chinese</a>&nbsp; | &nbsp;💻 <a href="https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm">Kaggle Notebook</a> </p> <div style="text-align: center;"> <img src="https://i.imgur.com/QiWlcYJ.jpeg" alt="kyara"/> </div> Kyara (Knowledge Yielding Adaptive Retrieval Augmentation) is an experimental project aimed at improving language models through knowledge retrieval processes. The project seeks to enhance the model’s ability to adapt knowledge and improve language comprehension, particularly in underrepresented languages like Traditional Chinese. Given the relatively scarce availability of Traditional Chinese data compared to the vast corpus of English data used for model training, Kyara addresses this gap by expanding the limited corpus for this language. To validate Kyara's effectiveness, we conducted full-parameter fine-tuning on `Gemma-2-2b-it`, resulting in the first iteration of the Kyara model. Initial evaluation results, as detailed in the [Benchmark](#benchmark) section, demonstrate that Kyara outperforms the original `Gemma-2-2b-it` across various benchmarks, with notable improvements in Chinese language evaluations. ## Benchmark ### General Benchmark The following evaluations are based-on zero-shot. | Metric | Kyara-2b-it | Gemma-2-2b-it | |--------------------------|----------|-------------| | **[TMMLUPlus](https://huggingface.co/datasets/ikala/tmmluplus)** | **41.98** | 36.73 | | &emsp;- STEM | **43.73** | 37.84 | | &emsp;- Humanities | **38.72** | 33.40 | | &emsp;- Other | **40.61** | 36.00 | | &emsp;- Social-Science | **44.88** | 39.69 | | **[MMLU-Redux](https://github.com/yuchenlin/ZeroEval)** | **55.44**| 51.94 | | **[GSM8K](https://github.com/yuchenlin/ZeroEval)** | **54.21**| 51.63 | | **[MATH-L5](https://github.com/yuchenlin/ZeroEval)** | **8.88**| 4.3 | | **[CRUX](https://github.com/yuchenlin/ZeroEval)** | **22.75**| 21.5 | | **[ZebraLogic](https://github.com/yuchenlin/ZeroEval)** | **5.2**| 4.2 | | **Chinese-Reason-Bench** | **4.21** | 3.44 | The aggregation method for the groups in TMMLUPlus is macro average, following the practice in the official implementation. #### [Open-LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) As of now, Kyara-2b-it is the leading competitor among all 2b-scale models on the OpenLLM Leaderboard. <div style="text-align: center"> <img src="https://i.imgur.com/3NKhAja.png" alt="kyara-2b-it-open-llm-leaderboard"> </div> ### Alignment Benchmark | Metric | Kyara | Gemma-2-2b-it | ChatGPT-3.5-1106 | |--------------------------|----------|---------------|------------------| | **[AlpacaEval-LC](https://github.com/tatsu-lab/alpaca_eval)** | **35.35** | 32.37 | 19.30 | | **[AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)** | **43.34** | 32.94 | 9.20 | | **[MT-Bench-TW](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2)** | **7.43** | 6.35 | 7.10 | | **[MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench)** | 8.28 | 8.17 | **8.32** | | **[Chatbot-Arena-Hard](https://github.com/lm-sys/arena-hard-auto)** | **22.60** | 19.4 | 18.87 | #### [AlignBench](https://github.com/THUDM/AlignBench) | Fold | Kyara-2b-it-CHT | Kyara-2b-it-CHS | Gemma-2-2b-it | ChatGPT-3.5-0613 | |-------|-----------------|-----------------|---------------| ---- | | Fundamental Language Ability | 6.72 | 6.54 | 6.42 | **6.92** | | Advanced Chinese Understanding | 5.78 | 5.24 | 5.03 | **5.91** | | Open-ended Questions | **8.16** | 7.79 | 7.52 | 6.47 | | Writing Ability | **7.90** | 7.24 | 7.76 | 7.28 | | Logical Reasoning | **5.26** | 4.27 | 4.20 | 4.79 | | Mathematics | **5.99** | 5.44 | 5.05 | 5.38 | | Task-oriented Role Play | **8.07** | 8.00 | 7.42 | 7.00 | | Professional Knowledge | **6.97** | 6.86 | 5.79 | 6.81 | | Reasoning AVG. | **5.62** | 4.85 | 4.63 | 5.00 | | Chinage Language AVG. | **7.26** | 6.94 | 6.66 | 6.73 | | Overall | **6.44** | 5.90 | 5.64 | 5.91 | where the postfixes CHT and CHS represent Traditional Chinese and Simplified Chinese, respectively. To evaluate the performance on Traditional Chinese in AlignBench, we used [OpenCC](https://github.com/BYVoid/OpenCC) with the `s2tw` configuration to convert all questions from Simplified Chinese to Traditional Chinese. ## Usage Kyara adopts the same architecture as Gemma2, utilizing identical inference and training methods. We have created a [Jupyter Notebook](https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm) on Kaggle to demonstrate Kyara’s basic functionality. For service-level deployment, we recommend using Sglang or vllm to achieve greater throughput and robustness. ## Method The following sections provide a brief summary of Kyara's implementation strategy. ### Dataset Summary We have collected a total of 3.6M conversations, approximately 4.51 billion tokens. The following provides an overview of the language distribution and conversation rounds. * Language: <img src="https://i.imgur.com/fhD5kIy.png" alt="language-distribution" width="500"/> * Conversation Rounds: <img src="https://i.imgur.com/CWQ2shj.png" alt="conv-round-distribution" width="500"/> ### Dataset Construction The data construction for Kyara is divided into two parts: English and Chinese. For the English part, we have incorporated multiple high-quality open-source datasets, such as [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [arcee-ai/The-Tome](https://huggingface.co/datasets/arcee-ai/The-Tome), and performing semantic deduplication to drop out near-similar examples. As for the Chinese part, the construction follows the process outlined below: #### Base Dataset: Knowledge Injection with Retrieval Augmentation We developed a knowledge search system using open Chinese knowledge corpora, integrated with [QDrant](https://qdrant.tech/). To construct Supervised Fine-Tuning(SFT) pairs, we followed this process: 1. Sample documents from the knowledge base and generate knowledge-intensive questions that users might ask based on these texts. 2. (Optional) Increase instruction complexity using [Evol-Instruct](https://arxiv.org/pdf/2304.12244). 3. Apply query expansion on the generated instructions to retrieve additional Top K documents and individually assess their relevance: * For relevant documents, use an LLM to summarize key information related to the question. * For irrelevant documents, ignore them. 4. Let the LLM generate a detailed and comprehensive response according to the original document and K supplementary references. Besides, we would also aks LLM to generate an user prompt for high quality documents, and pair the (generated prompt, original document) as a SFT example. ##### Chinese Math Dataset * Dataset: [zake7749/kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K) While the aforementioned strategy can generate a wide range of knowledge-based texts, it primarily falls within the scope of information-seeking tasks and is not very effective in constructing mathematical and reasoning-related content. To address this, we generated 50,000 math problems based on [PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub). We then used `Gemini-1.5-Flash` to filter out data with obvious errors in calculation and reasoning, thereby creating [kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K). #### High Quality Dataset: Model Refinement After completing supervised learning using the base dataset, we will fine-tune the LLM again on a high-quality subset, primarily to address the following three issues: 1. Some responses in the Base Dataset were generated from small model, which sometimes performed poorly in following instructions. 2. We used various LLMs in the previous step to introduce knowledge diversity and language adaptability. However, we discovered subtle differences in response templates and reasoning approaches between different LLMs, leading to occasional instability in the trained Chat Model. Therefore, we would like to introduced a high-quality small dataset, using a single strong LLM to generate QA Pairs. 3. The Base Dataset includes some Q&A Pairs composed of generated queries and original document. While these data are rich in knowledge, they are relatively weak in terms of instruction following. To balance data diversity and quality, we adopted a strategy similar to [InsTag](https://arxiv.org/abs/2308.07074) to classify the data. We then used [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) and an LLM Judge to evaluate data quality, finally extracting the best training data from each category to create the Stage 1 Dataset of about 500K, which was used to fine-tune the Kyara-SFT Model again. ### Preference Learning We introduced Preference Learning in Kyara, which allows the model's responses to better align with human preferences while enhancing programming skills and mathematical reasoning abilities. Kyara’s preference learning strategy utilizes Direct Preference Optimization (DPO), integrating two custom-built Chinese datasets alongside two English datasets. * [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences) * [xinlai/Math-Step-DPO-10K](https://huggingface.co/datasets/xinlai/Math-Step-DPO-10K) Here, we summarize the construction strategy of the Chinese datasets. #### Chinese DPO ##### [SPIN/SPPO](https://github.com/uclaml/SPIN) We followed the original design, using Kyara-SFT to generate a set of contrastive data for the High Quality Dataset. ##### RLAIF Dataset: [zake7749/kyara-chinese-preference-dpo-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-preference-dpo-s0-30K) We extracted Chinese Prompts from `Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese`, `hfl/stem_zh_instruction`, and `FreedomIntelligence/Evol-Instruct-Chinese-GPT4`, and distributed the same prompt to four different LLMs. The competitors include: * GPT-4o * GPT-4-0618 * ChatGPT-3.5-0513 * Claude-Sonnet-3.5 * Yi-Large * Mixtral 8x22B * Gemini-Flash * Qwen2-72B-Instruct * DeepSeek V2 After response generation, we ask the LLMs to judge which one is better, using the following prompt: ``` **[Task]** Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. Your evaluation should consider correctness and helpfulness. 1. First, independently solve the user question step-by-step. 2. Then, compare both assistants’ answers with your answer. Identify and correct any mistakes. 3. Do not allow the length of the responses to influence your evaluation. 4. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: "[[A]]" if assistant A is better, "[[B]]" if assistant B is better, and "[[C]]" for a tie or if both A and B are bad. If the answers from A and B are very similar in terms of correctness, helpfulness, and relevance, meaning there is no "obvious" winner, judge it as a tie and output [[C]]. **[User Question]** {prompt} --- **[Assistant A’s Answer]** {answer} --- **[Assistant B’s Answer]** {prediction} --- ``` Finally, all four datasets were combined for DPO training. ## Feature ### Retrieval Augmented Generation (Experimental) Benefiting from Kyara's training method, we incorporated RAG-related content during the SFT phase. You can refer to the following examples to construct task templates: #### Input ``` # Reference Document <reference> <document> Document ID: id_27025b13 * Document Title: Flash_memory * Document Text: Another limitation of flash memory is its limited number of erase cycles (most commercial SLC flash memory guarantees around 100,000 erase cycles for the "0" zone, but due to manufacturing precision, other blocks are not guaranteed, and some might even have factory defects and be unusable). This limitation is partly offset by firmware or file systems that calculate write counts and perform dynamic remapping to distribute writes across different blocks; this technique is called wear leveling. Another method is known as Bad Block Management (BBM), where blocks are dynamically tested during write operations, and failed blocks are discarded. For most mobile devices, these wear management techniques can extend the life of internal flash memory (sometimes even beyond the device's lifespan). Moreover, partial data loss in these devices may be acceptable. However, for high-reliability data storage applications that require heavy data write cycles, flash memory is not recommended. But this limitation does not apply to read-only applications, such as routers and thin clients, which often only write once or a few times throughout their lifespan. ### Read Disturbance </document> <document> Document ID: id_858b1787 * Document Title: Flash_memory * Document Text: * TLC NAND flash memory typically has an endurance of around 1,000 or more cycles (Samsung 840); using multi-layer structures and adopting LDPC correction have extended the endurance. * QLC NAND flash memory can have an endurance ranging from 500 to 1,000 cycles. * SLC floating-gate NOR flash memory typically has a write endurance of 100,000 to 1,000,000 cycles (Numonyx M58BW 100k; Spansion S29CD016J 1,000k). * MLC floating-gate NOR flash memory usually has a write endurance of 100,000 cycles (Numonyx J3 flash). These values are approximate and depend on the technology and positioning of different manufacturers' products. Finer process technologies can improve read/write performance and capacity, but they may also pose greater challenges in terms of write endurance. Specific algorithms and design examples, such as wear leveling and memory over-provisioning, can be used to adjust storage system endurance to meet specific needs. Wear leveling is essential for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs. ## Flash Memory File Systems </document> <document> Document ID: id_df34eb65 * Document Title: Memory_over-provisioning * Document Text: ## Basic SSD Operations Due to the nature of flash memory operations, data cannot be overwritten directly like in hard drives. When data is first written to an SSD, the cells are in an erased state, so the data can be written directly, one page at a time (usually 4 to 8 KB in size). The SSD controller, which manages the flash memory and interfaces with the main control system, uses a logical-to-physical mapping system called Logical Block Addressing (LBA), part of the flash translation layer (FTL). When new data needs to replace old data, the SSD controller writes the new data to a new location and updates the logical mapping to point to the new physical location. The original data becomes invalid and must be erased before it can be rewritten. Flash memory has a limited number of program/erase (P/E) cycles. Typically, this is expressed as the maximum number of P/E cycles that flash memory can endure over its lifetime. Single-level cell (SLC) flash memory is generally designed for high performance and long life, typically supporting 50,000 to 100,000 cycles. As of 2011, multi-level cell (MLC) flash memory, designed for low-cost applications, has far fewer cycles, usually only 3,000 to 5,000 cycles. Since 2013, triple-level cell (TLC) flash memory has been introduced, with P/E cycles dropping to around 1,000. The lower the write amplification, the better, as it corresponds to fewer P/E cycles, which extends the lifespan of the SSD. </document> </reference> --- # Task Description Please refer to the content in the <reference> above and answer the user's question. When answering, pay attention to the following points: 1. Appropriately cite the reference document ID to support your argument. For example: If we have Article 1 with ID: dq945509 and Article 2 with ID: 6a178c5, you can cite as follows: "This is a factual statement"【dq945509】 and "This is another factual statement"【6a178c5】. 2. If you find that the reference document lacks relevant information to the user's question, output: "The reference documents lack information relevant to the question." Do not fabricate unrelated answers. # User's Question How does the write limit of memory cards affect their lifespan? What techniques can extend the lifespan of memory cards in scenarios with frequent writes? ``` #### Output ``` The write limit of memory cards, particularly flash memory, significantly affects their lifespan due to the limited number of program/erase (P/E) cycles they can endure. For instance, single-level cell (SLC) flash memory, which is designed for high performance and long life, typically supports 50,000 to 100,000 cycles【id_df34eb65】. In contrast, multi-level cell (MLC) flash memory, which is more cost-effective, has fewer cycles, usually around 3,000 to 5,000 cycles【id_df34eb65】. As of 2013, triple-level cell (TLC) flash memory has even lower endurance, with around 1,000 cycles【id_df34eb65】. To mitigate the impact of these limitations, several techniques can extend the lifespan of memory cards in scenarios with frequent writes: 1. **Wear Leveling**: This technique involves distributing writes across different blocks to prevent any single block from being overworked. Wear leveling is crucial for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs【id_858b1787】. 2. **Bad Block Management (BBM)**: In this method, blocks are dynamically tested during write operations, and failed blocks are discarded. This ensures that only healthy blocks are used for data storage, thereby extending the overall lifespan of the memory card【id_27025b13】. 3. **Memory Over-Provisioning**: This involves allocating extra storage space beyond the actual capacity of the flash memory to account for wear and tear. This can help to maintain performance and extend the lifespan of the memory card by allowing for more writes before reaching the physical limits of the memory cells【id_df34eb65】. These techniques are particularly important for mobile devices and applications that require frequent data writes, such as routers and thin clients, where the partial data loss in these devices may be acceptable【id_27025b13】. ``` Please refer to [Kyara-RAG](https://github.com/zake7749/kyara-rag) for more details. ## Limitation Like most models, Kyara still suffers from hallucinations, and possibly due to the nature of the data, it particularly enjoys quoting references when answering questions. For example, if you ask the model to explain a line from "Legend of Mortal": ``` 該如何理解:「俠者,不受名利所牽,但求俯仰無愧。正邪存乎在我,聲名留問諸君。非我桀傲不馴,乃因唐門薪火,自古如今!」 ``` However, the model would respond that this quote is from The "Legend of the Condor Heroes." ``` 這句話出自《神雕俠侶》中的《神雕手記》,是楊過對自己處世態度的一種表達。讓我們逐句拆解理解: 1. **「俠者,不受名利所牽,但求俯仰無愧。」** - **俠者**:指俠義之士,講究義氣和正義的英雄。 - **不受名利所牽**:不受外在名利和利益的影響,保持自我的正直。 - **但求俯仰無愧**:只求自己行動的坦蕩和內心的清淨,不求外界的認可或評價。 2. **「正邪存乎在我,聲名留問諸君。」** - **正邪存乎在我**:善惡的歸屬在於自己,決策和行動的結果在於自己。 - **聲名留問諸君**:聲名是別人會問津的,但自己對正義的追求和內心的正直,才是最值得自豪和珍視的。 3. **「非我桀傲不馴,乃因唐門薪火,自古如今!」** - **非我桀傲不馴**:並不是我桀驁不馴(驕傲自負),而是因唐門的榮耀和傳承,我必須遵循這些責任。 - **乃因唐門薪火**:是因為唐門(一種武學門派)的薪火相傳,這是一份深重的責任。 - **自古如今**:自古至今,俠義精神和門派的責任感,一直是人們所崇敬的,這是歷史和傳統的延續。 總結起來,這段話表達了楊過對於自己行為的獨特理解和自豪感。他明白自己的行為和價值觀取決於個人的內心和對正義的追求,而非外界的評價和名利。他也承認了自己的責任,作為唐門下一代,必須繼承和發揚門風,這一點是無可替代的。 ``` It is recommended to exercise caution when using language models. ## Citation ```bibtex @software{Yang_Kyara_2024, author = {Yang, Kai-Chou}, doi = {10.5281/zenodo.14001626}, month = oct, title = {{Kyara}}, url = {https://github.com/zake7749/Kyara}, version = {1.0.0}, year = {2024} } ```
{}
task
[ "TRANSLATION" ]
42,749
adnaan05/cnn_news_summary_model_trained_on_reduced_data
adnaan05
text2text-generation
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:abisee/cnn_dailymail", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-09-22T09:23:00Z
2024-10-04T13:21:39+00:00
15
0
--- base_model: t5-small datasets: - abisee/cnn_dailymail library_name: transformers license: apache-2.0 metrics: - rouge tags: - generated_from_trainer model-index: - name: cnn_news_summary_model_trained_on_reduced_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an **[cnn_dailymail](https://huggingface.co/datasets/abisee/cnn_dailymail)** dataset. It achieves the following results on the evaluation set: - ***Loss***: 1.6597 - **Rouge_1**: 0.2162 - **Rouge_2**: 0.0943 - **Rouge_l**: 0.1834 - **Rouge_lsum**: 0.1834 - **Generated_Length**: 19.0 ## Model description **Base Model:** *t5-small*, which is a smaller version of the *T5 (Text-to-Text Transfer Transformer) model* developed by ***Google***. This model can be particularly useful if you need to quickly summarize large volumes of text, making it easier to digest and understand key information. ## Intended uses & limitations * ### Intended Use * The model is designed for **text summarization**, which involves condensing long pieces of text into shorter, more digestible summaries. Here are some specific use cases: * **News Summarization:** Quickly summarizing news articles to provide readers with the main points. * **Document Summarization**: Condensing lengthy reports or research papers into brief overviews. * **Content Curation**: Helping content creators and curators to generate summaries for newsletters, blogs, or social media posts. * **Educational Tools**: Assisting students and educators by summarizing academic texts and articles. * ### Limitations * While the model is powerful, it does have some limitations: * **Accuracy**: The summaries generated might not always capture all the key points accurately, especially for complex or nuanced texts. * **Bias**: The model can inherit biases present in the training data, which might affect the quality and neutrality of the summaries. * **Context Understanding**: It might struggle with understanding the full context of very long documents, leading to incomplete or misleading summaries. * **Language and Style**: The model’s output might not always match the desired tone or style, requiring further editing. * **Data Dependency**: Performance can vary depending on the quality and nature of the input data. It performs best on data similar to its training set (news articles) ## Training and evaluation data The model was trained using the Adam optimizer with a learning rate of **2e-05** over **2 epochs**. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 288 | 1.6727 | 0.217 | 0.0949 | 0.1841 | 0.1839 | 19.0 | | 1.9118 | 2.0 | 576 | 1.6597 | 0.2162 | 0.0943 | 0.1834 | 0.1834 | 19.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an **[cnn_dailymail](https://huggingface.co/datasets/abisee/cnn_dailymail)** dataset. It achieves the following results on the evaluation set: - ***Loss***: 1.6597 - **Rouge_1**: 0.2162 - **Rouge_2**: 0.0943 - **Rouge_l**: 0.1834 - **Rouge_lsum**: 0.1834 - **Generated_Length**: 19.0 ## Model description **Base Model:** *t5-small*, which is a smaller version of the *T5 (Text-to-Text Transfer Transformer) model* developed by ***Google***. This model can be particularly useful if you need to quickly summarize large volumes of text, making it easier to digest and understand key information. ## Intended uses & limitations * ### Intended Use * The model is designed for **text summarization**, which involves condensing long pieces of text into shorter, more digestible summaries. Here are some specific use cases: * **News Summarization:** Quickly summarizing news articles to provide readers with the main points. * **Document Summarization**: Condensing lengthy reports or research papers into brief overviews. * **Content Curation**: Helping content creators and curators to generate summaries for newsletters, blogs, or social media posts. * **Educational Tools**: Assisting students and educators by summarizing academic texts and articles. * ### Limitations * While the model is powerful, it does have some limitations: * **Accuracy**: The summaries generated might not always capture all the key points accurately, especially for complex or nuanced texts. * **Bias**: The model can inherit biases present in the training data, which might affect the quality and neutrality of the summaries. * **Context Understanding**: It might struggle with understanding the full context of very long documents, leading to incomplete or misleading summaries. * **Language and Style**: The model’s output might not always match the desired tone or style, requiring further editing. * **Data Dependency**: Performance can vary depending on the quality and nature of the input data. It performs best on data similar to its training set (news articles) ## Training and evaluation data The model was trained using the Adam optimizer with a learning rate of **2e-05** over **2 epochs**. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 288 | 1.6727 | 0.217 | 0.0949 | 0.1841 | 0.1839 | 19.0 | | 1.9118 | 2.0 | 576 | 1.6597 | 0.2162 | 0.0943 | 0.1834 | 0.1834 | 19.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
{"base_model": "t5-small", "datasets": ["abisee/cnn_dailymail"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "cnn_news_summary_model_trained_on_reduced_data", "results": []}]}
task
[ "SUMMARIZATION" ]
42,750
billingsmoore/tibetan-phonetic-transliteration
billingsmoore
null
[ "safetensors", "t5", "nlp", "transliteration", "tibetan", "buddhism", "bo", "dataset:billingsmoore/tibetan-phonetic-transliteration-dataset", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:cc-by-nc-4.0", "region:us" ]
2024-09-08T15:54:51Z
2024-10-11T02:15:07+00:00
24
0
--- base_model: google-t5/t5-small datasets: - billingsmoore/tibetan-phonetic-transliteration-dataset language: - bo license: cc-by-nc-4.0 tags: - nlp - transliteration - tibetan - buddhism --- # Model Card for tibetan-phonetic-transliteration This model is a text2text generation model for phonetic transliteration of Tibetan script. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** billingsmoore - **Model type:** text2text generation - **Language(s) (NLP):** Tibetan - **License:** [Attribution-NonCommercial 4.0 International ](Attribution-NonCommercial 4.0 International ) - **Finetuned from model:** ['google-t5/t5-small'](https://huggingface.co/google-t5/t5-small) ### Model Sources - **Repository:** [https://github.com/billingsmoore/MLotsawa](https://github.com/billingsmoore/MLotsawa) ## Uses The intended use of this model is to provide phonetic transliteration of Tibetan script, typically as part of a larger Tibetan translation ecosystem. ### Direct Use To use the model for transliteration in a python script, you can use the transformers library like so: ```python from transformers import pipeline transliterator = pipeline('translation',model='billingsmoore/tibetan-phonetic-transliteration') transliterated_text = transliterator(<string of unicode Tibetan script>) ``` ### Downstream Use The model can be finetuned for a specific use case using the following code. ```python from datasets import load_dataset from transformers import AutoTokenizer, DataCollatorForSeq2Seq, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer, Adafactor from accelerate import Accelerator dataset = load_dataset(<your dataset>) dataset = dataset['train'].train_test_split(.1) checkpoint = "billingsmoore/tibetan-phonetic-transliteration" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto") data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) source_lang = 'bo' target_lang = 'phon' def preprocess_function(examples): inputs = [example for example in examples[source_lang]] targets = [example for example in examples[target_lang]] model_inputs = tokenizer(inputs, text_target=targets, max_length=256, truncation=True, padding="max_length") return model_inputs tokenized_dataset = dataset.map(preprocess_function, batched=True) optimizer = Adafactor( model.parameters(), scale_parameter=True, relative_step=False, warmup_init=False, lr=3e-4 ) accelerator = Accelerator() model, optimizer = accelerator.prepare(model, optimizer) training_args = Seq2SeqTrainingArguments( output_dir=".", auto_find_batch_size=True, predict_with_generate=True, fp16=False, push_to_hub=False, eval_strategy='epoch', save_strategy='epoch', load_best_model_at_end=True, num_train_epochs=5 ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_dataset['train'], eval_dataset=tokenized_dataset['test'], tokenizer=tokenizer, optimizers=(optimizer, None), data_collator=data_collator ) trainer.train() ``` ## Bias, Risks, and Limitations This model was trained exclusively on material from the Tibetan Buddhist canon and thus on Literary Tibetan. It may not perform satisfactorily on texts from other corpi or on other dialects of Tibetan. ### Recommendations For users who wish to use the model for other texts, I recommend further finetuning on your own dataset using the instructions above. ## Training Details This model was trained on 98597 pairs of text, the first member of which is a line of unicode Tibetan text, the second (the target) is a the phonetic transliteration of the first. This dataset was scraped from Lotsawa House and is released on Kaggle under the same license as the texts from which it is sourced. [You can find this dataset and more information on Kaggle by clicking here.](https://www.kaggle.com/datasets/billingsmoore/tibetan-phonetic-transliteration-pairs) [You can find this dataset and more information on Huggingface by clicking here.](https://huggingface.co/datasets/billingsmoore/tibetan-phonetic-transliteration-dataset) This model was trained for five epochs. Further information regarding training can be found in the documentation of the [MLotsawa repository](https://github.com/billingsmoore/MLotsawa). ## Model Card Contact billingsmoore [at] gmail [dot] com
null
Non_BioNLP
# Model Card for tibetan-phonetic-transliteration This model is a text2text generation model for phonetic transliteration of Tibetan script. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** billingsmoore - **Model type:** text2text generation - **Language(s) (NLP):** Tibetan - **License:** [Attribution-NonCommercial 4.0 International ](Attribution-NonCommercial 4.0 International ) - **Finetuned from model:** ['google-t5/t5-small'](https://huggingface.co/google-t5/t5-small) ### Model Sources - **Repository:** [https://github.com/billingsmoore/MLotsawa](https://github.com/billingsmoore/MLotsawa) ## Uses The intended use of this model is to provide phonetic transliteration of Tibetan script, typically as part of a larger Tibetan translation ecosystem. ### Direct Use To use the model for transliteration in a python script, you can use the transformers library like so: ```python from transformers import pipeline transliterator = pipeline('translation',model='billingsmoore/tibetan-phonetic-transliteration') transliterated_text = transliterator(<string of unicode Tibetan script>) ``` ### Downstream Use The model can be finetuned for a specific use case using the following code. ```python from datasets import load_dataset from transformers import AutoTokenizer, DataCollatorForSeq2Seq, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer, Adafactor from accelerate import Accelerator dataset = load_dataset(<your dataset>) dataset = dataset['train'].train_test_split(.1) checkpoint = "billingsmoore/tibetan-phonetic-transliteration" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto") data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) source_lang = 'bo' target_lang = 'phon' def preprocess_function(examples): inputs = [example for example in examples[source_lang]] targets = [example for example in examples[target_lang]] model_inputs = tokenizer(inputs, text_target=targets, max_length=256, truncation=True, padding="max_length") return model_inputs tokenized_dataset = dataset.map(preprocess_function, batched=True) optimizer = Adafactor( model.parameters(), scale_parameter=True, relative_step=False, warmup_init=False, lr=3e-4 ) accelerator = Accelerator() model, optimizer = accelerator.prepare(model, optimizer) training_args = Seq2SeqTrainingArguments( output_dir=".", auto_find_batch_size=True, predict_with_generate=True, fp16=False, push_to_hub=False, eval_strategy='epoch', save_strategy='epoch', load_best_model_at_end=True, num_train_epochs=5 ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_dataset['train'], eval_dataset=tokenized_dataset['test'], tokenizer=tokenizer, optimizers=(optimizer, None), data_collator=data_collator ) trainer.train() ``` ## Bias, Risks, and Limitations This model was trained exclusively on material from the Tibetan Buddhist canon and thus on Literary Tibetan. It may not perform satisfactorily on texts from other corpi or on other dialects of Tibetan. ### Recommendations For users who wish to use the model for other texts, I recommend further finetuning on your own dataset using the instructions above. ## Training Details This model was trained on 98597 pairs of text, the first member of which is a line of unicode Tibetan text, the second (the target) is a the phonetic transliteration of the first. This dataset was scraped from Lotsawa House and is released on Kaggle under the same license as the texts from which it is sourced. [You can find this dataset and more information on Kaggle by clicking here.](https://www.kaggle.com/datasets/billingsmoore/tibetan-phonetic-transliteration-pairs) [You can find this dataset and more information on Huggingface by clicking here.](https://huggingface.co/datasets/billingsmoore/tibetan-phonetic-transliteration-dataset) This model was trained for five epochs. Further information regarding training can be found in the documentation of the [MLotsawa repository](https://github.com/billingsmoore/MLotsawa). ## Model Card Contact billingsmoore [at] gmail [dot] com
{"base_model": "google-t5/t5-small", "datasets": ["billingsmoore/tibetan-phonetic-transliteration-dataset"], "language": ["bo"], "license": "cc-by-nc-4.0", "tags": ["nlp", "transliteration", "tibetan", "buddhism"]}
task
[ "TRANSLATION" ]
42,751