Dataset schema (column name, type, observed range or class count):

| Column | Type | Observed range / classes |
|---|---|---|
| id | string | lengths 6 to 113 |
| author | string | lengths 2 to 36 |
| task_category | string | 42 classes |
| tags | list | lengths 1 to 4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 to 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| downloads | int64 | 0 to 118M |
| likes | int64 | 0 to 4.86k |
| README | string | lengths 30 to 1.01M |
| matched_bigbio_names | list | lengths 1 to 8 |
| is_bionlp | string | 3 classes |
| model_cards | string | lengths 0 to 1M |
| metadata | string | lengths 2 to 698k |
| source | string | 2 classes |
| matched_task | list | lengths 1 to 10 |
| __index_level_0__ | int64 | 0 to 46.9k |
gokuls/add_BERT_no_pretrain_qnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-29T05:19:07Z
2023-06-14T14:31:55+00:00
8
0
---
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: add_BERT_no_pretrain_qnli
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GLUE QNLI
      type: glue
      config: qnli
      split: validation
      args: qnli
    metrics:
    - type: accuracy
      value: 0.528830313014827
      name: Accuracy
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# add_BERT_no_pretrain_qnli

This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.6899
- Accuracy: 0.5288

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7079        | 1.0   | 819  | 0.7210          | 0.5054   |
| 0.6952        | 2.0   | 1638 | 0.6912          | 0.4946   |
| 0.6922        | 3.0   | 2457 | 0.6905          | 0.5279   |
| 0.6918        | 4.0   | 3276 | 0.6899          | 0.5288   |
| 0.6922        | 5.0   | 4095 | 0.6922          | 0.5153   |
| 0.6933        | 6.0   | 4914 | 0.6926          | 0.5127   |
| 0.6931        | 7.0   | 5733 | 0.6952          | 0.4946   |
| 0.6933        | 8.0   | 6552 | 0.6928          | 0.5113   |
| 0.693         | 9.0   | 7371 | 0.6922          | 0.5215   |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
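The `linear` scheduler in the hyperparameters above decays the learning rate from its initial value to zero over the full run; with 819 steps per epoch (from the results table) and 50 epochs, the decay can be sketched in plain Python. This is an illustration of the schedule only, not code from the training run, and it assumes no warmup steps:

```python
def linear_lr(step: int, base_lr: float = 4e-5, total_steps: int = 819 * 50) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0))         # start of training: the full base rate, 4e-05
print(linear_lr(819 * 25))  # midpoint: roughly half the base rate
print(linear_lr(819 * 50))  # end of training: 0.0
```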
null
Non_BioNLP
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "add_BERT_no_pretrain_qnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "config": "qnli", "split": "validation", "args": "qnli"}, "metrics": [{"type": "accuracy", "value": 0.528830313014827, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,101
fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-39088299
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-39088299", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-29T11:19:59Z
2024-05-29T11:20:16+00:00
7
0
---
datasets:
- fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-39088299
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---

This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: None

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-39088299',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
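The `cos_sim` call in the example above is simply the cosine of the angle between the two embedding vectors. As a plain-Python illustration of what it computes (for understanding only; use the library helper in practice):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of a and b divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```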
null
Non_BioNLP
{"datasets": ["fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-39088299", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
task
[ "TEXT_CLASSIFICATION" ]
46,102
fine-tuned/very_specific_technical_questions_about_Ubuntu
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Ubuntu", "Linux", "Software", "OperatingSystem", "Technology", "custom_code", "en", "dataset:fine-tuned/very_specific_technical_questions_about_Ubuntu", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-07T14:46:39Z
2024-05-07T16:18:35+00:00
6
0
---
datasets:
- fine-tuned/very_specific_technical_questions_about_Ubuntu
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Ubuntu
- Linux
- Software
- OperatingSystem
- Technology
---

This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: technical support search for Ubuntu

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/very_specific_technical_questions_about_Ubuntu',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
null
Non_BioNLP
{"datasets": ["fine-tuned/very_specific_technical_questions_about_Ubuntu", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Ubuntu", "Linux", "Software", "OperatingSystem", "Technology"]}
task
[ "TEXT_CLASSIFICATION" ]
46,103
facebook/bart-base
facebook
feature-extraction
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bart", "feature-extraction", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-11-16T23:23:10+00:00
2,954,063
178
---
language: en
license: apache-2.0
---

# BART (base-sized model)

BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
```

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author     = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer},
  title      = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
  journal    = {CoRR},
  volume     = {abs/1910.13461},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint     = {1910.13461},
  timestamp  = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
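The denoising objective described in the model description above (corrupt text with a noising function, then learn to reconstruct it) can be illustrated with a toy text-infilling corruption step. This is a sketch of the idea only, not the actual noising function used in pre-training:

```python
import random

def corrupt(tokens: list[str], span_len: int = 2, seed: int = 0) -> list[str]:
    """Toy version of BART's text-infilling noise: replace one random
    contiguous span of tokens with a single <mask> token. The model is
    then trained to reconstruct the original, uncorrupted sequence."""
    rng = random.Random(seed)
    start = rng.randrange(len(tokens) - span_len + 1)
    return tokens[:start] + ["<mask>"] + tokens[start + span_len:]

sentence = "the quick brown fox jumps over the lazy dog".split()
print(corrupt(sentence))  # the original sentence with one 2-token span masked
```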
null
Non_BioNLP
{"language": "en", "license": "apache-2.0"}
task
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
46,104
dyyyyyyyy/GNER-T5-large
dyyyyyyyy
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:Universal-NER/Pile-NER-type", "arxiv:2402.16602", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-02-27T08:55:15Z
2024-03-09T08:23:13+00:00
106
2
---
datasets:
- Universal-NER/Pile-NER-type
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- f1
pipeline_tag: text2text-generation
---

<p align="center"><h2 align="center">Rethinking Negative Instances for Generative Named Entity Recognition</h2></p>

# Model Card for GNER-T5-large

<!-- Provide a quick summary of what the model is/does. -->

We introduce GNER, a **G**enerative **N**amed **E**ntity **R**ecognition framework, which demonstrates enhanced zero-shot capabilities across unseen entity domains. Experiments on two representative generative models, i.e., LLaMA and Flan-T5, show that the integration of negative instances into the training process yields substantial performance enhancements. The resulting models, GNER-LLaMA and GNER-T5, outperform state-of-the-art (SoTA) approaches by a large margin, achieving improvements of 8 and 11 points in $F_1$ score, respectively. Code and models are publicly available.

* 💻 Code: [https://github.com/yyDing1/GNER/](https://github.com/yyDing1/GNER/)
* 📖 Paper: [Rethinking Negative Instances for Generative Named Entity Recognition](https://arxiv.org/abs/2402.16602)
* 💾 Models in the 🤗 HuggingFace Hub: [GNER-Models](https://huggingface.co/collections/dyyyyyyyy/gner-65dda2cb96c6e35c814dea56)
* 🧪 Reproduction Materials: [Reproduction Materials](https://drive.google.com/drive/folders/1m2FqDgItEbSoeUVo-i18AwMvBcNkZD46?usp=drive_link)
* 🎨 Example Jupyter Notebooks: [GNER Notebook](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb)

<p align="center">
<img src="https://github.com/yyDing1/GNER/raw/main/assets/zero_shot_results.png">
</p>

## PreTrained Models

We release five GNER models based on LLaMA (7B) and Flan-T5 (base, large, xl and xxl).

| Model         | # Params | Zero-shot Average $F_1$ | Supervised Average $F_1$ | 🤗 HuggingFace<br />Download Link |
| ------------- | -------: | :---------------------: | :----------------------: | :-----------------------------: |
| GNER-LLaMA    | 7B       | 66.1                    | 86.09                    | [link](https://huggingface.co/dyyyyyyyy/GNER-LLaMA-7B) |
| GNER-T5-base  | 248M     | 59.5                    | 83.21                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-base) |
| GNER-T5-large | 783M     | 63.5                    | 85.45                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-large) |
| GNER-T5-xl    | 3B       | 66.1                    | 85.94                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xl) |
| GNER-T5-xxl   | 11B      | 69.1                    | 86.15                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xxl) |

## Demo usage

You should install the dependencies:

```bash
pip install torch datasets deepspeed accelerate transformers protobuf
```

Please check out [Example Jupyter Notebooks](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb) for guidance on utilizing GNER models.

A simple inference example (using `GNER-T5`) is as follows:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("dyyyyyyyy/GNER-T5-xxl")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("dyyyyyyyy/GNER-T5-xxl", torch_dtype=torch.bfloat16).cuda()
>>> model = model.eval()
>>> instruction_template = "Please analyze the sentence provided, identifying the type of entity for each word on a token-by-token basis.\nOutput format is: word_1(label_1), word_2(label_2), ...\nWe'll use the BIO-format to label the entities, where:\n1. B- (Begin) indicates the start of a named entity.\n2. I- (Inside) is used for words within a named entity but are not the first word.\n3. O (Outside) denotes words that are not part of a named entity.\n"
>>> sentence = "did george clooney make a musical in the 1980s"
>>> entity_labels = ["genre", "rating", "review", "plot", "song", "average ratings", "director", "character", "trailer", "year", "actor", "title"]
>>> instruction = f"{instruction_template}\nUse the specific entity tags: {', '.join(entity_labels)} and O.\nSentence: {sentence}"
>>> inputs = tokenizer(instruction, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, max_new_tokens=640)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
"did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"
```

## Citation

```bibtex
@misc{ding2024rethinking,
      title={Rethinking Negative Instances for Generative Named Entity Recognition},
      author={Yuyang Ding and Juntao Li and Pinzheng Wang and Zecheng Tang and Bowen Yan and Min Zhang},
      year={2024},
      eprint={2402.16602},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
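The `word_1(label_1), word_2(label_2), ...` response format shown above is easy to post-process into (token, label) pairs. A small parser sketch follows; the helper name is ours and not part of the GNER codebase:

```python
import re

def parse_gner_output(response: str) -> list[tuple[str, str]]:
    """Split 'word_1(label_1), word_2(label_2), ...' into (word, label) pairs."""
    return re.findall(r"(\S+?)\(([^)]+)\)", response)

response = "did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"
print(parse_gner_output(response))
```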
null
Non_BioNLP
{"datasets": ["Universal-NER/Pile-NER-type"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["f1"], "pipeline_tag": "text2text-generation"}
task
[ "NAMED_ENTITY_RECOGNITION" ]
46,105
gtfintechlab/SubjECTiveQA-CLEAR
gtfintechlab
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "en", "dataset:gtfintechlab/subjectiveqa", "arxiv:2410.20651", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-08T19:01:02Z
2024-12-17T00:24:26+00:00
0
0
---
base_model:
- google-bert/bert-base-uncased
datasets:
- gtfintechlab/subjectiveqa
language:
- en
library_name: transformers
license: cc-by-4.0
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: text-classification
---

# SubjECTiveQA-CLEAR Model

**Model Name:** SubjECTiveQA-CLEAR
**Model Type:** Text Classification
**Language:** English
**License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
**Base Model:** [google-bert/bert-base-uncased](https://huggingface.co/google/bert-base-uncased)
**Dataset Used for Training:** [gtfintechlab/SubjECTive-QA](https://huggingface.co/datasets/gtfintechlab/SubjECTive-QA)

## Model Overview

SubjECTiveQA-CLEAR is a fine-tuned BERT-based model designed to classify text data according to the 'CLEAR' attribute. The 'CLEAR' attribute is one of several subjective attributes annotated in the SubjECTive-QA dataset, which focuses on subjective question-answer pairs in financial contexts.

## Intended Use

This model is intended for researchers and practitioners working on subjective text classification, particularly within financial domains. It is specifically designed to assess the 'CLEAR' attribute in question-answer pairs, aiding in the analysis of subjective content in financial communications.

## How to Use

To utilize this model, you can load it using the Hugging Face `transformers` library:

```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig

# Load the tokenizer, model, and configuration
tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/SubjECTiveQA-CLEAR", do_lower_case=True, do_basic_tokenize=True)
model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/SubjECTiveQA-CLEAR", num_labels=3)
config = AutoConfig.from_pretrained("gtfintechlab/SubjECTiveQA-CLEAR")

# Initialize the text classification pipeline
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt")

# Classify the 'CLEAR' attribute in your question-answer pairs
qa_pairs = [
    "Question: What are your company's projections for the next quarter? Answer: We anticipate a 10% increase in revenue due to the launch of our new product line.",
    "Question: Can you explain the recent decline in stock prices? Answer: Market fluctuations are normal, and we are confident in our long-term strategy."
]

results = classifier(qa_pairs, batch_size=128, truncation="only_first")
print(results)
```

## Label Interpretation

- **LABEL_0:** Negatively Demonstrative of 'CLEAR' (0)
  Indicates that the response lacks clarity.
- **LABEL_1:** Neutral Demonstration of 'CLEAR' (1)
  Indicates that the response has an average level of clarity.
- **LABEL_2:** Positively Demonstrative of 'CLEAR' (2)
  Indicates that the response is clear and transparent.

## Training Data

The model was trained on the SubjECTive-QA dataset, which comprises question-answer pairs from financial contexts, annotated with various subjective attributes, including 'CLEAR'. The dataset is divided into training, validation, and test sets, facilitating robust model training and evaluation.

## Citation

If you use this model in your research, please cite the SubjECTive-QA dataset:

```
@article{SubjECTiveQA,
  title={SubjECTive-QA: Measuring Subjectivity in Earnings Call Transcripts' QA Through Six-Dimensional Feature Analysis},
  author={Huzaifa Pardawala, Siddhant Sukhani, Agam Shah, Veer Kejriwal, Abhishek Pillai, Rohan Bhasin, Andrew DiBiasio, Tarun Mandapati, Dhruv Adha, Sudheer Chava},
  journal={arXiv preprint arXiv:2410.20651},
  year={2024}
}
```

For more details, refer to the [SubjECTive-QA dataset documentation](https://huggingface.co/datasets/gtfintechlab/SubjECTive-QA).

## Contact

For any SubjECTive-QA related issues and questions, please contact:
- Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu
- Siddhant Sukhani: ssukhani3[at]gatech[dot]edu
- Agam Shah: ashah482[at]gatech[dot]edu
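The three pipeline labels described under 'Label Interpretation' above map cleanly onto a small lookup table. The following is a convenience sketch (the dictionary and function names are ours, not part of the released model):

```python
# Label meanings taken from the card's 'Label Interpretation' section.
CLEAR_LABELS = {
    "LABEL_0": ("negative", "response lacks clarity"),
    "LABEL_1": ("neutral", "average level of clarity"),
    "LABEL_2": ("positive", "response is clear and transparent"),
}

def describe(prediction: dict) -> str:
    """Turn one pipeline output dict into a readable string."""
    polarity, meaning = CLEAR_LABELS[prediction["label"]]
    return f"{polarity} ({meaning}), score={prediction['score']:.2f}"

print(describe({"label": "LABEL_2", "score": 0.91}))  # positive (response is clear and transparent), score=0.91
```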
null
Non_BioNLP
{"base_model": ["google-bert/bert-base-uncased"], "datasets": ["gtfintechlab/subjectiveqa"], "language": ["en"], "library_name": "transformers", "license": "cc-by-4.0", "metrics": ["accuracy", "precision", "recall", "f1"], "pipeline_tag": "text-classification"}
task
[ "TEXT_CLASSIFICATION" ]
46,106
SumithThota/LingualsenseLSTM
SumithThota
null
[ "region:us" ]
2025-01-19T11:08:17Z
2025-01-19T11:19:20+00:00
0
0
--- {} --- # LingualSense_Infosys_Internship_Oct2024 The goal is to build a model that can automatically identify the language of a given text. Language identification is essential for various applications, including machine translation, multilingual document tracking, and electronic devices (e.g., mobiles, laptops).
null
Non_BioNLP
# LingualSense_Infosys_Internship_Oct2024 The goal is to build a model that can automatically identify the language of a given text. Language identification is essential for various applications, including machine translation, multilingual document tracking, and electronic devices (e.g., mobiles, laptops).
{}
task
[ "TRANSLATION" ]
46,107
RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-08-05T07:40:04Z
2024-08-05T09:03:03+00:00
127
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral-7b-english-welsh-translate - GGUF - Model creator: https://huggingface.co/AndreasThinks/ - Original model: https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral-7b-english-welsh-translate.Q2_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q2_K.gguf) | Q2_K | 2.54GB | | [mistral-7b-english-welsh-translate.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_XS.gguf) | IQ3_XS | 2.82GB | | [mistral-7b-english-welsh-translate.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_S.gguf) | IQ3_S | 2.97GB | | [mistral-7b-english-welsh-translate.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mistral-7b-english-welsh-translate.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mistral-7b-english-welsh-translate.Q3_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K.gguf) | Q3_K | 3.28GB | | [mistral-7b-english-welsh-translate.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | 
[mistral-7b-english-welsh-translate.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mistral-7b-english-welsh-translate.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ4_XS.gguf) | IQ4_XS | 3.68GB | | [mistral-7b-english-welsh-translate.Q4_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_0.gguf) | Q4_0 | 3.83GB | | [mistral-7b-english-welsh-translate.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mistral-7b-english-welsh-translate.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mistral-7b-english-welsh-translate.Q4_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K.gguf) | Q4_K | 4.07GB | | [mistral-7b-english-welsh-translate.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mistral-7b-english-welsh-translate.Q4_1.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_1.gguf) | Q4_1 | 4.24GB | | [mistral-7b-english-welsh-translate.Q5_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_0.gguf) | Q5_0 | 4.66GB | | 
[mistral-7b-english-welsh-translate.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K_S.gguf) | Q5_K_S | 4.66GB | | [mistral-7b-english-welsh-translate.Q5_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K.gguf) | Q5_K | 4.78GB | | [mistral-7b-english-welsh-translate.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mistral-7b-english-welsh-translate.Q5_1.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_1.gguf) | Q5_1 | 5.07GB | | [mistral-7b-english-welsh-translate.Q6_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q6_K.gguf) | Q6_K | 5.54GB | | [mistral-7b-english-welsh-translate.Q8_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 tags: - axolotl - generated_from_trainer model-index: - name: mistral-7b-english-welsh-translate results: [] --- # mistral-7b-english-welsh-translate This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the [Welsh Government Alpaca Welsh-English Instructions](https://huggingface.co/datasets/AndreasThinks/welsh-translation-instruction/blob/main/README.md) dataset. This model is trained for English-Welsh translation (in any direction), with a focus on government documents, using Markdown formatting. 
To ensure the highest quality translations, use the Alpaca instruction prompt format with the below structure. ``` ### Instruction: {instruction} ### Input: {input} ### Response: ``` Your instruction should be "Translate the text from English to Welsh." (or vice versa). The model is also available [quantized as GGUF](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate-GGUF). This version can be [tested in this interactive space](https://huggingface.co/spaces/AndreasThinks/welsh-english-translator). ## Running the model The model is intended to be run locally, ideally using [Text generation web UI](https://github.com/oobabooga/text-generation-webui) to ensure correct prompt structure. Start the UI as instructed for your system. - In the "Model" tab, download either this model or [the quantized version](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate-GGUF). Once the download is complete, load the model. - In the "Parameters" tab, under "Generation", set "auto_max_new_tokens" to maximum, and "Ban the eos_token" to True. In "Custom stopping strings", add "### Input" - In the "Notebook" tab, make sure you are using the "Alpaca-with-input" prompt. Set the instruction as "Translate the text from Welsh to English." (or vice versa). - Add the text you would like to translate (replacing "Input"), and hit "generate". Performance may start to degrade past a certain context window (especially if using the quantized models). Convert in chunks of under 1000 words to avoid these issues. ## LLM Evals Thanks to [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
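The Alpaca prompt structure above can be assembled programmatically before sending text to the model. A minimal sketch (the `build_alpaca_prompt` helper and its `direction` parameter are our own names; the template and instruction wording come from the card):

```python
# Sketch: wrap text in the Alpaca-with-input prompt format this model was
# fine-tuned on. Template structure is from the model card; the helper
# name and `direction` codes are our own illustration.

ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_alpaca_prompt(text, direction="en-cy"):
    """Build the translation prompt for English->Welsh or Welsh->English."""
    if direction == "en-cy":
        instruction = "Translate the text from English to Welsh."
    else:
        instruction = "Translate the text from Welsh to English."
    return ALPACA_TEMPLATE.format(instruction=instruction, input=text)

prompt = build_alpaca_prompt("Good morning, Wales.")
print(prompt)
```

The generated string ends at `### Response:`, leaving the model to complete the translation.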
[Full results available here.](https://gist.github.com/AndreasThinks/d998bc0a607ff5c7df09cd7333ed5c0c) | Model |AGIEval|TruthfulQA|Bigbench| |-------------------------------------------------------------------------------------------------------------|------:|---------:|-------:| |[mistral-7b-english-welsh-translate](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate)| 35.31| 54.5| 38.4| ## Training procedure [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: mistralai/Mistral-7B-Instruct-v0.3 model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false load_in_4bit: strict: false # huggingface repo datasets: - path: AndreasThinks/welsh-translation-instruction type: alpaca val_set_size: 0.04 output_dir: ./outputs/mistral-welsh hub_model_id: AndreasThinks/mistral-7b-english-welsh-translate sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: mistral-nemo-welsh wandb_entity: wandb_watch: wandb_name: mistral-nemo-welsh-v1 wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000005 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" 
height="32"/>](https://wandb.ai/andreasthinks/mistral-nemo-welsh/runs/syq2m3vr) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5781 | 0.0013 | 1 | 1.5514 | | 0.4427 | 0.2506 | 194 | 0.4841 | | 0.4142 | 0.5011 | 388 | 0.4271 | | 0.4001 | 0.7517 | 582 | 0.3996 | | 0.4155 | 1.0023 | 776 | 0.3828 | | 0.3178 | 1.2296 | 970 | 0.3792 | | 0.3156 | 1.4801 | 1164 | 0.3732 | | 0.3115 | 1.7307 | 1358 | 0.3678 | | 0.2722 | 1.9813 | 1552 | 0.3633 | | 0.2492 | 2.2089 | 1746 | 0.3809 | | 0.2159 | 2.4595 | 1940 | 0.3828 | | 0.2277 | 2.7100 | 2134 | 0.3810 | | 0.2435 | 2.9606 | 2328 | 0.3809 | | 0.158 | 3.1899 | 2522 | 0.3961 | | 0.1802 | 3.4404 | 2716 | 0.3966 | | 0.1679 | 3.6910 | 2910 | 0.3966 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
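The hyperparameters above imply an effective batch size of micro_batch_size × gradient_accumulation_steps = 2 × 4 = 8, matching the listed total_train_batch_size. The cosine scheduler with warmup can be sketched as follows (an illustrative approximation, not the exact transformers implementation; total_steps=2910 is taken from the last row of the training-results table):

```python
# Sketch of linear warmup followed by cosine decay to zero, with values
# mirroring this training run (peak lr 5e-06, 10 warmup steps, ~2910
# total steps). Illustration only -- not the exact library scheduler.
import math

def cosine_schedule_lr(step, peak_lr=5e-6, warmup_steps=10, total_steps=2910):
    """Learning rate at `step`: linear ramp to peak, then cosine decay."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_schedule_lr(10))    # peak learning rate
print(cosine_schedule_lr(2910))  # end of training, decayed to ~0
```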
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral-7b-english-welsh-translate - GGUF - Model creator: https://huggingface.co/AndreasThinks/ - Original model: https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral-7b-english-welsh-translate.Q2_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q2_K.gguf) | Q2_K | 2.54GB | | [mistral-7b-english-welsh-translate.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_XS.gguf) | IQ3_XS | 2.82GB | | [mistral-7b-english-welsh-translate.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_S.gguf) | IQ3_S | 2.97GB | | [mistral-7b-english-welsh-translate.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mistral-7b-english-welsh-translate.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mistral-7b-english-welsh-translate.Q3_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K.gguf) | Q3_K | 3.28GB | | [mistral-7b-english-welsh-translate.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | 
[mistral-7b-english-welsh-translate.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mistral-7b-english-welsh-translate.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ4_XS.gguf) | IQ4_XS | 3.68GB | | [mistral-7b-english-welsh-translate.Q4_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_0.gguf) | Q4_0 | 3.83GB | | [mistral-7b-english-welsh-translate.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mistral-7b-english-welsh-translate.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mistral-7b-english-welsh-translate.Q4_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K.gguf) | Q4_K | 4.07GB | | [mistral-7b-english-welsh-translate.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mistral-7b-english-welsh-translate.Q4_1.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q4_1.gguf) | Q4_1 | 4.24GB | | [mistral-7b-english-welsh-translate.Q5_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_0.gguf) | Q5_0 | 4.66GB | | 
[mistral-7b-english-welsh-translate.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K_S.gguf) | Q5_K_S | 4.66GB | | [mistral-7b-english-welsh-translate.Q5_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K.gguf) | Q5_K | 4.78GB | | [mistral-7b-english-welsh-translate.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mistral-7b-english-welsh-translate.Q5_1.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q5_1.gguf) | Q5_1 | 5.07GB | | [mistral-7b-english-welsh-translate.Q6_K.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q6_K.gguf) | Q6_K | 5.54GB | | [mistral-7b-english-welsh-translate.Q8_0.gguf](https://huggingface.co/RichardErkhov/AndreasThinks_-_mistral-7b-english-welsh-translate-gguf/blob/main/mistral-7b-english-welsh-translate.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 tags: - axolotl - generated_from_trainer model-index: - name: mistral-7b-english-welsh-translate results: [] --- # mistral-7b-english-welsh-translate This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the [Welsh Government Alpaca Welsh-English Instructions](https://huggingface.co/datasets/AndreasThinks/welsh-translation-instruction/blob/main/README.md) dataset. This model is trained for English-Welsh translation (in any direction), with a focus on government documents, using Markdown formatting. 
To ensure the highest quality translations, use the Alpaca instruction prompt format with the below structure. ``` ### Instruction: {instruction} ### Input: {input} ### Response: ``` Your instruction should be "Translate the text from English to Welsh." (or vice versa). The model is also available [quantized as GGUF](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate-GGUF). This version can be [tested in this interactive space](https://huggingface.co/spaces/AndreasThinks/welsh-english-translator). ## Running the model The model is intended to be run locally, ideally using [Text generation web UI](https://github.com/oobabooga/text-generation-webui) to ensure correct prompt structure. Start the UI as instructed for your system. - In the "Model" tab, download either this model or [the quantized version](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate-GGUF). Once the download is complete, load the model. - In the "Parameters" tab, under "Generation", set "auto_max_new_tokens" to maximum, and "Ban the eos_token" to True. In "Custom stopping strings", add "### Input" - In the "Notebook" tab, make sure you are using the "Alpaca-with-input" prompt. Set the instruction as "Translate the text from Welsh to English." (or vice versa). - Add the text you would like to translate (replacing "Input"), and hit "generate". Performance may start to degrade past a certain context window (especially if using the quantized models). Convert in chunks of under 1000 words to avoid these issues. ## LLM Evals Thanks to [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
[Full results available here.](https://gist.github.com/AndreasThinks/d998bc0a607ff5c7df09cd7333ed5c0c) | Model |AGIEval|TruthfulQA|Bigbench| |-------------------------------------------------------------------------------------------------------------|------:|---------:|-------:| |[mistral-7b-english-welsh-translate](https://huggingface.co/AndreasThinks/mistral-7b-english-welsh-translate)| 35.31| 54.5| 38.4| ## Training procedure [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: mistralai/Mistral-7B-Instruct-v0.3 model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false load_in_4bit: strict: false # huggingface repo datasets: - path: AndreasThinks/welsh-translation-instruction type: alpaca val_set_size: 0.04 output_dir: ./outputs/mistral-welsh hub_model_id: AndreasThinks/mistral-7b-english-welsh-translate sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: mistral-nemo-welsh wandb_entity: wandb_watch: wandb_name: mistral-nemo-welsh-v1 wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000005 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" 
height="32"/>](https://wandb.ai/andreasthinks/mistral-nemo-welsh/runs/syq2m3vr) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5781 | 0.0013 | 1 | 1.5514 | | 0.4427 | 0.2506 | 194 | 0.4841 | | 0.4142 | 0.5011 | 388 | 0.4271 | | 0.4001 | 0.7517 | 582 | 0.3996 | | 0.4155 | 1.0023 | 776 | 0.3828 | | 0.3178 | 1.2296 | 970 | 0.3792 | | 0.3156 | 1.4801 | 1164 | 0.3732 | | 0.3115 | 1.7307 | 1358 | 0.3678 | | 0.2722 | 1.9813 | 1552 | 0.3633 | | 0.2492 | 2.2089 | 1746 | 0.3809 | | 0.2159 | 2.4595 | 1940 | 0.3828 | | 0.2277 | 2.7100 | 2134 | 0.3810 | | 0.2435 | 2.9606 | 2328 | 0.3809 | | 0.158 | 3.1899 | 2522 | 0.3961 | | 0.1802 | 3.4404 | 2716 | 0.3966 | | 0.1679 | 3.6910 | 2910 | 0.3966 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{}
task
[ "TRANSLATION" ]
46,108
SRK2203/star
SRK2203
summarization
[ "code", "summarization", "en", "dataset:HuggingFaceM4/WebSight", "license:apache-2.0", "region:us" ]
2024-02-06T14:57:49Z
2024-02-06T15:01:21+00:00
0
0
--- datasets: - HuggingFaceM4/WebSight language: - en license: apache-2.0 pipeline_tag: summarization tags: - code ---
null
Non_BioNLP
{"datasets": ["HuggingFaceM4/WebSight"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "summarization", "tags": ["code"]}
task
[ "SUMMARIZATION" ]
46,109
dophys/bge-m3_finetuned_for_physics
dophys
sentence-similarity
[ "sentence-transformers", "tensorboard", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-22T10:32:20Z
2024-09-22T12:05:29+00:00
21
0
--- datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction widget: [] --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.12.3 - Sentence Transformers: 3.0.1 - Transformers: 4.42.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
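For intuition, the `model.similarity(...)` call in the usage snippet reduces, for this model's normalized embeddings, to pairwise cosine similarity, which can be computed by hand. A pure-Python sketch with toy 3-dimensional vectors standing in for the 1024-dimensional embeddings:

```python
# Sketch: pairwise cosine similarity, the quantity model.similarity(...)
# computes for normalized embeddings. The 3-d vectors below are toy
# stand-ins for the model's 1024-dimensional output.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

embeddings = [
    [1.0, 0.0, 0.0],
    [0.6, 0.8, 0.0],
    [0.0, 0.0, 1.0],
]
# Pairwise similarity matrix, analogous to model.similarity(embs, embs)
sims = [[cosine_similarity(a, b) for b in embeddings] for a in embeddings]
print(sims[0][1])  # ~0.6
```

Because the model ends with a `Normalize()` module, dot product and cosine similarity coincide for its embeddings.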
null
Non_BioNLP
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.12.3 - Sentence Transformers: 3.0.1 - Transformers: 4.42.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"], "widget": []}
task
[ "TEXT_CLASSIFICATION" ]
46,110
semindan/paws_x_xlm_r_only_es
semindan
text-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:paws-x", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-17T18:26:41Z
2023-01-07T14:32:14+00:00
8
0
--- datasets: - paws-x license: mit metrics: - accuracy tags: - text-classification - generated_from_trainer model-index: - name: paws_x_xlm_r_only_es results: - task: type: text-classification name: Text Classification dataset: name: paws-x type: paws-x config: es split: train args: es metrics: - type: accuracy value: 0.8995 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paws_x_xlm_r_only_es This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the paws-x dataset. It achieves the following results on the evaluation set: - Loss: 0.4565 - Accuracy: 0.8995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4688 | 1.0 | 386 | 0.3226 | 0.87 | | 0.2192 | 2.0 | 772 | 0.3058 | 0.8895 | | 0.1616 | 3.0 | 1158 | 0.2904 | 0.8955 | | 0.125 | 4.0 | 1544 | 0.3465 | 0.898 | | 0.1027 | 5.0 | 1930 | 0.3171 | 0.8955 | | 0.0837 | 6.0 | 2316 | 0.4022 | 0.896 | | 0.0716 | 7.0 | 2702 | 0.3625 | 0.9005 | | 0.0613 | 8.0 | 3088 | 0.4102 | 0.898 | | 0.052 | 9.0 | 3474 | 0.4257 | 0.8985 | | 0.0441 | 10.0 | 3860 | 0.4565 | 0.8995 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
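The linear scheduler with 100 warmup steps used above ramps the learning rate from 0 up to 2e-05, then decays it linearly back to 0 over the remaining steps (3860 total, per the training-results table). A sketch of that schedule (an illustration, not the exact transformers implementation):

```python
# Sketch of the "linear" scheduler with warmup: ramp 0 -> peak over
# `warmup_steps`, then decay linearly to 0 at `total_steps`. Defaults
# mirror this card (2e-05 peak, 100 warmup, 3860 total steps).

def linear_schedule_lr(step, peak_lr=2e-5, warmup_steps=100, total_steps=3860):
    """Learning rate at `step` under linear warmup + linear decay."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule_lr(50))    # halfway through warmup
print(linear_schedule_lr(100))   # peak learning rate
print(linear_schedule_lr(3860))  # end of training
```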
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paws_x_xlm_r_only_es This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the paws-x dataset. It achieves the following results on the evaluation set: - Loss: 0.4565 - Accuracy: 0.8995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4688 | 1.0 | 386 | 0.3226 | 0.87 | | 0.2192 | 2.0 | 772 | 0.3058 | 0.8895 | | 0.1616 | 3.0 | 1158 | 0.2904 | 0.8955 | | 0.125 | 4.0 | 1544 | 0.3465 | 0.898 | | 0.1027 | 5.0 | 1930 | 0.3171 | 0.8955 | | 0.0837 | 6.0 | 2316 | 0.4022 | 0.896 | | 0.0716 | 7.0 | 2702 | 0.3625 | 0.9005 | | 0.0613 | 8.0 | 3088 | 0.4102 | 0.898 | | 0.052 | 9.0 | 3474 | 0.4257 | 0.8985 | | 0.0441 | 10.0 | 3860 | 0.4565 | 0.8995 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
{"datasets": ["paws-x"], "license": "mit", "metrics": ["accuracy"], "tags": ["text-classification", "generated_from_trainer"], "model-index": [{"name": "paws_x_xlm_r_only_es", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "paws-x", "type": "paws-x", "config": "es", "split": "train", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.8995, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,111
gokuls/hBERTv1_mnli
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-02T11:29:37Z
2023-03-02T18:49:54+00:00
8
0
--- datasets: - glue language: - en metrics: - accuracy tags: - generated_from_trainer model-index: - name: hBERTv1_mnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE MNLI type: glue config: mnli split: validation_matched args: mnli metrics: - type: accuracy value: 0.3522172497965826 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv1_mnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.0982 - Accuracy: 0.3522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.1001 | 1.0 | 1534 | 1.0994 | 0.3182 | | 1.0988 | 2.0 | 3068 | 1.0990 | 0.3182 | | 1.0987 | 3.0 | 4602 | 1.0992 | 0.3274 | | 1.0987 | 4.0 | 6136 | 1.0986 | 0.3274 | | 1.0987 | 5.0 | 7670 | 1.0985 | 0.3545 | | 1.0986 | 6.0 | 9204 | 1.0987 | 0.3274 | | 1.105 | 7.0 | 10738 | 1.0986 | 0.3274 | | 1.1045 | 8.0 | 12272 | 1.0986 | 0.3182 | | 1.0988 | 9.0 | 13806 | 1.0983 | 0.3274 | | 1.0987 | 10.0 | 15340 | 1.0987 | 0.3182 | | 1.0987 | 11.0 | 16874 | 1.0991 | 0.3182 | | 1.0986 | 12.0 | 18408 | 1.0986 | 0.3545 | | 1.0986 | 13.0 | 19942 | 1.0982 
| 0.3545 | | 1.0986 | 14.0 | 21476 | 1.0989 | 0.3545 | | 1.0986 | 15.0 | 23010 | 1.0987 | 0.3182 | | 1.0986 | 16.0 | 24544 | 1.0986 | 0.3545 | | 1.0986 | 17.0 | 26078 | 1.0986 | 0.3545 | | 1.0986 | 18.0 | 27612 | 1.0983 | 0.3182 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hBERTv1_mnli This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.0982 - Accuracy: 0.3522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.1001 | 1.0 | 1534 | 1.0994 | 0.3182 | | 1.0988 | 2.0 | 3068 | 1.0990 | 0.3182 | | 1.0987 | 3.0 | 4602 | 1.0992 | 0.3274 | | 1.0987 | 4.0 | 6136 | 1.0986 | 0.3274 | | 1.0987 | 5.0 | 7670 | 1.0985 | 0.3545 | | 1.0986 | 6.0 | 9204 | 1.0987 | 0.3274 | | 1.105 | 7.0 | 10738 | 1.0986 | 0.3274 | | 1.1045 | 8.0 | 12272 | 1.0986 | 0.3182 | | 1.0988 | 9.0 | 13806 | 1.0983 | 0.3274 | | 1.0987 | 10.0 | 15340 | 1.0987 | 0.3182 | | 1.0987 | 11.0 | 16874 | 1.0991 | 0.3182 | | 1.0986 | 12.0 | 18408 | 1.0986 | 0.3545 | | 1.0986 | 13.0 | 19942 | 1.0982 | 0.3545 | | 1.0986 | 14.0 | 21476 | 1.0989 | 0.3545 | | 1.0986 | 15.0 | 23010 | 1.0987 | 0.3182 | | 1.0986 | 16.0 | 24544 | 1.0986 | 0.3545 | | 1.0986 | 17.0 | 26078 | 1.0986 | 0.3545 | | 1.0986 | 18.0 | 27612 | 1.0983 | 0.3182 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.10.1 - Tokenizers 0.13.2
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv1_mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "config": "mnli", "split": "validation_matched", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.3522172497965826, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,112
predibase/tldr_headline_gen
predibase
text-generation
[ "peft", "safetensors", "text-generation", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
2024-02-19T19:12:53Z
2024-02-21T19:13:42+00:00
69
2
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft pipeline_tag: text-generation --- Description: Headline generation given news content\ Original dataset: https://huggingface.co/datasets/JulesBelveze/tldr_news \ ---\ Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \ The adapter_category is Summarization and the name is News Headline Generation (tldr_news)\ ---\ Sample input: The following passage is content from a news report. Please summarize this passage in one sentence or less.\n\nPassage: Jeffrey Berns, CEO of Blockchains LLC, wants the Nevada government to allow companies like his to form local governments on land they own, granting them power over everything from schools to law enforcement. Berns envisions a city based on digital currencies and blockchain storage. His company is proposing to build a 15,000 home town 12 miles east of Reno. Nevada Lawmakers have responded with intrigue and skepticism. The proposed legislation has yet to be formally filed or discussed in public hearings.\n\nSummary: \ ---\ Sample output: Blockchain CEO wants to build a city in Nevada \ ---\ Try using this adapter yourself! ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.1" peft_model_id = "predibase/tldr_headline_gen" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ```
null
Non_BioNLP
Description: Headline generation given news content\ Original dataset: https://huggingface.co/datasets/JulesBelveze/tldr_news \ ---\ Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \ The adapter_category is Summarization and the name is News Headline Generation (tldr_news)\ ---\ Sample input: The following passage is content from a news report. Please summarize this passage in one sentence or less.\n\nPassage: Jeffrey Berns, CEO of Blockchains LLC, wants the Nevada government to allow companies like his to form local governments on land they own, granting them power over everything from schools to law enforcement. Berns envisions a city based on digital currencies and blockchain storage. His company is proposing to build a 15,000 home town 12 miles east of Reno. Nevada Lawmakers have responded with intrigue and skepticism. The proposed legislation has yet to be formally filed or discussed in public hearings.\n\nSummary: \ ---\ Sample output: Blockchain CEO wants to build a city in Nevada \ ---\ Try using this adapter yourself! ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.1" peft_model_id = "predibase/tldr_headline_gen" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ```
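The adapter-loading snippet above imports `AutoTokenizer` but never uses it, and stops before generation. A hedged sketch of completing the round trip — the prompt template is copied from the card's sample input, while the `build_prompt` helper, the generation parameters, and the guard flag are assumptions:

```python
def build_prompt(passage: str) -> str:
    """Wrap a news passage in the prompt template shown in the card's sample input."""
    return (
        "The following passage is content from a news report. "
        "Please summarize this passage in one sentence or less.\n\n"
        f"Passage: {passage}\n\nSummary: "
    )

# Generation requires multi-GB model downloads, so it is guarded behind a flag here.
RUN_GENERATION = False
if RUN_GENERATION:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-v0.1"
    peft_model_id = "predibase/tldr_headline_gen"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    model.load_adapter(peft_model_id)

    inputs = tokenizer(build_prompt("..."), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=32)
    # Decode only the newly generated tokens (the headline).
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

print(build_prompt("Example passage.").splitlines()[0])
```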
{"base_model": "mistralai/Mistral-7B-v0.1", "library_name": "peft", "pipeline_tag": "text-generation"}
task
[ "SUMMARIZATION" ]
46,113
dilarayavuz/olid-benign-bert-uncased
dilarayavuz
text-classification
[ "tensorboard", "safetensors", "bert", "autotrain", "text-classification", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "region:us" ]
2024-12-15T17:39:57Z
2024-12-15T17:42:15+00:00
48
0
--- base_model: google-bert/bert-base-uncased tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4172350764274597 f1: 0.7114252061248527 precision: 0.7383863080684596 recall: 0.6863636363636364 auc: 0.8708903229123818 accuracy: 0.8149546827794562
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4172350764274597 f1: 0.7114252061248527 precision: 0.7383863080684596 recall: 0.6863636363636364 auc: 0.8708903229123818 accuracy: 0.8149546827794562
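To turn this classifier's raw logits into a prediction, a minimal logits-to-label sketch in pure Python; the label names `benign`/`offensive` are assumptions and should be checked against the checkpoint's `id2label` mapping in `config.json`:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, labels=("benign", "offensive")):
    """Map a 2-way logit vector to (label, probability). Label order is an assumption."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

print(predict([2.1, -0.7]))  # e.g. ('benign', ~0.94)
```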
{"base_model": "google-bert/bert-base-uncased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
46,114
LEE-F/sdsadasd
LEE-F
translation
[ "diffusers", "translation", "ab", "license:apache-2.0", "region:us" ]
2024-01-26T08:57:41Z
2024-04-15T08:53:20+00:00
0
0
--- language: - ab library_name: diffusers license: apache-2.0 metrics: - bleurt pipeline_tag: translation ---
null
Non_BioNLP
{"language": ["ab"], "library_name": "diffusers", "license": "apache-2.0", "metrics": ["bleurt"], "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,115
NousResearch/Hermes-2-Theta-Llama-3-8B
NousResearch
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-05T09:14:29Z
2024-09-08T08:11:20+00:00
8,805
201
--- base_model: NousResearch/Hermes-2-Pro-Llama-3-8B datasets: - teknium/OpenHermes-2.5 language: - en license: apache-2.0 tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges widget: - example_title: Hermes 2 Pro Llama-3 Instruct Merge messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. model-index: - name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge results: [] --- # - Hermes-2 Θ Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png) ## Model Description Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit. Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds to form the new Hermes-2 Θ. ## Example Outputs ### Create New Mythos: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png) ### Chat with a Meta-Cognitive Entity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png) ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png) # Prompt Format Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. 
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. 
## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": 
"FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. 
- **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create its response, all you have to do is give a typical user prompt, and it will respond in JSON. 
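As the card notes, your inference code must parse the `<tool_call>` blocks out of the raw completion before dispatching functions. A minimal parsing sketch — the function name and regex-based approach are assumptions of this illustration; the official helpers live in the Hermes-Function-Calling repository linked in this card:

```python
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(completion: str):
    """Extract every <tool_call> JSON object from a model completion."""
    calls = []
    for match in TOOL_CALL_RE.finditer(completion):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            # Skip malformed blocks rather than crash the serving loop.
            continue
    return calls

sample = (
    '<tool_call>\n'
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    '</tool_call>'
)
print(parse_tool_calls(sample))
```

Each parsed object carries the `name` and `arguments` needed to invoke the corresponding function and return its result in a `tool` role turn.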
# Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png) ## GPT4All: ``` |    Task     |Version| Metric |Value |   |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge|      0|acc     |0.5529|±  |0.0145| |             |       |acc_norm|0.5870|±  |0.0144| |arc_easy     |      0|acc     |0.8371|±  |0.0076| |             |       |acc_norm|0.8144|±  |0.0080| |boolq        |      1|acc     |0.8599|±  |0.0061| |hellaswag    |      0|acc     |0.6133|±  |0.0049| |             |       |acc_norm|0.7989|±  |0.0040| |openbookqa   |      0|acc     |0.3940|±  |0.0219| |             |       |acc_norm|0.4680|±  |0.0223| |piqa         |      0|acc     |0.8063|±  |0.0092| |             |       |acc_norm|0.8156|±  |0.0090| |winogrande   |      0|acc     |0.7372|±  |0.0124| ``` Average: 72.59 ## AGIEval: ``` |             Task             |Version| Metric |Value |   |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270| |                              |       |acc_norm|0.2441|±  |0.0270| |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189| |                              |       |acc_norm|0.3840|±  |0.0191| |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278| |                              |       |acc_norm|0.2174|±  |0.0273| |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221| |                              |       |acc_norm|0.5373|±  |0.0221| |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289| |                              |       |acc_norm|0.6357|±  |0.0294| |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295| |                              |       |acc_norm|0.7379|±  |0.0307| |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347| |                              |       |acc_norm|0.4223|±  |0.0345| 
|agieval_sat_math              |      0|acc     |0.4000|±  |0.0331| |                              |       |acc_norm|0.3455|±  |0.0321| ``` Average: 44.05 ## BigBench: ``` |                      Task                      |Version|       Metric        |Value |   |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356| |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247| |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290| |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224| |                                                |       |exact_str_match      |0.0000|±  |0.0000| |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205| |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153| |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288| |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220| |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158| |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102| |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236| |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133| |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345| |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159| |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153| 
|bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118| |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088| |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288| ``` Average: 44.13 **IFEval**: 72.64 **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875 # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Theta-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and 
building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Theta-Llama-3-8B, url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B}, title={Hermes-2-Theta-Llama-3-8B}, author={Teknium and Charles Goddard and interstellarninja and theemozilla and karan4d and huemin_art} } ```
null
Non_BioNLP
# - Hermes-2 Θ Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png) ## Model Description Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit. Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds to form the new Hermes-2 Θ. ## Example Outputs ### Create New Mythos: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png) ### Chat with a Meta-Cognitive Entity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png) ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png) # Prompt Format Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. 
You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant 
<tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $611,384,164,352 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 49.60 - **Price-to-Book Ratio (PB Ratio)**: 9.76 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.43 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year.
This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, so that it responds with **only** a json object, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, the model will follow that json format in its response; all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png) ## GPT4All: ``` |    Task     |Version| Metric |Value |   |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge|      0|acc     |0.5529|±  |0.0145| |             |       |acc_norm|0.5870|±  |0.0144| |arc_easy     |      0|acc     |0.8371|±  |0.0076| |             |       |acc_norm|0.8144|±  |0.0080| |boolq        |      1|acc     |0.8599|±  |0.0061| |hellaswag    |      0|acc     |0.6133|±  |0.0049| |             |       |acc_norm|0.7989|±  |0.0040| |openbookqa   |      0|acc     |0.3940|±  |0.0219| |             |       |acc_norm|0.4680|±  |0.0223| |piqa         |      0|acc     |0.8063|±  |0.0092| |             |       |acc_norm|0.8156|±  |0.0090| |winogrande   |      0|acc     |0.7372|±  |0.0124| ``` Average: 72.59 ## AGIEval: ``` |             Task             |Version| Metric |Value |   |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270| |                              |      
 |acc_norm|0.2441|±  |0.0270| |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189| |                              |       |acc_norm|0.3840|±  |0.0191| |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278| |                              |       |acc_norm|0.2174|±  |0.0273| |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221| |                              |       |acc_norm|0.5373|±  |0.0221| |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289| |                              |       |acc_norm|0.6357|±  |0.0294| |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295| |                              |       |acc_norm|0.7379|±  |0.0307| |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347| |                              |       |acc_norm|0.4223|±  |0.0345| |agieval_sat_math              |      0|acc     |0.4000|±  |0.0331| |                              |       |acc_norm|0.3455|±  |0.0321| ``` Average: 44.05 ## BigBench: ``` |                      Task                      |Version|       Metric        |Value |   |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356| |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247| |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290| |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224| |                                                |       |exact_str_match      |0.0000|±  |0.0000| |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205| |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153| |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  
|0.0288| |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220| |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158| |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102| |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236| |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133| |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345| |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159| |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153| |bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118| |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088| |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288| ``` Average: 44.13 **IFEval**: 72.64 **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875 # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. 
```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Theta-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Θ. It does not support function calling - for that, use our github repo.
It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Theta-Llama-3-8B, url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B}, title={Hermes-2-Theta-Llama-3-8B}, author={Teknium and Charles Goddard and interstellarninja and theemozilla and karan4d and huemin_art} } ```
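The function-calling walkthrough in this card notes that your inference code must parse the model's `<tool_call>` block. A minimal, self-contained sketch of that step (a hypothetical helper, not the code from the Hermes-Function-Calling repo) might look like:

```python
# Hypothetical helper (not the Hermes-Function-Calling repo's code): pull the
# JSON payload out of each <tool_call>...</tool_call> block in generated text.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text):
    # Each match group is the raw JSON object between the XML-style tags.
    return [json.loads(m.group(1)) for m in TOOL_CALL_RE.finditer(text)]

generation = (
    "<tool_call>\n"
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    "</tool_call><|im_end|>"
)
calls = parse_tool_calls(generation)
print(calls[0]["name"])       # get_stock_fundamentals
print(calls[0]["arguments"])  # {'symbol': 'TSLA'}
```

From the returned dict you can dispatch to the matching Python function and wrap its result in a `<tool_response>` turn, as shown in the function-calling section of this card.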
{"base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges"], "widget": [{"example_title": "Hermes 2 Pro Llama-3 Instruct Merge", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-Instruct-8B-Merge", "results": []}]}
task
[ "TRANSLATION" ]
46,116
LoneStriker/TowerInstruct-7B-v0.1-6.0bpw-h6-exl2
LoneStriker
translation
[ "transformers", "safetensors", "llama", "text-generation", "translation", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-01-11T19:20:34Z
2024-01-11T19:22:49+00:00
3
0
--- language: - en - de - fr - zh - pt - nl - ru - ko - it - es license: cc-by-nc-4.0 metrics: - comet pipeline_tag: translation --- # Model Card for TowerInstruct-7B-v0.1 ## Model Details ### Model Description TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources: - Translation - Automatic Post-Editing - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=False) print(outputs[0]["generated_text"]) # <|im_start|>user # Translate the following text from Portuguese into English. # Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução. # English:<|im_end|> # <|im_start|>assistant # A group of researchers has launched a new model for translation-related tasks.
``` ### Out-of-Scope Use The model is not guaranteed to perform for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. ## Bias, Risks, and Limitations TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Prompt Format TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below: ``` <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` ### Supervised tasks The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal. ## Training Details ### Training Data Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). #### Training Hyperparameters The following hyperparameters were used during training: - total_train_batch_size: 256 - learning_rate: 7e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - weight_decay: 0.01 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - num_epochs: 4 - max_seq_length: 2048 ## Citation To be completed. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
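The ChatML prompt format described in this card can also be built by hand when a tokenizer is not available. A rough sketch follows (a hypothetical helper, not part of the Tower codebase; the exact whitespace produced by the model's real chat template may differ slightly):

```python
# Illustrative sketch (hypothetical helper, not part of the Tower codebase):
# build a ChatML prompt by hand, mirroring what
# apply_chat_template(..., add_generation_prompt=True) produces.
def build_chatml_prompt(messages, add_generation_prompt=True):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Cue the model to continue as the assistant.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {
        "role": "user",
        "content": "Translate the following text from Portuguese into English.\n"
                   "Portuguese: Olá, mundo.\nEnglish:",
    }
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

Where the tokenizer is available, prefer `tokenizer.apply_chat_template`, since it is the source of truth for the template.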
null
Non_BioNLP
# Model Card for TowerInstruct-7B-v0.1 ## Model Details ### Model Description TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. - **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources: - Translation - Automatic Post-Editing - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=False) print(outputs[0]["generated_text"]) # <|im_start|>user # Translate the following text from Portuguese into English. # Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução. # English:<|im_end|> # <|im_start|>assistant # A group of researchers has launched a new model for translation-related tasks. ``` ### Out-of-Scope Use The model is not guaranteed to perform for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. ## Bias, Risks, and Limitations TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Prompt Format TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts.
An example follows below: ``` <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` ### Supervised tasks The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal. ## Training Details ### Training Data Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). #### Training Hyperparameters The following hyperparameters were used during training: - total_train_batch_size: 256 - learning_rate: 7e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - weight_decay: 0.01 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - num_epochs: 4 - max_seq_length: 2048 ## Citation To be completed. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
{"language": ["en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es"], "license": "cc-by-nc-4.0", "metrics": ["comet"], "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,117
QuantFactory/emma-500-llama2-7b-GGUF
QuantFactory
null
[ "transformers", "gguf", "dataset:MaLA-LM/mala-monolingual-split", "arxiv:2409.17892", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "license:llama2", "endpoints_compatible", "region:us" ]
2024-10-08T08:59:34Z
2024-10-08T10:25:17+00:00
120
3
--- base_model: - meta-llama/Llama-2-7b-hf datasets: - MaLA-LM/mala-monolingual-split library_name: transformers license: llama2 --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/emma-500-llama2-7b-GGUF This is a quantized version of [MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b) created using llama.cpp. # Original Model Card # EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models ## Model Description **EMMA-500** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 2 7B** architecture. Leveraging the **MaLA Corpus**, which spans over 500 languages and 74 billion tokens, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, open-ended generation, and text classification. **EMMA-500** outperforms other Llama 2-based models in diverse multilingual settings while maintaining robustness in specialized tasks. --- ## Model Details - **Architecture**: Built on Llama 2 7B with enhanced language adaptation through continual pre-training. - **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each). - **Data Mix**: A diverse mix of text from domains like code, books, instruction data, and more. - **Key Tasks**: Commonsense reasoning, machine translation, text classification, natural language inference, code generation, and open-ended generation.
### Data Access - [MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529) - [PolyWrite Benchmark](https://huggingface.co/datasets/MaLA-LM/PolyWrite) --- ## Usage You can use **EMMA-500** for multilingual text generation. Below is an example to generate text using the model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MaLA-LM/emma-500-llama2-7b" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) input_text = "Once upon a time" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` --- ## Model Performance **EMMA-500** was evaluated across multiple benchmarks and tasks, demonstrating: - **Lowest negative log-likelihood** in intrinsic evaluations. - Significant improvements in **commonsense reasoning**, **machine translation**, and **open-ended generation**. - **Outperformed** all Llama 2-based models in **text classification** and **natural language inference**. - Enhanced performance in **code generation** and **machine reading comprehension (MRC)**. Challenges remain in low-resource languages, where the model tends to have higher **Self-BLEU** scores, indicating reduced output diversity. --- ## Citation ``` @article{ji2024emma500enhancingmassivelymultilingual, title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models}, author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow}, year={2024}, journal={arXiv preprint 2409.17892}, url={https://arxiv.org/abs/2409.17892}, } ``` ## Acknowledgements We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. 
Their efforts are invaluable in supporting linguistic diversity in AI research. This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).
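The performance notes in this card mention higher Self-BLEU scores as a sign of reduced output diversity. A simplified, self-contained illustration of the idea follows (not the paper's evaluation code; real Self-BLEU uses full BLEU with smoothing and a brevity penalty):

```python
# Simplified Self-BLEU illustration (hypothetical helper, not the paper's
# evaluation code): each output is scored with an n-gram-precision BLEU
# approximation against the remaining outputs; higher averages mean the
# outputs repeat each other, i.e. lower diversity.
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    cand_counts = Counter(ngrams(candidate, n))
    if not cand_counts:
        return 0.0
    # Clip each candidate n-gram count by its max count in any reference.
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

def self_bleu(outputs, max_n=2):
    scores = []
    for i, cand in enumerate(outputs):
        refs = outputs[:i] + outputs[i + 1:]
        precisions = [modified_precision(cand, refs, n) for n in range(1, max_n + 1)]
        if min(precisions) == 0:
            scores.append(0.0)
            continue
        # Geometric mean of the n-gram precisions.
        scores.append(math.exp(sum(math.log(p) for p in precisions) / max_n))
    return sum(scores) / len(scores)

identical = [["the", "cat", "sat"]] * 3
diverse = [["the", "cat", "sat"], ["dogs", "run", "fast"], ["birds", "fly", "south"]]
print(self_bleu(identical))  # 1.0 — no diversity at all
print(self_bleu(diverse) < self_bleu(identical))  # True
```

A score near 1.0 means the outputs largely repeat each other; lower is more diverse.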
null
Non_BioNLP
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/emma-500-llama2-7b-GGUF This is a quantized version of [MaLA-LM/emma-500-llama2-7b](https://huggingface.co/MaLA-LM/emma-500-llama2-7b) created using llama.cpp. # Original Model Card # EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models ## Model Description **EMMA-500** is a state-of-the-art multilingual language model designed to improve language representation, especially in low-resource languages, through continual pre-training on the **Llama 2 7B** architecture. Leveraging the **MaLA Corpus**, which spans over 500 languages and 74 billion tokens, EMMA-500 excels in multilingual tasks like commonsense reasoning, machine translation, open-ended generation, and text classification. **EMMA-500** outperforms other Llama 2-based models in diverse multilingual settings while maintaining robustness in specialized tasks. --- ## Model Details - **Architecture**: Built on Llama 2 7B with enhanced language adaptation through continual pre-training. - **Languages**: Supports **546 languages** with substantial training data (over 100k tokens each). - **Data Mix**: A diverse mix of text from domains like code, books, instruction data, and more. - **Key Tasks**: Commonsense reasoning, machine translation, text classification, natural language inference, code generation, and open-ended generation. ### Data Access - [MaLA Corpus](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529) - [PolyWrite Benchmark](https://huggingface.co/datasets/MaLA-LM/PolyWrite) --- ## Usage You can use **EMMA-500** for multilingual text generation.
Below is an example to generate text using the model: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MaLA-LM/emma-500-llama2-7b" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) input_text = "Once upon a time" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` --- ## Model Performance **EMMA-500** was evaluated across multiple benchmarks and tasks, demonstrating: - **Lowest negative log-likelihood** in intrinsic evaluations. - Significant improvements in **commonsense reasoning**, **machine translation**, and **open-ended generation**. - **Outperformed** all Llama 2-based models in **text classification** and **natural language inference**. - Enhanced performance in **code generation** and **machine reading comprehension (MRC)**. Challenges remain in low-resource languages, where the model tends to have higher **Self-BLEU** scores, indicating reduced output diversity. --- ## Citation ``` @article{ji2024emma500enhancingmassivelymultilingual, title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models}, author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow}, year={2024}, journal={arXiv preprint 2409.17892}, url={https://arxiv.org/abs/2409.17892}, } ``` ## Acknowledgements We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. Their efforts are invaluable in supporting linguistic diversity in AI research. This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. 
It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).
{"base_model": ["meta-llama/Llama-2-7b-hf"], "datasets": ["MaLA-LM/mala-monolingual-split"], "library_name": "transformers", "license": "llama2"}
task
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
46,118
ieq/IEQ-BERT
ieq
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-06T04:13:19Z
2025-03-02T23:02:41+00:00
24
0
--- license: mit --- # Quick Summary <!-- Provide a quick summary of what the model is/does. --> IEQ-BERT classifies building occupant feedback concerning indoor environmental quality. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> The IEQ-BERT model is a fine-tuned variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture, adapted for the task of multilabel text classification in the context of Indoor Environmental Quality (IEQ). IEQ refers to the physical characteristics of indoor spaces, such as thermal comfort, acoustic comfort, visual comfort, and indoor air quality (IAQ), which directly impact occupant well-being, productivity, and satisfaction. The IEQ-BERT model is designed to analyze and classify occupant feedback into one or more of the following categories: "Acoustic," "IAQ," "Thermal," "Visual," and "No IEQ." The "No IEQ" category is reserved for feedback that uses language resembling the IEQ domain but does not pertain to indoor environmental quality, ensuring the model can distinguish between relevant and irrelevant content. - **Developed by:** Researchers at Deakin University (Australia) and Northwestern University (US) - **Funded by:** Deakin University, School of Architecture and Built Environment - **Model type:** Multilabel Text Classification - **Language:** English - **Finetuned from model:** bert-base-uncased ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** This model repository - **Paper:** Sadick, A.-M., & Chinazzo, G. (2025). What did the occupant say? Fine-tuning and evaluating a language model for efficient analysis of multi-domain indoor environmental quality feedback. Building and Environment, 112735. 
https://doi.org/10.1016/j.buildenv.2025.112735 - **Demo:** https://ieq-ieq-text-classifier-app.hf.space ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model has a wide range of potential use cases, including: - **Building Design and Architecture**: Analyzing feedback to identify recurring issues related to thermal comfort, lighting, or acoustics, which can inform design improvements to enhance occupant satisfaction. - **Building Management and Facility Planning**: Monitoring feedback in real-time to address specific IEQ concerns, such as HVAC performance or lighting issues, and prioritize interventions. - **Post-Occupancy Evaluation (POE)**: Classifying open-ended feedback from occupant surveys to assess the effectiveness of building designs and operational strategies. - **Integration into Building Automation Systems**: Processing occupant feedback alongside sensor data to provide actionable insights for optimizing indoor environments. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Please use this model for the intended purposes stated above. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("ieq/IEQ-BERT") model = AutoModelForSequenceClassification.from_pretrained("ieq/IEQ-BERT") ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> The training data consists of 14,622 filtered texts from Glassdoor job reviews and X posts about work environments during the COVID-19 pandemic. Five labellers manually labeled each feedback item using Labelbox to ensure accuracy, and they further checked for consistency using Cleanlab Studio. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> - **Accuracy**: 0.93 - **F1**: 0.93 ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you use this model, please cite the journal article below: **APA:** Sadick, A.-M., & Chinazzo, G. (2025). What did the occupant say? Fine-tuning and evaluating a large language model for efficient analysis of multi-domain indoor environmental quality feedback. Building and Environment, 112735. https://doi.org/10.1016/j.buildenv.2025.112735 ## Model Card Contact Dr Abdul-Manan Sadick - [email protected] Dr Giorgia Chinazzo - [email protected]
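The getting-started snippet above loads the tokenizer and model but stops short of inference. For a multilabel head like this one, logits are typically passed through a sigmoid and thresholded independently per label, rather than through a softmax. The sketch below shows only that post-processing step in pure Python; the label order and the 0.5 threshold are illustrative assumptions, not confirmed by the card (check `model.config.id2label` on the actual checkpoint).

```python
import math

# Assumed label order -- verify against model.config.id2label.
LABELS = ["Acoustic", "IAQ", "Thermal", "Visual", "No IEQ"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_multilabel(logits, threshold=0.5):
    """Turn raw multilabel logits into label names.

    Unlike softmax classification, each label is scored independently,
    so a comment like "too hot and too noisy" can activate both
    "Thermal" and "Acoustic" at the same time.
    """
    probs = [sigmoid(z) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

# Strong Acoustic and Thermal signal, everything else negative.
print(decode_multilabel([2.1, -3.0, 1.4, -2.2, -4.0]))  # -> ['Acoustic', 'Thermal']
```

In a real pipeline the `logits` would come from `model(**tokenizer(text, return_tensors="pt")).logits` using the objects loaded in the card's snippet.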
null
Non_BioNLP
# Quick Summary <!-- Provide a quick summary of what the model is/does. --> IEQ-BERT classifies building occupant feedback concerning indoor environmental quality. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> The IEQ-BERT model is a fine-tuned variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture, adapted for the task of multilabel text classification in the context of Indoor Environmental Quality (IEQ). IEQ refers to the physical characteristics of indoor spaces, such as thermal comfort, acoustic comfort, visual comfort, and indoor air quality (IAQ), which directly impact occupant well-being, productivity, and satisfaction. The IEQ-BERT model is designed to analyze and classify occupant feedback into one or more of the following categories: "Acoustic," "IAQ," "Thermal," "Visual," and "No IEQ." The "No IEQ" category is reserved for feedback that uses language resembling the IEQ domain but does not pertain to indoor environmental quality, ensuring the model can distinguish between relevant and irrelevant content. - **Developed by:** Researchers at Deakin University (Australia) and Northwestern University (US) - **Funded by:** Deakin University, School of Architecture and Built Environment - **Model type:** Multilabel Text Classification - **Language:** English - **Finetuned from model:** bert-base-uncased ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** This model repository - **Paper:** Sadick, A.-M., & Chinazzo, G. (2025). What did the occupant say? Fine-tuning and evaluating a language model for efficient analysis of multi-domain indoor environmental quality feedback. Building and Environment, 112735. 
https://doi.org/10.1016/j.buildenv.2025.112735 - **Demo:** https://ieq-ieq-text-classifier-app.hf.space ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model has a wide range of potential use cases, including: - **Building Design and Architecture**: Analyzing feedback to identify recurring issues related to thermal comfort, lighting, or acoustics, which can inform design improvements to enhance occupant satisfaction. - **Building Management and Facility Planning**: Monitoring feedback in real-time to address specific IEQ concerns, such as HVAC performance or lighting issues, and prioritize interventions. - **Post-Occupancy Evaluation (POE)**: Classifying open-ended feedback from occupant surveys to assess the effectiveness of building designs and operational strategies. - **Integration into Building Automation Systems**: Processing occupant feedback alongside sensor data to provide actionable insights for optimizing indoor environments. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Please use this model for the intended purposes stated above. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("ieq/IEQ-BERT") model = AutoModelForSequenceClassification.from_pretrained("ieq/IEQ-BERT") ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> The training data consists of 14,622 filtered texts from Glassdoor job reviews and X posts about work environments during the COVID-19 pandemic. Five labellers manually labeled each feedback item using Labelbox to ensure accuracy, and they further checked for consistency using Cleanlab Studio. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> - **Accuracy**: 0.93 - **F1**: 0.93 ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you use this model, please cite the journal article below: **APA:** Sadick, A.-M., & Chinazzo, G. (2025). What did the occupant say? Fine-tuning and evaluating a large language model for efficient analysis of multi-domain indoor environmental quality feedback. Building and Environment, 112735. https://doi.org/10.1016/j.buildenv.2025.112735 ## Model Card Contact Dr Abdul-Manan Sadick - [email protected] Dr Giorgia Chinazzo - [email protected]
{"license": "mit"}
task
[ "TEXT_CLASSIFICATION" ]
46,119
tigindundar4/bert-base-uncased-finetuned-cola
tigindundar4
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-06T18:57:43Z
2023-05-07T14:49:26+00:00
22
0
--- datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.5108235781406687 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Matthews Correlation: 0.5108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4908 | 1.0 | 535 | 0.4659 | 0.5108 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
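The card's headline metric is the Matthews correlation coefficient, the standard metric for CoLA. For reference, here is a minimal pure-Python implementation of MCC for binary labels; it is a didactic sketch, and in practice `sklearn.metrics.matthews_corrcoef` computes the same quantity.

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary labels in {0, 1}.

    Ranges from -1 (total disagreement) through 0 (no better than
    chance) to +1 (perfect prediction); unlike accuracy, it stays
    informative on imbalanced datasets like CoLA.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        # Degenerate case (a constant predictor or constant labels).
        return 0.0
    return (tp * tn - fp * fn) / denom
```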
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Matthews Correlation: 0.5108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4908 | 1.0 | 535 | 0.4659 | 0.5108 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5108235781406687, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,120
NavinspireIA/NavinLLM-400M
NavinspireIA
text-generation
[ "text-generation", "arxiv:2406.07887", "license:apache-2.0", "region:us" ]
2024-09-04T15:47:26Z
2024-09-06T19:30:40+00:00
0
3
--- license: apache-2.0 pipeline_tag: text-generation --- ## Model Overview NavinLLM is a bilingual (English/French) Mamba2-Hybrid model that integrates Mamba2, attention, and hybrid layers, designed with a sequence length of 4K tokens. The model training methodology is based on the techniques outlined in [“An Empirical Study of Mamba-based Language Models”](https://arxiv.org/abs/2406.07887). Each version of the NavinLLM models has been trained on varying amounts of data, ranging from 10 billion tokens for the smallest model (200M parameters) to 800 billion tokens for the largest (7B parameters). These models are provided as base models without fine-tuning, except for the instruct version. ## Versions `NavinLLM-200M`: 200M Parameters Hybrid trained on 10B tokens (bilingual). `NavinLLM-400M`: 400M Parameters Hybrid trained on 20B tokens (bilingual). `NavinLLM-2B`: 2B Parameters pure SSM trained on 200B tokens (French). `NavinLLM-7B`: 7B Parameters Hybrid trained on 800B tokens (bilingual). `NavinLLM-7B-Instruct`: Fine-tuned on several tasks (Summarization / QA / Translation...) ## Tokenizer NavinLLM was trained using a custom SentencePiece Tokenizer, with two versions available: a 32k token vocabulary for more efficient representation, and a 52k token vocabulary designed to accommodate a broader range of tokens and linguistic variability. ## Datasets NavinLLM was trained on proprietary datasets, consisting of both publicly available data and synthetically generated content.
null
Non_BioNLP
## Model Overview NavinLLM is a bilingual (English/French) Mamba2-Hybrid model that integrates Mamba2, attention, and hybrid layers, designed with a sequence length of 4K tokens. The model training methodology is based on the techniques outlined in [“An Empirical Study of Mamba-based Language Models”](https://arxiv.org/abs/2406.07887). Each version of the NavinLLM models has been trained on varying amounts of data, ranging from 10 billion tokens for the smallest model (200M parameters) to 800 billion tokens for the largest (7B parameters). These models are provided as base models without fine-tuning, except for the instruct version. ## Versions `NavinLLM-200M`: 200M Parameters Hybrid trained on 10B tokens (bilingual). `NavinLLM-400M`: 400M Parameters Hybrid trained on 20B tokens (bilingual). `NavinLLM-2B`: 2B Parameters pure SSM trained on 200B tokens (French). `NavinLLM-7B`: 7B Parameters Hybrid trained on 800B tokens (bilingual). `NavinLLM-7B-Instruct`: Fine-tuned on several tasks (Summarization / QA / Translation...) ## Tokenizer NavinLLM was trained using a custom SentencePiece Tokenizer, with two versions available: a 32k token vocabulary for more efficient representation, and a 52k token vocabulary designed to accommodate a broader range of tokens and linguistic variability. ## Datasets NavinLLM was trained on proprietary datasets, consisting of both publicly available data and synthetically generated content.
{"license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "TRANSLATION", "SUMMARIZATION" ]
46,121
MultiBertGunjanPatrick/multiberts-seed-1-180k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-1", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T04:59:37+00:00
101
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-1 --- # MultiBERTs Seed 1 Checkpoint 180k (uncased) Seed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. 
The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-180k') model = BertModel.from_pretrained("multiberts-seed-1-180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. 
## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. 
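The 15% / 80–10–10 masking procedure described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the original preprocessing code; the toy vocabulary is an assumption, and the real pipeline samples from the full 30,000-token WordPiece vocabulary.

```python
import random

# Toy vocabulary for illustration only.
VOCAB = ["[MASK]", "the", "cat", "sat", "on", "mat", "dog", "ran"]

def mask_tokens(tokens, rng, mask_prob=0.15):
    """BERT-style MLM corruption: select ~15% of positions; of those,
    80% become [MASK], 10% become a random token, 10% stay unchanged.

    Returns (corrupted, labels), where labels hold the original token
    at selected positions and None elsewhere. (The real preprocessing
    also ensures the random replacement differs from the original
    token; that check is omitted here for brevity.)
    """
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)          # this position contributes to the MLM loss
            r = rng.random()
            if r < 0.8:
                corrupted.append("[MASK]")
            elif r < 0.9:
                corrupted.append(rng.choice(VOCAB))
            else:
                corrupted.append(tok)   # kept as is, but still predicted
        else:
            labels.append(None)
            corrupted.append(tok)
    return corrupted, labels

corrupted, labels = mask_tokens("the cat sat on the mat".split(), random.Random(0))
```

Keeping 10% of selected tokens unchanged forces the model to produce useful representations for every input position, not only where it sees `[MASK]`.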
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
null
Non_BioNLP
# MultiBERTs Seed 1 Checkpoint 180k (uncased) Seed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. 
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-180k') model = BertModel.from_pretrained("multiberts-seed-1-180k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. 
## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"]}
task
[ "QUESTION_ANSWERING" ]
46,122
arcAman07/distilbert-base-uncased-finetuned-emotion
arcAman07
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-25T17:00:09Z
2022-05-25T17:08:01+00:00
12
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.924 name: Accuracy - type: f1 value: 0.9240598378254522 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8294 | 1.0 | 250 | 0.3209 | 0.9025 | 0.9001 | | 0.2536 | 2.0 | 500 | 0.2222 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8294 | 1.0 | 250 | 0.3209 | 0.9025 | 0.9001 | | 0.2536 | 2.0 | 500 | 0.2222 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.924, "name": "Accuracy"}, {"type": "f1", "value": 0.9240598378254522, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,123
KarelDO/bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_43
KarelDO
null
[ "transformers", "pytorch", "bert", "generated_from_trainer", "en", "dataset:OpenTable", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2022-10-14T18:47:30Z
2022-10-14T18:52:56+00:00
31
0
--- datasets: - OpenTable language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_43 results: - task: type: text-classification name: Text Classification dataset: name: OpenTable OPENTABLE-ABSA type: OpenTable args: opentable-absa metrics: - type: accuracy value: 0.7962056303549572 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_43 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.8320 - Accuracy: 0.7962 - Macro-f1: 0.7933 - Weighted-macro-f1: 0.7964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_43 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.8320 - Accuracy: 0.7962 - Macro-f1: 0.7933 - Weighted-macro-f1: 0.7964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
{"datasets": ["OpenTable"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased.CEBaB_confounding.food_service_positive.absa.5-class.seed_43", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "OpenTable OPENTABLE-ABSA", "type": "OpenTable", "args": "opentable-absa"}, "metrics": [{"type": "accuracy", "value": 0.7962056303549572, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,124
cgus/granite-3.2-8b-instruct-preview-iMat-GGUF
cgus
text-generation
[ "gguf", "language", "granite-3.2", "text-generation", "arxiv:0000.00000", "base_model:ibm-granite/granite-3.2-8b-instruct-preview", "base_model:quantized:ibm-granite/granite-3.2-8b-instruct-preview", "license:apache-2.0", "region:us", "imatrix", "conversational" ]
2025-02-09T15:19:27Z
2025-02-09T19:24:55+00:00
237
0
--- base_model: - ibm-granite/granite-3.2-8b-instruct-preview license: apache-2.0 pipeline_tag: text-generation tags: - language - granite-3.2 inference: false --- # Granite-3.2-8B-Instruct-Preview-iMat-GGUF Original model: [Granite-3.2-8B-Instruct-Preview](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct-preview) Made by: [Granite Team, IBM](https://huggingface.co/ibm-granite) ## Quantization notes Made with llama.cpp-b4608 with imatrix file based on exllamav2 default dataset. These quants should work with lots of apps with llama.cpp engine: Jan, KoboldCpp, LM Studio, Text-Generation-WebUI, etc. # Original model card # Granite-3.2-8B-Instruct-Preview **Model Summary:** Granite-3.2-8B-Instruct-Preview is an early release of an 8B long-context model fine-tuned for enhanced reasoning (thinking) capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required. <!-- is preview release of a finetuned mdpeis a 8B parameter long-context instruct model finetuned from Granite-3.1-8B-Instruct using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets tailored for solving long context problems. This model is finetuned to reason developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. 
--> - **Developers:** Granite Team, IBM - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Release Date**: February 7th, 2025 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages. **Intended Use:** The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications. **Capabilities** * **Thinking** * Summarization * Text classification * Text extraction * Question-answering * Retrieval Augmented Generation (RAG) * Code related tasks * Function-calling tasks * Multilingual dialog use cases * Long-context tasks including long document/meeting summarization, long document QA, etc. **Generation:** This is a simple example of how to use Granite-3.2-8B-Instruct-Preview model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the snippet from the section that is relevant for your use case. 
```python from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed import torch model_path="ibm-granite/granite-3.2-8b-instruct-preview" device="cuda" model = AutoModelForCausalLM.from_pretrained( model_path, device_map=device, torch_dtype=torch.bfloat16, ) tokenizer = AutoTokenizer.from_pretrained( model_path ) conv = [{"role": "user", "content":"How do I develop a skill?"}] input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device) set_seed(42) output = model.generate( **input_ids, max_new_tokens=8192, ) prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True) print(prediction) ``` **Example Outputs** - thinking=True ```md Here is my thought process: 1. Understand the user's intent: The user is seeking guidance on how to develop a new skill. 2. Break down the process: Skills development involves several stages - identification, planning, practice, feedback, and refinement. 3. Consider different aspects: Discuss various methods, mindset, and resources needed for effective skill acquisition. 4. Provide actionable steps: Offer a structured approach that the user can follow. Here is my response: Developing a new skill is a systematic process that involves several key steps and a dedicated mindset. Here's a comprehensive guide to help you effectively cultivate a new skill: ### 1. **Identify the Skill** - **Clarity of Purpose**: Clearly define what skill you want to acquire. Be specific about why you want this skill and how it aligns with your goals or interests. - **Research**: Understand the basics of the skill. Read about it, watch tutorials, or talk to people who are already proficient. This initial research will give you a realistic understanding of what the skill entails. ### 2. **Set Clear Goals** - **SMART Goals**: Make sure your goals are Specific, Measurable, Achievable, Relevant, and Time-bound. 
For example, instead of saying "I want to learn programming," set a goal like "I will complete a beginner's Python course within three months." - **Break Down**: If the skill is complex, break it down into smaller, manageable milestones. ### 3. **Create a Learning Plan** - **Resources**: Identify the best resources for learning. These could be online courses, books, mentors, workshops, or practical projects. - **Schedule**: Dedicate time in your routine for skill practice. Consistency is crucial in skill development. Treat your learning time as non-negotiable. ### 4. **Start with the Basics** - **Foundational Knowledge**: Begin by grasping the fundamentals. Understanding the basics thoroughly sets a strong foundation for more advanced learning. - **Practice Regularly**: Regular, focused practice is essential. Even 15-30 minutes a day can yield significant results over time. ### 5. **Seek Feedback and Guidance** - **Mentorship**: If possible, find a mentor or join a community of learners. Feedback from experienced individuals can accelerate your learning curve. - **Peer Learning**: Engage with others learning the same skill. Peer groups provide support, motivation, and different perspectives. ### 6. **Apply the Skill in Real-World Scenarios** - **Practical Application**: Look for opportunities to apply what you've learned. This could be through projects, internships, or volunteering. Practical application solidifies your understanding and builds confidence. - **Iterate Based on Feedback**: After applying your skill, seek and incorporate feedback to refine and improve. ### 7. **Maintain a Growth Mindset** - **Embrace Challenges**: Understand that setbacks and failures are part of the learning process. Maintain a positive attitude and view them as opportunities to learn. - **Patience and Persistence**: Skill development takes time. Stay patient and persistent, even when progress seems slow. ### 8. 
**Track and Reflect** - **Progress Tracking**: Regularly review your progress. This could be through keeping a learning journal or using apps designed for skill tracking. - **Reflect and Adjust**: Periodically reflect on your learning process. What's working? What's not? Adjust your plan accordingly. ### 9. **Stay Motivated** - **Reward System**: Set small rewards for achieving milestones to keep yourself motivated. - **Stay Curious**: Keep your interest piqued by exploring related topics or advanced aspects of the skill. By following these steps and maintaining a disciplined, growth-oriented approach, you can effectively develop any new skill. Remember, the key is consistent effort and a willingness to learn from both successes and failures. Building skills is an ongoing journey, not a destination. ``` - thinking=False ```md Developing a skill involves a combination of learning, practice, and often, feedback. Here's a step-by-step guide to help you develop a new skill: 1. **Identify the Skill**: Start by clearly defining what skill you want to develop. Be specific. Instead of saying "I want to learn to code," specify a programming language like Python or JavaScript. 2. **Research**: Learn about the basics of the skill. Read books, articles, watch tutorials, or take online courses. Websites like Coursera, Udemy, Khan Academy, and YouTube can be great resources. 3. **Set Clear Goals**: Break down your skill into smaller, manageable goals. For example, if you're learning a new language, your goals might be to learn basic grammar, build a simple sentence, have a basic conversation, etc. 4. **Create a Study Plan**: Allocate specific time each day or week for learning and practicing. Consistency is key in skill development. 5. **Practice**: Apply what you've learned. Practice makes permanent. If you're learning to code, write small programs. If it's a musical instrument, play regularly. 6. **Get Feedback**: Seek feedback from others who are more experienced. 
This could be a mentor, a tutor, or even online communities. Constructive criticism can help you identify areas for improvement. 7. **Review and Refine**: Regularly review what you've learned. Refine your skills based on feedback and your own observations. 8. **Apply in Real Life**: Try to use your new skill in real-life situations. This could be a project at work, a personal hobby, or volunteering. 9. **Be Patient and Persistent**: Skill development takes time. Don't get discouraged by slow progress or setbacks. Keep practicing and learning. 10. **Stay Motivated**: Keep your end goal in mind and celebrate small victories along the way to stay motivated. Remember, everyone learns at their own pace, so don't compare your progress with others. The most important thing is that you're consistently moving forward. ``` **Evaluation Results:** <table> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th> <th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th> <th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th> <th style="text-align:center; background-color: #001d6c; color: white;">DROP</th> <th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th> <th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th> </tr></thead> 
<tbody> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td> <td style="text-align:center; 
background-color: #DAE8FF; color: black;">62.91</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td> <td style="text-align:center; background-color: #DAE8FF; color: 
black;">42.76</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">27.87</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.84</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.84</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.92</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">68.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.08</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">88.82</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">84.62</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">71.20</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-8B-Instruct-Preview</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">55.23</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">61.16</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.93</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.08</td> <td 
style="text-align:center; background-color: #DAE8FF; color: black;">66.37</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.60</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.73</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.09</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">89.47</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">86.88</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">73.57</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">85.99</td> </tr> </tbody></table> **Training Data:** Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive license, (2) internal synthetically generated data targeted to enhance reasoning capabilities. <!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). --> **Infrastructure:** We train Granite-3.2-8B-Instruct-Preview using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs. **Ethical Considerations and Limitations:** Granite-3.2-8B-Instruct-Preview builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant. 
**Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
null
Non_BioNLP
# Granite-3.2-8B-Instruct-Preview-iMat-GGUF Original model: [Granite-3.2-8B-Instruct-Preview](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct-preview) Made by: [Granite Team, IBM](https://huggingface.co/ibm-granite) ## Quantization notes Made with llama.cpp-b4608 with imatrix file based on exllamav2 default dataset. These quants should work with lots of apps with llama.cpp engine: Jan, KoboldCpp, LM Studio, Text-Generation-WebUI, etc. # Original model card # Granite-3.2-8B-Instruct-Preview **Model Summary:** Granite-3.2-8B-Instruct-Preview is an early release of an 8B long-context model fine-tuned for enhanced reasoning (thinking) capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required. <!-- is preview release of a finetuned mdpeis a 8B parameter long-context instruct model finetuned from Granite-3.1-8B-Instruct using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets tailored for solving long context problems. This model is finetuned to reason developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. --> - **Developers:** Granite Team, IBM - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Release Date**: February 7th, 2025 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages. 
**Intended Use:** The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications. **Capabilities** * **Thinking** * Summarization * Text classification * Text extraction * Question-answering * Retrieval Augmented Generation (RAG) * Code related tasks * Function-calling tasks * Multilingual dialog use cases * Long-context tasks including long document/meeting summarization, long document QA, etc. **Generation:** This is a simple example of how to use Granite-3.2-8B-Instruct-Preview model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the snippet from the section that is relevant for your use case. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed import torch model_path="ibm-granite/granite-3.2-8b-instruct-preview" device="cuda" model = AutoModelForCausalLM.from_pretrained( model_path, device_map=device, torch_dtype=torch.bfloat16, ) tokenizer = AutoTokenizer.from_pretrained( model_path ) conv = [{"role": "user", "content":"How do I develop a skill?"}] input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device) set_seed(42) output = model.generate( **input_ids, max_new_tokens=8192, ) prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True) print(prediction) ``` **Example Outputs** - thinking=True ```md Here is my thought process: 1. Understand the user's intent: The user is seeking guidance on how to develop a new skill. 2. Break down the process: Skills development involves several stages - identification, planning, practice, feedback, and refinement. 3. Consider different aspects: Discuss various methods, mindset, and resources needed for effective skill acquisition. 4. 
Provide actionable steps: Offer a structured approach that the user can follow. Here is my response: Developing a new skill is a systematic process that involves several key steps and a dedicated mindset. Here's a comprehensive guide to help you effectively cultivate a new skill: ### 1. **Identify the Skill** - **Clarity of Purpose**: Clearly define what skill you want to acquire. Be specific about why you want this skill and how it aligns with your goals or interests. - **Research**: Understand the basics of the skill. Read about it, watch tutorials, or talk to people who are already proficient. This initial research will give you a realistic understanding of what the skill entails. ### 2. **Set Clear Goals** - **SMART Goals**: Make sure your goals are Specific, Measurable, Achievable, Relevant, and Time-bound. For example, instead of saying "I want to learn programming," set a goal like "I will complete a beginner's Python course within three months." - **Break Down**: If the skill is complex, break it down into smaller, manageable milestones. ### 3. **Create a Learning Plan** - **Resources**: Identify the best resources for learning. These could be online courses, books, mentors, workshops, or practical projects. - **Schedule**: Dedicate time in your routine for skill practice. Consistency is crucial in skill development. Treat your learning time as non-negotiable. ### 4. **Start with the Basics** - **Foundational Knowledge**: Begin by grasping the fundamentals. Understanding the basics thoroughly sets a strong foundation for more advanced learning. - **Practice Regularly**: Regular, focused practice is essential. Even 15-30 minutes a day can yield significant results over time. ### 5. **Seek Feedback and Guidance** - **Mentorship**: If possible, find a mentor or join a community of learners. Feedback from experienced individuals can accelerate your learning curve. - **Peer Learning**: Engage with others learning the same skill. 
Peer groups provide support, motivation, and different perspectives. ### 6. **Apply the Skill in Real-World Scenarios** - **Practical Application**: Look for opportunities to apply what you've learned. This could be through projects, internships, or volunteering. Practical application solidifies your understanding and builds confidence. - **Iterate Based on Feedback**: After applying your skill, seek and incorporate feedback to refine and improve. ### 7. **Maintain a Growth Mindset** - **Embrace Challenges**: Understand that setbacks and failures are part of the learning process. Maintain a positive attitude and view them as opportunities to learn. - **Patience and Persistence**: Skill development takes time. Stay patient and persistent, even when progress seems slow. ### 8. **Track and Reflect** - **Progress Tracking**: Regularly review your progress. This could be through keeping a learning journal or using apps designed for skill tracking. - **Reflect and Adjust**: Periodically reflect on your learning process. What's working? What's not? Adjust your plan accordingly. ### 9. **Stay Motivated** - **Reward System**: Set small rewards for achieving milestones to keep yourself motivated. - **Stay Curious**: Keep your interest piqued by exploring related topics or advanced aspects of the skill. By following these steps and maintaining a disciplined, growth-oriented approach, you can effectively develop any new skill. Remember, the key is consistent effort and a willingness to learn from both successes and failures. Building skills is an ongoing journey, not a destination. ``` - thinking=False ```md Developing a skill involves a combination of learning, practice, and often, feedback. Here's a step-by-step guide to help you develop a new skill: 1. **Identify the Skill**: Start by clearly defining what skill you want to develop. Be specific. Instead of saying "I want to learn to code," specify a programming language like Python or JavaScript. 2. 
**Research**: Learn about the basics of the skill. Read books, articles, watch tutorials, or take online courses. Websites like Coursera, Udemy, Khan Academy, and YouTube can be great resources. 3. **Set Clear Goals**: Break down your skill into smaller, manageable goals. For example, if you're learning a new language, your goals might be to learn basic grammar, build a simple sentence, have a basic conversation, etc. 4. **Create a Study Plan**: Allocate specific time each day or week for learning and practicing. Consistency is key in skill development. 5. **Practice**: Apply what you've learned. Practice makes permanent. If you're learning to code, write small programs. If it's a musical instrument, play regularly. 6. **Get Feedback**: Seek feedback from others who are more experienced. This could be a mentor, a tutor, or even online communities. Constructive criticism can help you identify areas for improvement. 7. **Review and Refine**: Regularly review what you've learned. Refine your skills based on feedback and your own observations. 8. **Apply in Real Life**: Try to use your new skill in real-life situations. This could be a project at work, a personal hobby, or volunteering. 9. **Be Patient and Persistent**: Skill development takes time. Don't get discouraged by slow progress or setbacks. Keep practicing and learning. 10. **Stay Motivated**: Keep your end goal in mind and celebrate small victories along the way to stay motivated. Remember, everyone learns at their own pace, so don't compare your progress with others. The most important thing is that you're consistently moving forward. 
``` **Evaluation Results:** <table> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th> <th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th> <th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th> <th style="text-align:center; background-color: #001d6c; color: white;">DROP</th> <th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th> <th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td> <td style="text-align:center; background-color: 
#DAE8FF; color: black;">85.32</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td> 
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">27.87</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.84</td> <td style="text-align:center; 
background-color: #DAE8FF; color: black;">28.84</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.92</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">68.10</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">79.08</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">88.82</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">84.62</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">71.20</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-8B-Instruct-Preview</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">55.23</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">61.16</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.93</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">28.08</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">66.37</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">65.60</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">50.73</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">83.09</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">89.47</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">86.88</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">73.57</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">85.99</td> </tr> </tbody></table> **Training Data:** Overall, our training data is largely comprised of two key sources: (1) publicly available datasets with permissive license, 
(2) internal synthetically generated data targeted to enhance reasoning capabilities. <!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). --> **Infrastructure:** We train Granite-3.2-8B-Instruct-Preview using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs. **Ethical Considerations and Limitations:** Granite-3.2-8B-Instruct-Preview builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant. **Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
{"base_model": ["ibm-granite/granite-3.2-8b-instruct-preview"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.2"], "inference": false}
task
[ "TEXT_CLASSIFICATION", "SUMMARIZATION" ]
46,125
abdiharyadi/kancilgpt
abdiharyadi
text-generation
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:indobenchmark/indogpt", "base_model:finetune:indobenchmark/indogpt", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-01-20T13:48:33Z
2024-12-12T13:42:16+00:00
85
0
--- base_model: indobenchmark/indogpt license: mit tags: - generated_from_trainer inference: false model-index: - name: kancilgpt results: [] --- # KancilGPT (Spoiler: KancilGPT is not ready yet for production. The reader will see why.) Once upon a time, in a digital data forest, there was a language model called KancilGPT. ## Model Description KancilGPT is a fine-tuned version of [indobenchmark/indogpt](https://huggingface.co/indobenchmark/indogpt). Its task is generating an Indonesian fable story. In addition, this model's name is based on a famous, wise (but also master at trolling), cute fable character: [_kancil_](https://en.wikipedia.org/wiki/Chevrotain). KancilGPT learns on an unpublished dataset gathered from [dongengceritarakyat.com](https://dongengceritarakyat.com/). ## Dataset and Prompt The dataset consists of 388 Indonesian fable stories. These stories were gathered from [dongengceritarakyat.com](https://dongengceritarakyat.com/) on January 8, 2024. Duplicated stories without any paraphrasing were removed, based on the cosine similarity of TF-IDF trigram words. Furthermore, the remaining stories were cleaned manually to remove non-fable stories, incomplete stories (e.g., synopses), misused punctuation, and typos. If a mistake is found, the dataset will be modified as soon as possible. The cleaned stories were split with an 80:10:10 ratio, giving - 310 stories for training, - 39 stories for evaluation, and - 39 stories for test (for now, it's unused). The splitting is based on the cosine similarity of TF-IDF trigram words, the same as in duplicate story handling. The stories were chosen one by one, prioritizing the story with the smaller maximum cosine similarity. The first 39 stories are used for test, and the rest are used for training and evaluation, split randomly. This method makes sure no paraphrased duplicate story exists in the test data.
To teach KancilGPT to generate a story, the prompts were built with the following formats: 1. `<s> awal cerita | judul: <title> | <entire-story-content> | tamat </s>` 2. `<s> awal cerita | judul: <title> | <beginning-story-content> | bersambung </s>` 3. `<s> pertengahan cerita | judul: <title> | <last-story-content> | tamat </s>` 4. `<s> pertengahan cerita | judul: <title> | <end-story-content> | bersambung </s>` Indonesian was used for all prompts. Generally, there are four parts in a prompt: 1. story part type: either the beginning of a story (`awal cerita`) or the middle of a story (`pertengahan cerita`); 2. story title (`judul`); 3. story content; and 4. story end status: either "to be continued" (`bersambung`) or "the end" (`tamat`). A story content consists of the first _n_ sentences that together contain at least 1750 characters, with the minimum such _n_. If the entire story contains no more than 1750 characters, format 1 is used. To make a transition of story content from format 2 to format 3, from format 3 to another format 3, or from format 3 to format 4, the first _k_ sentences of the current story content are removed until the content contains fewer than 1750 characters, with the minimum _k_ value under the constraint _k ≥ 1_. ## How to Use After learning how to generate an Indonesian fable story, KancilGPT can generate a random fable story with the procedure below. Its competence shone with the help of [Huggingface Spaces](https://huggingface.co/spaces/abdiharyadi/kancilgpt). The Hugging Face pipeline cannot be used yet because KancilGPT uses the `IndoNLGTokenizer` class from [`indobenchmark-toolkit`](https://github.com/indobenchmark/indobenchmark-toolkit).
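The prompt format and the sentence-trimming rule above can be sketched as plain helper functions. This is a minimal illustration, not code shipped with KancilGPT: the function names, the naive sentence splitter, and the assumption that sentences in the tokenized output end with `.`, `?`, or `!` followed by a space are all assumptions made here. In practice, the generated text would come from `model.generate` with the `IndoNLGTokenizer` mentioned above.

```python
import re

def split_sentences(content: str) -> list[str]:
    # Naive splitter for the tokenized story format: assume a sentence
    # ends with ".", "?", or "!" followed by a space.
    parts = re.split(r"(?<=[.?!]) ", content.strip())
    return [p for p in parts if p]

def trim_content(content: str, limit: int = 1750) -> str:
    # Drop the first k sentences (k >= 1) until the remainder is shorter
    # than `limit` characters, mirroring the transition rule above.
    sentences = split_sentences(content)[1:]  # k >= 1: drop at least one
    while len(" ".join(sentences)) >= limit and len(sentences) > 1:
        sentences = sentences[1:]
    return " ".join(sentences)

def build_continuation_prompt(title: str, next_content: str) -> str:
    # Format 3/4 prefix (step 2 below): continue a story from its trimmed tail.
    return f"<s> pertengahan cerita | judul: {title} | {next_content}"

def parse_generated(text: str) -> tuple[str, str, str, str]:
    # A generated chunk looks like:
    #   <s> <part-type> | judul: <title> | <content> | <end-status></s>
    body = text.removeprefix("<s>").removesuffix("</s>").strip()
    part_type, title_field, rest = body.split(" | ", 2)
    content, end_status = rest.rsplit(" | ", 1)
    return (part_type.strip(), title_field.removeprefix("judul:").strip(),
            content.strip(), end_status.strip())
```

A driver loop would then decode each generation, call `parse_generated`, append the content to the story so far, stop when the end status is `tamat`, and otherwise feed `trim_content` of the content into `build_continuation_prompt` for the next round.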
### Step 1: Begin the story Use this prompt to generate the beginning of a story, including the generation of a title (`judul`): ``` <s> awal cerita | judul: ``` Below is the example output: ``` <s> awal cerita | judul: pemburu dan anak cheetah | suatu hari , pemburu itu melihat seekor cheetah yang sedang bersantai di tepi sungai . berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . 
| bersambung</s> ``` Notice that the real output continues after `</s>` with other random tokens. That's normal. From the generated output, notice the end status of the story before the `</s>` token. If it's `tamat`, the story ends. Go to step 3. If it's `bersambung`, the story should be continued. Remove the first _k_ sentences so the remaining sentences contain fewer than 1750 characters, with the minimum _k_ value under the constraint _k ≥ 1_. Take the remaining sentences as the next content for the next prompt in step 2. Below is the next content from the example output: ``` berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu .
rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . ``` ### Step 2: Continue the story With the existing title and next content, use this prompt format to continue the story: ``` <s> pertengahan cerita | judul: <title> | <next-content> ``` Below is the example prompt from the example next content from the step 1: ``` <s> pertengahan cerita | judul: pemburu dan anak cheetah | berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . 
tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . ``` Below is the example output: ``` <s> pertengahan cerita | judul: pemburu dan anak cheetah | berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . 
dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . | bersambung</s> ``` From the generated output, notice the end status of a story before the `</s>` token. If it's `tamat`, the story ends. Go to step 3. If it's `bersambung`, the story should be continued. Same as step 1, Remove the first _k_ sentences so the remaining sentences contains less than 1750 characters, with minimum _k_ value and _k ≥ 1_ constraint. Take the remaining sentences as the next content for the next prompt in step 2. Below is the next content from the example output: ``` pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? 
rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . ``` Do step 2 until the end status is `tamat`. ### Step 3: Finish the story Take all story contents from the generated outputs and merge them. The story is finished! Below is an example of a generated story: ``` pemburu dan anak cheetah suatu hari , pemburu itu melihat seekor cheetah yang sedang bersantai di tepi sungai . berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu .
" ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . pemburu itu merasa sangat marah dan kesal . dia ingin segera menangkap rusa itu kembali agar dia cepat mati . tetapi karena rusa itu sangat marah , dia pun mengibas-ngibaskan kepala nya sehingga kepala nya lepas . " sudah lah . aku tidak akan melepaskan pemburu itu daripada mendapatkan kemampuan berlari nya . " ``` Below is the English translation, with the help of [Google Translate](https://translate.google.com/): ``` the hunter and the cheetah cub one day, the hunter saw a cheetah relaxing on the river bank. hunting a cheetah in the forest is fun, because he will eat when he is fun . the cheetah is agile and clever, it can hunt old cheetahs and looks very delicious. the hunter intended to catch him, but unfortunately he didn't bring the cheetah. therefore he only looked for a deer and saw the bones. after finding the deer, the hunter immediately chase him until his body is big enough. when he finished catching the deer, the hunter asked the cheetah to put its head in the shell. " this is my head , my little brother , " said the hunter . " you must bite me immediately . "the hunter agreed, and immediately hit his head on the cheetah cub's head. the hunter threw the deer at the deer. the deer immediately screamed in pain and ran away from the hunter. hunters who saw the incident immediately reported it the incident happened to his other friends.
"the hunters are cheetah cubs, they have only just arrived now, but they always look after and encourage them to hunt. "the hunter brought a cheetah, then showed his two friends that his head had been shot by the hunter." the hunter brought a deer and two the child," said the hunter. "come on, friend, let's just continue our journey. i'll look for the deer's head. how ? " shouted the hunter. the deer tried to escape from hunters. but what happened ? the young deer instead jumped off the hunter's head. the hunter managed to free him. the cheetah immediately ran away from the hunter. however, the hunter did not see the deer again. he looked back and saw the deer's head still behind him. hunt cheetahs means you have given up already. the hunter dropped the cheetah on himself. the hunter felt very angry and annoyed. he wanted to catch the deer immediately come back so he can die quickly. but because the deer was very angry, he shook his head so that his head fell off. "never mind. i won't let go of the hunter rather than gain his running ability. " ``` ## Limitations The reader probably got confused after reading the previous generated story. This shows the limitations of KancilGPT. The generated story sometimes 1. gives a low correlation between title and content (the cheetah cub wasn't mentioned as a main character), 2. introduces a new character out of nowhere (where did the deer come from?), 3. introduces a new character with the same name, which leads to confusing anaphora resolution ("'This is my head, my little brother,' said *the hunter*. 'You must bite me immediately.' *The hunter* agreed, and [...]"), and 4. gives illogical sentences ("The hunters are cheetah cubs"). Furthermore, all stories involved with KancilGPT are lowercased because the pretrained model was trained on lowercase text. In the end, all of these limitations open opportunities to make KancilGPT better from time to time. This is just the beginning.
By exploring the digital forest deeper, KancilGPT will generate a high quality Indonesian fable story in the future. The end. --- ## Behind The Story: Training Procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0208 | 1.0 | 432 | 2.6771 | | 1.4309 | 2.0 | 864 | 2.7912 | | 1.0811 | 3.0 | 1296 | 2.9315 | | 0.8536 | 4.0 | 1728 | 3.0387 | | 0.6999 | 5.0 | 2160 | 3.1300 | | 0.5949 | 6.0 | 2592 | 3.2062 | | 0.5232 | 7.0 | 3024 | 3.2750 | | 0.474 | 8.0 | 3456 | 3.2936 | | 0.4422 | 9.0 | 3888 | 3.3380 | | 0.4246 | 10.0 | 4320 | 3.3414 | KancilGPT achieves `loss=3.3414` on the evaluation set. ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
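As a sanity check on the numbers above: since the listed hyperparameters use no gradient accumulation, 432 optimizer steps per epoch at batch size 8 implies that the 310 training stories were expanded into roughly 3,449 to 3,456 prompt chunks, about 11 per story. This is an inference from the reported figures, not a number stated in the card:

```python
import math

steps_per_epoch = 432  # from the training results table
batch_size = 8         # train_batch_size hyperparameter

# ceil(num_chunks / batch_size) == steps_per_epoch pins the number of
# training examples (prompt chunks) to a narrow range:
low = (steps_per_epoch - 1) * batch_size + 1   # 3449
high = steps_per_epoch * batch_size            # 3456
assert all(math.ceil(n / batch_size) == steps_per_epoch
           for n in range(low, high + 1))

num_stories = 310  # training stories, per the dataset section
print(f"{low}..{high} prompt chunks, roughly {high // num_stories} per story")
```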
null
Non_BioNLP
# KancilGPT (Spoiler: KancilGPT is not ready yet for production. The reader will see why.) Once upon a time, in a digital data forest, there was a language model called KancilGPT. ## Model Description KancilGPT is a fine-tuned version of [indobenchmark/indogpt](https://huggingface.co/indobenchmark/indogpt). Its task is generating an Indonesian fable story. In addition, this model name is based on a famous, wise (but also master at trolling), cute fable character: [_kancil_](https://en.wikipedia.org/wiki/Chevrotain). KancilGPT learns from an unpublished dataset gathered from [dongengceritarakyat.com](https://dongengceritarakyat.com/). ## Dataset and Prompt The dataset consists of 388 Indonesian fable stories. These stories were gathered from [dongengceritarakyat.com](https://dongengceritarakyat.com/) on January 8, 2024. Duplicated stories without any paraphrasing were removed, based on the cosine similarity of TF-IDF trigram words. Furthermore, the remaining stories were cleaned manually to remove non-fable stories, incomplete stories (e.g. synopses), misused punctuation, and typos. If a mistake is found, the dataset will be modified as soon as possible. The cleaned stories were split with an 80:10:10 ratio, giving - 310 stories for training, - 39 stories for evaluation, and - 39 stories for test (for now, it's unused). The splitting is based on the cosine similarity of TF-IDF trigram words, the same as in duplicate story handling. The stories are chosen one by one, and a story with a smaller maximum cosine similarity is prioritized. The first 39 stories are used for the test set, and the rest are used for training and evaluation, randomly. This method makes sure no duplicated, paraphrased story exists in the test data. To make KancilGPT understand how to generate a story, the prompts were built with the following formats: 1. `<s> awal cerita | judul: <title> | <entire-story-content> | tamat </s>` 2. 
`<s> awal cerita | judul: <title> | <beginning-story-content> | bersambung </s>` 3. `<s> pertengahan cerita | judul: <title> | <last-story-content> | tamat </s>` 4. `<s> pertengahan cerita | judul: <title> | <end-story-content> | bersambung </s>` The Indonesian language was used for all prompts. Generally, there are four parts of the prompt: 1. story part type—it can be the beginning of a story (`awal cerita`) or it can be the middle of a story (`pertengahan cerita`); 2. story title (`judul`); 3. story content; and 4. story end status—it can be "to be continued" (`bersambung`) or "the end" (`tamat`). A story content consists of _n_ sentences that in total contain at least 1750 characters, with the minimum _n_ value. If the entire story contains no more than 1750 characters, format 1 will be used. For making a transition of story content from format 2 to format 3, or from format 3 to another format 3, or from format 3 to format 4, the first _k_ sentences from the current story content will be removed until the content contains less than 1750 characters, with the minimum _k_ value and the _k ≥ 1_ constraint. ## How to Use After learning how to generate an Indonesian fable story, KancilGPT can generate a random fable story with the specified procedures. Its competence shone with the help of [Huggingface Spaces](https://huggingface.co/spaces/abdiharyadi/kancilgpt). The Huggingface pipeline could not help yet because KancilGPT uses the `IndoNLGTokenizer` class from [`indobenchmark-toolkit`](https://github.com/indobenchmark/indobenchmark-toolkit). ### Step 1: Begin the story Use this prompt to generate the beginning of a story, including the generation of a title (`judul`): ``` <s> awal cerita | judul: ``` Below is the example output: ``` <s> awal cerita | judul: pemburu dan anak cheetah | suatu hari , pemburu itu melihat seekor cheetah yang sedang bersantai di tepi sungai . berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . 
cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . | bersambung</s> ``` Notice that the real output has a longer leading `</s>` with other random tokens. That's normal. From the generated output, notice the end status of the story before the `</s>` token. If it's `tamat`, the story ends. Go to step 3. If it's `bersambung`, the story should be continued. Remove the first _k_ sentences so the remaining sentences contain less than 1750 characters, with the minimum _k_ value and the _k ≥ 1_ constraint. 
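This trimming rule can be sketched in Python. The helper below is hypothetical (it is not part of the released code) and assumes the content has already been split into a list of sentence strings:

```python
def trim_content(sentences, limit=1750):
    """Drop the first k sentences (minimum k, with k >= 1) until the
    remaining content is shorter than `limit` characters."""
    k = 1  # the k >= 1 constraint: always drop at least one sentence
    while k < len(sentences) and len(" ".join(sentences[k:])) >= limit:
        k += 1
    return sentences[k:]
```

The returned list becomes the next content carried into the following prompt.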
Take the remaining sentences as the next content for the next prompt in step 2. Below is the next content from the example output: ``` berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . 
``` ### Step 2: Continue the story With the existing title and next content, use this prompt format to continue the story: ``` <s> pertengahan cerita | judul: <title> | <next-content> ``` Below is the example prompt from the example next content from the step 1: ``` <s> pertengahan cerita | judul: pemburu dan anak cheetah | berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . 
``` Below is the example output: ``` <s> pertengahan cerita | judul: pemburu dan anak cheetah | berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . 
| bersambung</s> ``` From the generated output, notice the end status of a story before the `</s>` token. If it's `tamat`, the story ends. Go to step 3. If it's `bersambung`, the story should be continued. Same as step 1, Remove the first _k_ sentences so the remaining sentences contains less than 1750 characters, with minimum _k_ value and _k ≥ 1_ constraint. Take the remaining sentences as the next content for the next prompt in step 2. Below is the next content from the example output: ``` pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . 
dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . ``` Do step 2 until the end status is `tamat`. ### Step 3: Finish the story Take all story contents from the generated outputs, and merge it. The story is finished! Below is the example of generated story: ``` pemburu dan anak cheetah suatu hari , pemburu itu melihat seekor cheetah yang sedang bersantai di tepi sungai . berburu cheetah di hutan itu menyenangkan , karena dia akan makan saat sedang asyik . cheetah itu gesit dan cerdik , dia bisa memburu cheetah yang sudah tua dan terlihat sangat lezat . pemburu itu berniat untuk menangkap nya , tapi sayang nya dia tidak membawa cheetah itu . oleh karena itu dia hanya mencari seekor kijang dan melihat tulang-tulang itu . setelah menemukan rusa itu , pemburu itu langsung mengejar nya hingga tubuh nya cukup besar . ketika selesai menangkap rusa itu , pemburu itu meminta cheetah untuk memasukkan kepala nya ke dalam cangkang . " ini adalah kepala ku , adik ku , " kata pemburu itu . " kau harus segera menggigit aku . " pemburu itu setuju , dan segera memukulkan kepala nya ke kepala anak cheetah tersebut . pemburu itu melempar sang kijang ke arah rusa . rusa segera menjerit kesakitan dan lari meninggalkan pemburu itu . pemburu yang melihat kejadian itu segera melaporkan kejadian itu kepada teman-teman nya yang lain . " pemburu itu adalah anak cheetah , mereka memang baru datang sekarang , tapi mereka selalu menjaga dan memberi semangat untuk berburu . " pemburu itu membawa seekor cheetah , lalu menunjukkan pada dua teman nya bahwa kepala nya tertembak oleh pemburu . " pemburu itu membawa rusa dan dua orang anak nya , " kata pemburu . " ayo kawan , kita lanjutkan saja perjalanan kita . aku akan mencari kepala rusa itu . bagaimana ? " teriak pemburu . rusa mencoba untuk melepaskan diri dari pemburu . tapi apa yang terjadi ? 
rusa muda tersebut malah melompat dari kepala pemburu . pemburu itu berhasil membebaskan nya . si cheetah segera berlari meninggalkan pemburu . namun , pemburu itu tak melihat rusa itu kembali . dia menengok ke belakang dan melihat kepala rusa itu masih di belakang . memburu cheetah berarti sudah menyerah lebih dulu . pemburu itu menjatuhkan cheetah pada diri nya sendiri . pemburu itu merasa sangat marah dan kesal . dia ingin segera menangkap rusa itu kembali agar dia cepat mati . tetapi karena rusa itu sangat marah , dia pun mengibas-ngibaskan kepala nya sehingga kepala nya lepas . " sudah lah . aku tidak akan melepaskan pemburu itu daripada mendapatkan kemampuan berlari nya . " ``` Below is the English translation, with the helps of [Google Translate](https://translate.google.com/): ``` the hunter and the cheetah cub one day, the hunter saw a cheetah relaxing on the river bank. hunting a cheetah in the forest is fun, because he will eat when he is fun . the cheetah is agile and clever, it can hunt old cheetahs and looks very delicious. the hunter intended to catch him, but unfortunately he didn't bring the cheetah. therefore he only looked for a deer and saw the bones. after finding the deer, the hunter immediately chase him until his body is big enough. when he finished catching the deer, the hunter asked the cheetah to put its head in the shell. " this is my head , my little brother , " said the hunter . " you must bite me immediately . "the hunter agreed, and immediately hit his head on the cheetah cub's head. the hunter threw the deer at the deer. the deer immediately screamed in pain and ran away from the hunter. hunters who saw the incident immediately reported it the incident happened to his other friends. "the hunters are cheetah cubs, they have only just arrived now, but they always look after and encourage them to hunt. "the hunter brought a cheetah, then showed his two friends that his head had been shot by the hunter." 
the hunter brought a deer and two the child," said the hunter. "come on, friend, let's just continue our journey. i'll look for the deer's head. how ? " shouted the hunter. the deer tried to escape from hunters. but what happened ? the young deer instead jumped off the hunter's head. the hunter managed to free him. the cheetah immediately ran away from the hunter. however, the hunter did not see the deer again. he looked back and saw the deer's head still behind him. hunt cheetahs means you have given up already. the hunter dropped the cheetah on himself. the hunter felt very angry and annoyed. he wanted to catch the deer immediately come back so he can die quickly. but because the deer was very angry, he shook his head so that his head fell off. "never mind. i won't let go of the hunter rather than gain his running ability. " ``` ## Limitations The reader probably got confused after reading the previous generated story. This shows the limitations of KancilGPT. The generated story sometimes 1. gives low correlation between title and content (The cheetah cub wasn't mentioned as a main character.), 2. introduces a new character out of nowhere (Where did the deer come from?), 3. introduces a new character with the same name, leading to confusing anaphora resolution ("'This is my head, my little brother,' said *the hunter*. 'You must bite me immediately.' *The hunter* agreed, and [...]"), and 4. gives an illogical sentence ("The hunters are cheetah cubs"). Furthermore, all stories involved with KancilGPT were lowercased because the pretrained model was trained on lowercase texts. In the end, all of the limitations opened some opportunities to make KancilGPT better from time to time. This is just the beginning. By exploring the digital forest deeper, KancilGPT will generate a high quality Indonesian fable story in the future. The end. 
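The three-step procedure above can be condensed into a single loop. Below is a minimal Python sketch; `generate` and `trim` are assumed stand-ins, not part of any released API: `generate(prompt)` returns the title, the content as a list of sentences, and the end status, while `trim` is the k-sentence removal described in steps 1 and 2:

```python
def generate_story(generate, trim):
    """Sketch of the generation loop from steps 1-3 above.

    generate(prompt) -> (title, sentences, end_status) stands in for
    the model call; trim(sentences) drops the leading k sentences.
    Both are assumptions for illustration only.
    """
    title, sentences, status = generate("<s> awal cerita | judul:")
    story = list(sentences)
    while status == "bersambung":
        kept = trim(sentences)
        prompt = f"<s> pertengahan cerita | judul: {title} | {' '.join(kept)}"
        _, sentences, status = generate(prompt)
        # only the sentences after the carried-over context are new
        story.extend(sentences[len(kept):])
    return title, " ".join(story)
```

Merging in step 3 corresponds to extending `story` with only the newly generated sentences, so the carried-over context is not duplicated.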
--- ## Behind The Story: Training Procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0208 | 1.0 | 432 | 2.6771 | | 1.4309 | 2.0 | 864 | 2.7912 | | 1.0811 | 3.0 | 1296 | 2.9315 | | 0.8536 | 4.0 | 1728 | 3.0387 | | 0.6999 | 5.0 | 2160 | 3.1300 | | 0.5949 | 6.0 | 2592 | 3.2062 | | 0.5232 | 7.0 | 3024 | 3.2750 | | 0.474 | 8.0 | 3456 | 3.2936 | | 0.4422 | 9.0 | 3888 | 3.3380 | | 0.4246 | 10.0 | 4320 | 3.3414 | KancilGPT achieves `loss=3.3414` on the evaluation set. ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
{"base_model": "indobenchmark/indogpt", "license": "mit", "tags": ["generated_from_trainer"], "inference": false, "model-index": [{"name": "kancilgpt", "results": []}]}
task
[ "TRANSLATION", "PARAPHRASING" ]
46,126
squarelike/Gugugo-koen-7B-V1.1
squarelike
translation
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "translation", "en", "ko", "dataset:squarelike/sharegpt_deepl_ko_translation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-27T14:38:43Z
2024-03-29T06:02:43+00:00
231
17
--- datasets: - squarelike/sharegpt_deepl_ko_translation language: - en - ko license: apache-2.0 pipeline_tag: translation --- # Gugugo-koen-7B-V1.1 Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo) ![Gugugo](./logo.png) **Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) **Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation). I trained with 1x A6000 GPU for 90 hours. ## **Prompt Template** **KO->EN** ``` ### 한국어: {sentence}</끝> ### 영어: ``` **EN->KO** ``` ### 영어: {sentence}</끝> ### 한국어: ``` GPTQ, AWQ, and GGUF versions are supported. [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF) ## **Implementation Code** ```python from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList import torch repo = "squarelike/Gugugo-koen-7B-V1.1" model = AutoModelForCausalLM.from_pretrained( repo, load_in_4bit=True, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) class StoppingCriteriaSub(StoppingCriteria): def __init__(self, stops = [], encounters=1): super().__init__() self.stops = [stop for stop in stops] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor): for stop in self.stops: if torch.all((stop == input_ids[0][-len(stop):])).item(): return True return False stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda") stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)]) def gen(lan="en", x=""): if (lan == "ko"): prompt = f"### 한국어: {x}</끝>\n### 영어:" else: prompt = 
f"### 영어: {x}</끝>\n### 한국어:" gened = model.generate( **tokenizer( prompt, return_tensors='pt', return_token_type_ids=False ).to("cuda"), max_new_tokens=2000, temperature=0.3, # no_repeat_ngram_size=5, num_beams=5, stopping_criteria=stopping_criteria ) return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "") print(gen(lan="en", x="Hello, world!")) ```
null
Non_BioNLP
# Gugugo-koen-7B-V1.1 Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo) ![Gugugo](./logo.png) **Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) **Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation). I trained with 1x A6000 GPU for 90 hours. ## **Prompt Template** **KO->EN** ``` ### 한국어: {sentence}</끝> ### 영어: ``` **EN->KO** ``` ### 영어: {sentence}</끝> ### 한국어: ``` GPTQ, AWQ, and GGUF versions are supported. [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF) ## **Implementation Code** ```python from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList import torch repo = "squarelike/Gugugo-koen-7B-V1.1" model = AutoModelForCausalLM.from_pretrained( repo, load_in_4bit=True, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) class StoppingCriteriaSub(StoppingCriteria): def __init__(self, stops = [], encounters=1): super().__init__() self.stops = [stop for stop in stops] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor): for stop in self.stops: if torch.all((stop == input_ids[0][-len(stop):])).item(): return True return False stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda") stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)]) def gen(lan="en", x=""): if (lan == "ko"): prompt = f"### 한국어: {x}</끝>\n### 영어:" else: prompt = f"### 영어: {x}</끝>\n### 한국어:" gened = model.generate( **tokenizer( prompt, return_tensors='pt', return_token_type_ids=False 
).to("cuda"), max_new_tokens=2000, temperature=0.3, # no_repeat_ngram_size=5, num_beams=5, stopping_criteria=stopping_criteria ) return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "") print(gen(lan="en", x="Hello, world!")) ```
{"datasets": ["squarelike/sharegpt_deepl_ko_translation"], "language": ["en", "ko"], "license": "apache-2.0", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,128
SEBIS/legal_t5_small_multitask_en_sv
SEBIS
text2text-generation
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "translation English Swedish model", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T11:00:55+00:00
172
0
--- datasets: - dcep europarl jrc-acquis language: English Swedish tags: - translation English Swedish model widget: - text: whereas enlargement to Bulgaria and Romania should be effective in 2007, --- # legal_t5_small_multitask_en_sv model A model for translating legal text from English to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performed masked language model prediction. ## Model description No pretraining is involved in the case of the legal_t5_small_multitask_en_sv model; rather, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario. ## Intended uses & limitations The model could be used for translation of legal texts from English to Swedish. ### How to use Here is how to use this model to translate legal text from English to Swedish in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_sv"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_sv", do_lower_case=False, skip_special_tokens=True), device=0 ) en_text = "whereas enlargement to Bulgaria and Romania should be effective in 2007," pipeline([en_text], max_length=512) ``` ## Training data The legal_t5_small_multitask_en_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million 
parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used on the translation test dataset, it achieves the following results: Test results: | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_multitask_en_sv | 47.968 | ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
null
Non_BioNLP
# legal_t5_small_multitask_en_sv model A model for translating legal text from English to Swedish. It was first released in [this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on the three parallel corpora (jrc-acquis, europarl and dcep) covering 42 language pairs, along with an unsupervised task in which the model performed masked language model prediction. ## Model description No pretraining is involved in the case of the legal_t5_small_multitask_en_sv model; rather, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario. ## Intended uses & limitations The model could be used for translation of legal texts from English to Swedish. ### How to use Here is how to use this model to translate legal text from English to Swedish in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_sv"), tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_sv", do_lower_case=False, skip_special_tokens=True), device=0 ) en_text = "whereas enlargement to Bulgaria and Romania should be effective in 2007," pipeline([en_text], max_length=512) ``` ## Training data The legal_t5_small_multitask_en_sv model (the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). 
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule. ### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model. ### Pretraining ## Evaluation results When the model is used on the translation test dataset, it achieves the following results: Test results: | Model | BLEU score | |:-----:|:-----:| | legal_t5_small_multitask_en_sv | 47.968 | ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
{"datasets": ["dcep europarl jrc-acquis"], "language": "English Swedish", "tags": ["translation English Swedish model"], "widget": [{"text": "whereas enlargement to Bulgaria and Romania should be effective in 2007,"}]}
task
[ "TRANSLATION" ]
46,129
Microbee/ansost-bioformer
Microbee
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "autotrain", "dataset:ansost-bioformer/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-26T09:10:01Z
2024-02-26T09:10:09+00:00
6
0
--- datasets: - ansost-bioformer/autotrain-data tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4252117872238159 f1: 0.9211663066954643 precision: 0.8969505783385909 recall: 0.946725860155383 auc: 0.9325766190258948 accuracy: 0.8810105949470253
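As a quick consistency check on the metrics above, the reported f1 is (to within floating-point rounding) the harmonic mean of the reported precision and recall, i.e. standard binary F1:

```python
precision = 0.8969505783385909
recall = 0.946725860155383

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(f1)  # agrees with the reported f1 of 0.9211663066954643
```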
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4252117872238159 f1: 0.9211663066954643 precision: 0.8969505783385909 recall: 0.946725860155383 auc: 0.9325766190258948 accuracy: 0.8810105949470253
{"datasets": ["ansost-bioformer/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
46,130
hkbluesky/distilbert-base-uncased-finetuned-emotion
hkbluesky
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-21T05:45:05Z
2023-09-28T01:45:59+00:00
10
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: split metrics: - type: accuracy value: 0.928 name: Accuracy - type: f1 value: 0.9276533528239984 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.928 - F1: 0.9277 ## Model description 0-sadness 1-joy 2-love 3-anger 4-fear 5-surprise ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7878 | 1.0 | 250 | 0.2981 | 0.915 | 0.9135 | | 0.2452 | 2.0 | 500 | 0.2170 | 0.928 | 0.9277 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
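The id-to-label mapping under "Model description" can be wrapped in a small decoding helper. A minimal sketch — the helper name is hypothetical, and the commented-out pipeline call (which requires downloading the checkpoint) is shown only for context:

```python
# Hypothetical helper around the id-to-label mapping listed in the card.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def decode_label(pred):
    """Map a predicted class id (or a 'LABEL_<i>' string, as emitted when
    the config lacks id2label names) to its emotion name."""
    if isinstance(pred, str) and pred.startswith("LABEL_"):
        pred = int(pred.split("_")[1])
    return ID2LABEL[pred]

print(decode_label(1))  # joy

# Typical use with transformers (not run here; needs Hub access):
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="hkbluesky/distilbert-base-uncased-finetuned-emotion")
# print(decode_label(clf("I am so happy today!")[0]["label"]))
```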
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.928 - F1: 0.9277 ## Model description 0-sadness 1-joy 2-love 3-anger 4-fear 5-surprise ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7878 | 1.0 | 250 | 0.2981 | 0.915 | 0.9135 | | 0.2452 | 2.0 | 500 | 0.2170 | 0.928 | 0.9277 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.928, "name": "Accuracy"}, {"type": "f1", "value": 0.9276533528239984, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,131
real-jiakai/bert-base-chinese-finetuned-squadv2
real-jiakai
question-answering
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:real-jiakai/chinese-squadv2", "base_model:google-bert/bert-base-chinese", "base_model:finetune:google-bert/bert-base-chinese", "endpoints_compatible", "region:us" ]
2024-11-19T12:02:31Z
2024-11-19T13:29:43+00:00
13
0
--- base_model: bert-base-chinese datasets: - real-jiakai/chinese-squadv2 library_name: transformers tags: - generated_from_trainer model-index: - name: chinese_squadv2 results: [] --- # bert-base-chinese-finetuned-squadv2 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the [Chinese SQuAD v2.0 dataset](https://huggingface.co/datasets/real-jiakai/chinese-squadv2). ## Model Description This model is designed for Chinese question answering tasks, specifically for extractive QA where the answer must be extracted from a given context paragraph. It can handle both answerable and unanswerable questions, following the SQuAD v2.0 format. Key features: - Based on BERT-base Chinese architecture - Supports both answerable and unanswerable questions - Trained on Chinese question-answer pairs - Optimized for extractive question answering ## Intended Uses & Limitations ### Intended Uses - Chinese extractive question answering - Reading comprehension tasks - Information extraction from Chinese text - Automated question answering systems ### Limitations - Performance is significantly better on unanswerable questions (76.65% accuracy) compared to answerable questions (36.41% accuracy) - Limited to extractive QA (cannot generate new answers) - May not perform well on domain-specific questions outside the training data - Designed for modern Chinese text, may not work well with classical Chinese or dialectal variations ## Training and Evaluation Data The model was trained on the Chinese SQuAD v2.0 dataset, which contains: Training Set: - Total examples: 90,027 - Answerable questions: 46,529 - Unanswerable questions: 43,498 Validation Set: - Total examples: 9,936 - Answerable questions: 3,991 - Unanswerable questions: 5,945 ## Training Procedure ### Training Hyperparameters - Learning rate: 3e-05 - Batch size: 12 - Evaluation batch size: 8 - Number of epochs: 5 - Optimizer: AdamW (β1=0.9, β2=0.999, ε=1e-08) - Learning rate 
scheduler: Linear - Maximum sequence length: 384 - Document stride: 128 - Training device: CUDA-enabled GPU ### Training Results Final evaluation metrics: - Overall Exact Match: 60.49% - Overall F1 Score: 60.54% - Answerable Questions: - Exact Match: 36.41% - F1 Score: 36.53% - Unanswerable Questions: - Exact Match: 76.65% - F1 Score: 76.65% ### Framework Versions - Transformers: 4.47.0.dev0 - PyTorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Usage ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer import torch # Load model and tokenizer model_name = "real-jiakai/bert-base-chinese-finetuned-squadv2" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) def get_answer(question, context, threshold=0.0): # Tokenize input with maximum sequence length of 384 inputs = tokenizer( question, context, return_tensors="pt", max_length=384, truncation=True ) with torch.no_grad(): outputs = model(**inputs) start_logits = outputs.start_logits[0] end_logits = outputs.end_logits[0] # Calculate null score (score for predicting no answer) null_score = start_logits[0].item() + end_logits[0].item() # Find the best non-null answer, excluding [CLS] position # Set logits at [CLS] position to negative infinity start_logits[0] = float('-inf') end_logits[0] = float('-inf') start_idx = torch.argmax(start_logits) end_idx = torch.argmax(end_logits) # Ensure end_idx is not less than start_idx if end_idx < start_idx: end_idx = start_idx answer_score = start_logits[start_idx].item() + end_logits[end_idx].item() # If null score is higher (beyond threshold), return "no answer" if null_score - answer_score > threshold: return "Question cannot be answered based on the given context." 
# Otherwise, return the extracted answer tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) answer = tokenizer.convert_tokens_to_string(tokens[start_idx:end_idx+1]) # Check if answer is empty or contains only special tokens if not answer.strip() or answer.strip() in ['[CLS]', '[SEP]']: return "Question cannot be answered based on the given context." return answer.strip() questions = [ "本届第十五届珠海航展的亮点和主要展示内容是什么?", "珠海杀人案发生地点?" ] context = '第十五届中国国际航空航天博览会(珠海航展)于2024年11月12日至17日在珠海国际航展中心举行。本届航展吸引了来自47个国家和地区的超过890家企业参展,展示了涵盖"陆、海、空、天、电、网"全领域的高精尖展品。其中,备受瞩目的中国空军"八一"飞行表演队和"红鹰"飞行表演队,以及俄罗斯"勇士"飞行表演队同台献技,为观众呈现了精彩的飞行表演。此外,本届航展还首次开辟了无人机、无人船演示区,展示了多款前沿科技产品。' for question in questions: answer = get_answer(question, context) print(f"问题: {question}") print(f"答案: {answer}") print("-" * 50) ``` ## Limitations and Bias The model shows significant performance disparity between answerable and unanswerable questions, which might indicate: 1. Dataset quality issues 2. Potential translation artifacts in the Chinese version of SQuAD 3. Imbalanced handling of answerable vs. unanswerable questions ## Ethics & Responsible AI Users should be aware that: - The model may reflect biases present in the training data - Performance varies significantly based on question type - Results should be validated for critical applications - The model should not be used as the sole decision-maker in critical systems
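For reference, the Exact Match figures above follow the SQuAD convention of string comparison after normalization. A simplified, hypothetical sketch for Chinese answers — whitespace-insensitive equality only; the official SQuAD evaluation script applies further normalization (e.g. punctuation stripping):

```python
def exact_match(prediction: str, gold: str) -> int:
    """Simplified SQuAD-style exact match: 1 if the strings are equal
    after removing all whitespace, else 0. (The official evaluation
    script applies additional normalization.)"""
    normalize = lambda s: "".join(s.split())
    return int(normalize(prediction) == normalize(gold))

print(exact_match("珠海国际航展中心", "珠海 国际 航展 中心"))  # 1
print(exact_match("珠海", "珠海国际航展中心"))  # 0
```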
null
Non_BioNLP
# bert-base-chinese-finetuned-squadv2 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the [Chinese SQuAD v2.0 dataset](https://huggingface.co/datasets/real-jiakai/chinese-squadv2). ## Model Description This model is designed for Chinese question answering tasks, specifically for extractive QA where the answer must be extracted from a given context paragraph. It can handle both answerable and unanswerable questions, following the SQuAD v2.0 format. Key features: - Based on BERT-base Chinese architecture - Supports both answerable and unanswerable questions - Trained on Chinese question-answer pairs - Optimized for extractive question answering ## Intended Uses & Limitations ### Intended Uses - Chinese extractive question answering - Reading comprehension tasks - Information extraction from Chinese text - Automated question answering systems ### Limitations - Performance is significantly better on unanswerable questions (76.65% accuracy) compared to answerable questions (36.41% accuracy) - Limited to extractive QA (cannot generate new answers) - May not perform well on domain-specific questions outside the training data - Designed for modern Chinese text, may not work well with classical Chinese or dialectal variations ## Training and Evaluation Data The model was trained on the Chinese SQuAD v2.0 dataset, which contains: Training Set: - Total examples: 90,027 - Answerable questions: 46,529 - Unanswerable questions: 43,498 Validation Set: - Total examples: 9,936 - Answerable questions: 3,991 - Unanswerable questions: 5,945 ## Training Procedure ### Training Hyperparameters - Learning rate: 3e-05 - Batch size: 12 - Evaluation batch size: 8 - Number of epochs: 5 - Optimizer: AdamW (β1=0.9, β2=0.999, ε=1e-08) - Learning rate scheduler: Linear - Maximum sequence length: 384 - Document stride: 128 - Training device: CUDA-enabled GPU ### Training Results Final evaluation metrics: - Overall Exact Match: 60.49% - Overall 
F1 Score: 60.54% - Answerable Questions: - Exact Match: 36.41% - F1 Score: 36.53% - Unanswerable Questions: - Exact Match: 76.65% - F1 Score: 76.65% ### Framework Versions - Transformers: 4.47.0.dev0 - PyTorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Usage ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer import torch # Load model and tokenizer model_name = "real-jiakai/bert-base-chinese-finetuned-squadv2" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) def get_answer(question, context, threshold=0.0): # Tokenize input with maximum sequence length of 384 inputs = tokenizer( question, context, return_tensors="pt", max_length=384, truncation=True ) with torch.no_grad(): outputs = model(**inputs) start_logits = outputs.start_logits[0] end_logits = outputs.end_logits[0] # Calculate null score (score for predicting no answer) null_score = start_logits[0].item() + end_logits[0].item() # Find the best non-null answer, excluding [CLS] position # Set logits at [CLS] position to negative infinity start_logits[0] = float('-inf') end_logits[0] = float('-inf') start_idx = torch.argmax(start_logits) end_idx = torch.argmax(end_logits) # Ensure end_idx is not less than start_idx if end_idx < start_idx: end_idx = start_idx answer_score = start_logits[start_idx].item() + end_logits[end_idx].item() # If null score is higher (beyond threshold), return "no answer" if null_score - answer_score > threshold: return "Question cannot be answered based on the given context." # Otherwise, return the extracted answer tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) answer = tokenizer.convert_tokens_to_string(tokens[start_idx:end_idx+1]) # Check if answer is empty or contains only special tokens if not answer.strip() or answer.strip() in ['[CLS]', '[SEP]']: return "Question cannot be answered based on the given context." 
return answer.strip() questions = [ "本届第十五届珠海航展的亮点和主要展示内容是什么?", "珠海杀人案发生地点?" ] context = '第十五届中国国际航空航天博览会(珠海航展)于2024年11月12日至17日在珠海国际航展中心举行。本届航展吸引了来自47个国家和地区的超过890家企业参展,展示了涵盖"陆、海、空、天、电、网"全领域的高精尖展品。其中,备受瞩目的中国空军"八一"飞行表演队和"红鹰"飞行表演队,以及俄罗斯"勇士"飞行表演队同台献技,为观众呈现了精彩的飞行表演。此外,本届航展还首次开辟了无人机、无人船演示区,展示了多款前沿科技产品。' for question in questions: answer = get_answer(question, context) print(f"问题: {question}") print(f"答案: {answer}") print("-" * 50) ``` ## Limitations and Bias The model shows significant performance disparity between answerable and unanswerable questions, which might indicate: 1. Dataset quality issues 2. Potential translation artifacts in the Chinese version of SQuAD 3. Imbalanced handling of answerable vs. unanswerable questions ## Ethics & Responsible AI Users should be aware that: - The model may reflect biases present in the training data - Performance varies significantly based on question type - Results should be validated for critical applications - The model should not be used as the sole decision-maker in critical systems
{"base_model": "bert-base-chinese", "datasets": ["real-jiakai/chinese-squadv2"], "library_name": "transformers", "tags": ["generated_from_trainer"], "model-index": [{"name": "chinese_squadv2", "results": []}]}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
46,132
nilcars/tensorflow_tensorflow_model
nilcars
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "region:us" ]
2024-06-24T10:07:10Z
2024-06-24T10:53:29+00:00
5
0
--- base_model: sentence-transformers/all-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: "How to get single UnidirectionalSequenceRnnOp in tflite model ### Issue Type\r\ \n\r\nSupport\r\n\r\n### Source\r\n\r\nsource\r\n\r\n### Tensorflow Version\r\n\ \r\n2.8\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\ \n\r\nUbuntu 18.04\r\n\r\nAccording to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc\ \ there is `kUnidirectionalSequenceRnnOp` as a single operation in tflite, could\ \ you give a python code example - how can I get this? For example - this code\ \ for LSTM gives tflite with one UnidirectionalSequenceLSTM Op.\r\n```py\r\n#\ \ NOTE tested with TF 2.8.0\r\nimport tensorflow as tf\r\nimport numpy as np\r\ \n\r\nfrom tensorflow import keras\r\n\r\n\r\nmodel = keras.Sequential()\r\nshape\ \ = (4, 4)\r\n\r\nmodel.add(keras.layers.InputLayer(input_shape=shape, batch_size=1))\r\ \nmodel.add(keras.layers.LSTM(2, input_shape=shape))\r\n```\r\n![image](https://user-images.githubusercontent.com/4616940/197647526-59c63de2-df61-46a1-bd61-75baa2688376.png)\r\ \nHow can I do same for UnidirectionalSequenceRnn?" - text: "[Feature Request] GELU activation with the Hexagon delegate **System information**\r\ \n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04\r\n\ - TensorFlow installed from (source or binary): binary\r\n- TensorFlow version\ \ (or github SHA if from source): 2.9.1\r\n\r\nI think I'd be able to implement\ \ this myself, but wanted to see if there was any interest in including this upstream.\ \ Most of this I'm writing out to make sure my own understanding is correct.\r\ \n\r\n### The problem\r\n\r\nI'd like to add support for the GELU op to the Hexagon\ \ Delegate. 
The motivation for this is mostly for use with [DistilBERT](https://huggingface.co/distilbert-base-multilingual-cased),\ \ which uses this activation function in its feedforward network layers. (Also\ \ used by BERT, GPT-3, RoBERTa, etc.)\r\n\r\nAdding this as a supported op for\ \ the Hexagon delegate would avoid creating a graph partition/transferring between\ \ DSP<-->CPU each time the GELU activation function is used.\r\n\r\n### How I'd\ \ implement this\r\n\r\nGELU in TF Lite is implemented as a lookup table when\ \ there are integer inputs ([here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/activations.cc#L120-L140)\ \ and [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/reference/gelu.h#L37-L53)).\r\ \n\r\nThis same approach could be used for the Hexagon delegate, as it has int8/uint8\ \ data types and also supports lookup tables.\r\n\r\nI'd plan to do this by adding\ \ a new op builder in the delegate, populating a lookup table for each node as\ \ is currently done for the CPU version of the op, and then using the [Gather_8](https://source.codeaurora.org/quic/hexagon_nn/nnlib/tree/hexagon/ops/src/op_gather.c)\ \ nnlib library function to do the lookup.\r\n\r\n### Possible workaround\r\n\ \r\nA workaround I thought of:\r\n\r\nI'm going to try removing the [pattern matching](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td#L1034-L1095)\ \ for approximate GELU in MLIR, and then using the approximate version of GELU\ \ (so that using tanh and not Erf). This will probably be slower, but should\ \ let me keep execution on the DSP.\r\n\r\nSince this will then be tanh, addition,\ \ multiplication ops instead of GELU they should all be runnable by the DSP." 
- text: "Data init API for TFLite Swift <details><summary>Click to expand!</summary>\ \ \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow\ \ Version\n\n2.8+\n\n### Custom Code\n\nNo\n\n### OS Platform and Distribution\n\ \n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n\ _No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\ \n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and\ \ memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\nThe current Swift\ \ API only has `init` functions from files on disk unlike the Java (Android) API\ \ which has a byte buffer initializer. It'd be convenient if the Swift API could\ \ initialize `Interpreters` from `Data`.\n```\n\n\n### Standalone code to reproduce\ \ the issue\n\n```shell\nNo code. This is a feature request\n```\n\n\n### Relevant\ \ log output\n\n_No response_</details>" - text: "tf.distribute.MirroredStrategy for asynchronous training <details><summary>Click\ \ to expand!</summary> \r\n \r\n ### Issue Type\r\n\r\nFeature Request\r\n\r\n\ ### Tensorflow Version\r\n\r\n2.8.1\r\n\r\n### Python version\r\n\r\n3.8.13\r\n\ \r\n### CUDA/cuDNN version\r\n\r\n11.8\r\n\r\n### Use Case\r\n\r\nI need to run\ \ multiple asynchronous copies of the same model on different slices of the dataset\ \ (e.g. with bootstrap sampling). There's no *good* way to do this in keras api\ \ that I'm aware of, although a couple of hacks exist. Would this use case be\ \ feasible with tf.distribute?\r\n\r\n### Feature Request\r\n\r\n`tf.distribute.MirroredStrategy`\ \ is a synchronous, data parallel strategy for distributed training across multiple\ \ devices on a single host worker.\r\n\r\nWould it be possible to modify this\ \ strategy to allow for asynchronous training of all model replicas, without computing\ \ the average gradient over all replicas to update weights? 
In this case each\ \ replica would need its own un-mirrored copy of model weights, and the update\ \ rule would depend only on the loss and gradients of each replica.\r\n\r\nThanks" - text: "Build TensorFlow Lite for iOS failed!!!! Please go to Stack Overflow for\ \ help and support:\r\n\r\nhttps://stackoverflow.com/questions/tagged/tensorflow\r\ \n\r\nIf you open a GitHub issue, here is our policy:\r\n\r\n1. `bazel build --config=ios_arm64\ \ -c opt --cxxopt=--std=c++17 \\\\\r\n //tensorflow/lite/ios:TensorFlowLiteC_framework\r\ \n❯ bazel build --incompatible_run_shell_command_string=false --verbose_failures\ \ --config=ios_arm64 -c opt //tensorflow/lite/ios:TensorFlowLiteCMetal_framework\r\ \nINFO: Options provided by the client:\r\n Inherited 'common' options: --isatty=1\ \ --terminal_columns=170\r\nINFO: Reading rc options for 'build' from /Users/thao/Desktop/tensorflow/.bazelrc:\r\ \n Inherited 'common' options: --experimental_repo_remote_exec\r\nINFO: Reading\ \ rc options for 'build' from /Users/thao/Desktop/tensorflow/.bazelrc:\r\n 'build'\ \ options: --define framework_shared_object=true --define tsl_protobuf_header_only=true\ \ --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone\ \ -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive\ \ --enable_platform_specific_config --define=with_xla_support=true --config=short_logs\ \ --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library\ \ --experimental_link_static_libraries_once=false\r\nINFO: Reading rc options\ \ for 'build' from /Users/thao/Desktop/tensorflow/.tf_configure.bazelrc:\r\n \ \ 'build' options: --action_env PYTHON_BIN_PATH=/Users/thao/miniforge3/bin/python\ \ --action_env PYTHON_LIB_PATH=/Users/thao/miniforge3/lib/python3.10/site-packages\ \ --python_path=/Users/thao/miniforge3/bin/python\r\nINFO: Reading rc options\ \ for 'build' from 
/Users/thao/Desktop/tensorflow/.bazelrc:\r\n 'build' options:\ \ --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils\r\ \nINFO: Found applicable config definition build:short_logs in file /Users/thao/Desktop/tensorflow/.bazelrc:\ \ --output_filter=DONT_MATCH_ANYTHING\r\nINFO: Found applicable config definition\ \ build:v2 in file /Users/thao/Desktop/tensorflow/.bazelrc: --define=tf_api_version=2\ \ --action_env=TF2_BEHAVIOR=1\r\nINFO: Found applicable config definition build:ios_arm64\ \ in file /Users/thao/Desktop/tensorflow/.bazelrc: --config=ios --cpu=ios_arm64\r\ \nINFO: Found applicable config definition build:ios in file 
/Users/thao/Desktop/tensorflow/.bazelrc:\ \ --apple_platform_type=ios --apple_bitcode=embedded --copt=-fembed-bitcode --copt=-Wno-c++11-narrowing\ \ --noenable_platform_specific_config --copt=-w --cxxopt=-std=c++17 --host_cxxopt=-std=c++17\ \ --define=with_xla_support=false\r\nINFO: Build option --cxxopt has changed,\ \ discarding analysis cache.\r\nERROR: /private/var/tmp/_bazel_thao/26d40dc75f2c247e7283b353a9ab184f/external/local_config_cc/BUILD:48:19:\ \ in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite\ \ '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'ios_arm64'\r\ \nERROR: /private/var/tmp/_bazel_thao/26d40dc75f2c247e7283b353a9ab184f/external/local_config_cc/BUILD:48:19:\ \ Analysis of target '@local_config_cc//:toolchain' failed\r\nERROR: Analysis\ \ of target '//tensorflow/lite/ios:TensorFlowLiteCMetal_framework' failed; build\ \ aborted: \r\nINFO: Elapsed time: 45.455s\r\nINFO: 0 processes.\r\nFAILED: Build\ \ did NOT complete successfully (66 packages loaded, 1118 targets configured)`\r\ \n\r\n**Here's why we have that policy**: TensorFlow developers respond to issues.\ \ We want to focus on work that benefits the whole community, e.g., fixing bugs\ \ and adding features. Support only helps individuals. GitHub also notifies thousands\ \ of people when issues are filed. We want them to see you communicating an interesting\ \ problem, rather than being redirected to Stack Overflow.\r\n\r\n------------------------\r\ \n\r\n### System information\r\nMacOS-M1Max : 13.3\r\nTensorflow:2.9.2\r\nPython:\ \ 3.10.0\r\n\r\n\r\n\r\n### Describe the problem\r\nDescribe the problem clearly\ \ here. Be sure to convey here why it's a bug in TensorFlow or a feature request.\r\ \n\r\n### Source code / logs\r\nInclude any logs or source code that would be\ \ helpful to diagnose the problem. If including tracebacks, please include the\ \ full traceback. Large logs and files should be attached. 
Try to provide a reproducible\ \ test case that is the bare minimum necessary to generate the problem.\r\n" inference: true --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 3 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | 
|:------|:---------|
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | question | <ul><li>"Parse output of `mobile_ssd_v2_float_coco.tflite` ### Issue type\n\nSupport\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nNo\n\n### Source\n\nsource\n\n### TensorFlow version\n\nv2.11.1\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\nLinux Ubuntu 20.04\n\n### Mobile device\n\nAndroid\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n6.2.0\n\n### GCC/compiler version\n\n12\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nI'm trying to use the model [mobile_ssd_v2_float_coco.tflite](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite) on a C++ application, I'm able to execute the inference and get the results.\r\n\r\nBased on the Netron app I see that its output is:\r\n![image](https://github.com/tensorflow/tensorflow/assets/92656601/7ee73cb9-52dc-47ef-a89f-d17843bd0f60)\r\n\r\nBut I couldn't find an example code showing how to parse this output.\r\n\r\nI tried to look into https://github.com/tensorflow/tensorflow/issues/29054 and https://github.com/tensorflow/tensorflow/issues/40298 but the output of the model is different from the one provided [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite).\r\n\r\nDo you have any example code available in Java, Python, or even better in C++ to parse this model output?\n\n### Standalone code to reproduce the issue\n\n```shell\nNo example code is available to parse the output of mobile_ssd_v2_float_coco.tflite.\n```\n\n\n### Relevant log output\n\n_No response_"</li><li>'Tensorflow Lite library is crashing in WASM library at 3rd inference <details><summary>Click to expand!</summary> \r\n \r\n ### Issue 
Type\r\n\r\nSupport\r\n\r\n### Have you reproduced the bug with TF nightly?\r\n\r\nYes\r\n\r\n### Source\r\n\r\nsource\r\n\r\n### Tensorflow Version\r\n\r\n2.7.0\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\n\r\nEmscripten, Ubuntu 18.04\r\n\r\n### Mobile device\r\n\r\n_No response_\r\n\r\n### Python version\r\n\r\n_No response_\r\n\r\n### Bazel version\r\n\r\n_No response_\r\n\r\n### GCC/Compiler version\r\n\r\n_No response_\r\n\r\n### CUDA/cuDNN version\r\n\r\n_No response_\r\n\r\n### GPU model and memory\r\n\r\n_No response_\r\n\r\n### Current Behaviour?\r\n\r\n```shell\r\nHello! I have C++ code that I want to deploy as WASM library and this code contains TFLite library. I have compiled TFLite library with XNNPack support using Emscripten toolchain quite easy, so no issue there. I have a leight-weight convolution+dense model that runs perfectly on Desktop, but I am starting having problems in the browser.\r\n\r\nIn 99% of cases I have an error on the third inference:\r\n\r\nUncaught RuntimeError: memory access out of bounds\r\n\r\nThrough some trivial debugging I have found out that the issue comes from _interpreter->Invoke() method. Does not matter if I put any input or not, I just need to call Invoke() three times and I have a crash.\r\n\r\nFirst thing first: I decided to add more memory to my WASM library by adding this line to CMake:\r\n\r\nSET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s TOTAL_STACK=134217728 -s TOTAL_MEMORY=268435456")\r\nSET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s TOTAL_STACK=134217728 -s TOTAL_MEMORY=268435456")\r\n\r\n128 MB and 256 MB in total for 1 MB model - I think this is more than enough. And on top of that, I am allowing Memory Growth. But unfortunately, I have exactly the same issue.\r\n\r\nI am beating on this problem for 2 weeks straight and at this stage I have no clue how to fix it. 
Also I have tried to set custom allocation using TfLiteCustomAllocation but in this case I have a crash on the very first inference. I guess I was not using it right, but unfortunately I couldn\'t find even one tutorial describing how to apply custom allocation in TFLite.\r\n\r\nI said that I have a crash in 99% of cases. There was one time when WASM library worked and inference worked as well. It happens just randomly once, and I couldn\'t reproduce it anymore.\r\n```\r\n\r\n\r\n### Standalone code to reproduce the issue\r\n\r\n```shell\r\nHere is the code that does TFLite inference\r\n\r\n\r\n#include <cstdlib>\r\n#include "tflite_model.h"\r\n#include <iostream>\r\n\r\n#include "tensorflow/lite/interpreter.h"\r\n#include "tensorflow/lite/util.h"\r\n\r\nnamespace tracker {\r\n\r\n#ifdef EMSCRIPTEN\r\n\tvoid TFLiteModel::init(std::stringstream& stream) {\r\n\r\n\t\tstd::string img_str = stream.str();\r\n\t\tstd::vector<char> img_model_data(img_str.size());\r\n\t\tstd::copy(img_str.begin(), img_str.end(), img_model_data.begin());\r\n\r\n\t\t_model = tflite::FlatBufferModel::BuildFromBuffer(img_str.data(), img_str.size());\r\n#else\r\n\tvoid TFLiteModel::init(const std::string& path) {\r\n\t\t_model = tflite::FlatBufferModel::BuildFromFile(path.c_str());\r\n\r\n#endif\r\n\r\n\t\ttflite::ops::builtin::BuiltinOpResolver resolver;\r\n\t\ttflite::InterpreterBuilder(*_model, resolver)(&_interpreter);\r\n\r\n\t\t_interpreter->AllocateTensors();\r\n\r\n\t\t/*for (int i = 0; i < _interpreter->tensors_size(); i++) {\r\n\t\t\tTfLiteTensor* tensor = _interpreter->tensor(i);\r\n\r\n\t\t\tif (tensor->allocation_type == kTfLiteArenaRw || tensor->allocation_type == kTfLiteArenaRwPersistent) {\r\n\r\n\t\t\t\tint aligned_bytes = tensor->bytes + (tflite::kDefaultTensorAlignment - tensor->bytes % tflite::kDefaultTensorAlignment) % tflite::kDefaultTensorAlignment;\r\n\r\n\t\t\t\tTfLiteCustomAllocation customAlloc;\r\n\t\t\t\tint result = posix_memalign(&customAlloc.data, 
tflite::kDefaultTensorAlignment, tensor->bytes);\r\n\t\t\t\tif (result != 0 || customAlloc.data == NULL) {\r\n\t\t\t\t\tstd::cout << "posix_memalign does not work!\\\\n";\r\n\t\t\t\t}\r\n\r\n\t\t\t\tTfLiteStatus st = _interpreter->SetCustomAllocationForTensor(i, customAlloc);\r\n\t\t\t\tstd::cout << "status = " << st << std::endl;\r\n\t\t\t\tif (tensor->bytes % tflite::kDefaultTensorAlignment != 0) {\r\n\t\t\t\t\tstd::cout << "bad! i " << i << ", size " << tensor->bytes << std::endl;\r\n\t\t\t\t}\r\n\t\t\t\t_allocations.push_back(customAlloc);\r\n\t\t\t}\r\n\t\t}\r\n\t\texit(0);*/\r\n\t}\r\n\r\n\tvoid TFLiteModel::forward(const cv::Mat& img_input, const std::vector<float>& lms_input) {\r\n\r\n\t\tfloat* model_in = _interpreter->typed_input_tensor<float>(0);\r\n\t\tstd::memcpy(model_in, img_input.data, img_input.total() * img_input.elemSize());\r\n\r\n\t\tfloat* lms_in = _interpreter->typed_input_tensor<float>(1);\r\n\t\tstd::memcpy(lms_in, lms_input.data(), sizeof(float) * lms_input.size());\r\n\t\t\r\n\t\t_interpreter->Invoke();\r\n\t}\r\n\r\n\tfloat* TFLiteModel::out() {\r\n\t\treturn _interpreter->typed_output_tensor<float>(0);\r\n\t}\r\n\r\n\tstd::vector<int> TFLiteModel::getOutputShape() const {\r\n\t\tTfLiteTensor* outtensor = _interpreter->output_tensor(0);\r\n\t\tTfLiteIntArray* dims = outtensor->dims;\r\n\r\n\t\tstd::vector<int> sh;\r\n\t\tfor (int i = 0; i < dims->size; i++) {\r\n\t\t\tsh.push_back(dims->data[i]);\r\n\t\t}\r\n\r\n\t\treturn sh;\r\n\t}\r\n}\r\n```\r\n\r\n\r\n### Relevant log output\r\n\r\n_No response_</details>'</li><li>'error: \'tf.Conv2D\' op is neither a custom op nor a flex op ### 1. System information\r\n\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04\r\n- TensorFlow installation (pip package or built from source): pip package\r\n- TensorFlow library (version, if pip package or github SHA, if built from source): v2.10\r\n\r\n### 2. 
Code\r\nCode for conversion\r\n```\r\nconverter = tf.lite.TFLiteConverter.from_saved_model(f\'savedmodel/decoder\')\r\ntflite_model = converter.convert()\r\n\r\n# save the model\r\nwith open(f\'{name}.tflite\', \'wb\') as f:\r\n f.write(tflite_model)\r\n```\r\nCode for the model\r\n```\r\nlatent = keras.layers.Input((n_h, n_w, 4))\r\ndecoder = Decoder()\r\ndecoder = keras.models.Model(latent, decoder(latent))\r\n```\r\n```\r\nclass Decoder(keras.Sequential):\r\n def __init__(self):\r\n super().__init__(\r\n [\r\n keras.layers.Lambda(lambda x: 1 / 0.18215 * x),\r\n PaddedConv2D(4, 1),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 512),\r\n AttentionBlock(512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 256),\r\n ResnetBlock(256, 256),\r\n ResnetBlock(256, 256),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(256, 3, padding=1),\r\n ResnetBlock(256, 128),\r\n ResnetBlock(128, 128),\r\n ResnetBlock(128, 128),\r\n tfa.layers.GroupNormalization(epsilon=1e-5),\r\n keras.layers.Activation("swish"),\r\n PaddedConv2D(3, 3, padding=1),\r\n ]\r\n )\r\n```\r\n\r\n### 3. Failure after conversion\r\nconversion fails\r\n\r\n\r\n### 5. (optional) Any other info / logs\r\n[error.log](https://github.com/tensorflow/tensorflow/files/10302790/error.log)\r\n```\r\nSome ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. 
See instructions: https://www.tensorflow.org/lite/guide/ops_select \r\nTF Select ops: Conv2D\r\nDetails:\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<1x1x512x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x128x128xf32>) -> (tensor<?x?x?x128xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x128x3xf32>) -> (tensor<?x?x?x3xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x256x128xf32>) -> (tensor<?x?x?x128xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x256x256xf32>) -> (tensor<?x?x?x256xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x512x256xf32>) -> (tensor<?x?x?x256xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x512x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n```\r\nAccording to the error message, I suspect that it can not recognize the input shape. 
But as you can see on the above code, input is specified for the functional API for `decoder` model. \r\n(FYI, The inference code is called with `predict_on_batch` method. I found out other model with `predict_on_batch` is converted successfully, but that model doesn\'t contain `conv2d` block inside. Can using `predict_on_batch` together with `conv2d` be a problem?)\r\n\r\n**I\'m sure `conv2d` is on the allowlist for TFLite operators. Any suggestions for this problem? Thank you.**'</li></ul> | | feature | <ul><li>'tf.keras.optimizers.experimental.AdamW only support constant weight_decay <details><summary>Click to expand!</summary> \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow Version\n\n2.8\n\n### Custom Code\n\nNo\n\n### OS Platform and Distribution\n\n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\ntf.keras.optimizers.experimental.AdamW only supports constant weight decay. But usually we want the weight_decay value to decay with learning rate schedule.\n```\n\n\n### Standalone code to reproduce the issue\n\n```shell\nThe legacy tfa.optimizers.AdamW supports callable weight_decay, which is much better.\n```\n\n\n### Relevant log output\n\n_No response_</details>'</li><li>'RFE tensorflow-aarch64==2.6.0 build ? **System information**\r\n TensorFlow version (you are using): 2.6.0\r\n- Are you willing to contribute it (Yes/No): Yes\r\n\r\n**Describe the feature and the current behavior/state.**\r\n\r\nBrainchip Akida AKD1000 SNN neuromorphic MetaTF SDK support 2.6.0 on x86_64. They claim support for aarch64, but when creating a virtualenv it fails on aarch64 due to lacking tensorflow-aarc64==2.6.0 build.\r\n\r\n**Will this change the current api? 
How?**\r\n\r\nNA\r\n\r\n**Who will benefit with this feature?**\r\n\r\nCustomer of Brainchip Akida who run on Arm64 platforms.\r\n\r\n**Any Other info.**\r\n\r\nhttps://doc.brainchipinc.com/installation.html\r\n\r\n\r\n'</li><li>"How to calculate 45 degree standing position of body from camera in swift (Pose estimation) <details><summary>Click to expand!</summary> \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow Version\n\npod 'TensorFlowLiteSwift', '~> 0.0.1-nightly', :subspecs => ['CoreML', 'Metal']\n\n### Custom Code\n\nYes\n\n### OS Platform and Distribution\n\n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\nHow to calculate 45 degree standing position of body from camera in swift.\n```\n\n\n### Standalone code to reproduce the issue\n\n```shell\nHow to calculate 45 degree standing position of body from camera in swift using the body keypoints. 
(Pose estimation)\n```\n\n\n### Relevant log output\n\n_No response_</details>"</li></ul> | | bug | <ul><li>'Abort when running tensorflow.python.ops.gen_array_ops.depth_to_space ### Issue type\n\nBug\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nNo\n\n### Source\n\nbinary\n\n### TensorFlow version\n\n2.11.0\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\n22.04\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n3.9\n\n### Bazel version\n\n_No response_\n\n### GCC/compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\nnvidia-cudnn-cu11==8.6.0.163, cudatoolkit=11.8.0\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nDue to very large integer argument\n\n### Standalone code to reproduce the issue\n\n```shell\nimport tensorflow as tf\r\nimport os\r\nimport numpy as np\r\nfrom tensorflow.python.ops import gen_array_ops\r\ntry:\r\n arg_0_tensor = tf.random.uniform([3, 2, 3, 4], dtype=tf.float32)\r\n arg_0 = tf.identity(arg_0_tensor)\r\n arg_1 = 2147483647\r\n arg_2 = "NHWC"\r\n out = gen_array_ops.depth_to_space(arg_0,arg_1,arg_2,)\r\nexcept Exception as e:\r\n print("Error:"+str(e))\r\n\r\n```\n```\n\n\n### Relevant log output\n\n```shell\n023-08-13 00:23:53.644564: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n2023-08-13 00:23:54.491071: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.510564: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.510736: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511051: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-08-13 00:23:54.511595: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511717: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511830: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572398: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] 
successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572634: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572791: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 153 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5\r\n2023-08-13 00:23:54.594062: I tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:735] failed to allocate 153.88M (161349632 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory\r\n2023-08-13 00:23:54.594484: I tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:735] failed to allocate 138.49M (145214720 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory\r\n2023-08-13 00:23:54.600623: F tensorflow/core/framework/tensor_shape.cc:201] Non-OK-status: InitDims(dim_sizes) status: INVALID_ARGUMENT: Expected a non-negative size, got -2\r\nAborted\r\n\r\n```\n```\n'</li><li>"float8 (both e4m3fn and e5m2) missing from numbertype ### Issue Type\r\n\r\nBug\r\n\r\n### Have you reproduced the bug with TF nightly?\r\n\r\nNo\r\n\r\n### Source\r\n\r\nbinary\r\n\r\n### Tensorflow Version\r\n\r\n2.12.0\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\n\r\nmacOS-13.2.1-arm64-arm-64bit\r\n\r\n### Mobile device\r\n\r\n_No response_\r\n\r\n### Python version\r\n\r\n3.9.6\r\n\r\n### Bazel version\r\n\r\n_No response_\r\n\r\n### GCC/Compiler version\r\n\r\n_No 
response_\r\n\r\n### CUDA/cuDNN version\r\n\r\n_No response_\r\n\r\n### GPU model and memory\r\n\r\n_No response_\r\n\r\n### Current Behaviour?\r\n\r\nFP8 datatypes are missing from `kNumberTypes` in `tensorflow/core/framework/types.h`, and also missing from `TF_CALL_FLOAT_TYPES(m)` in `tensorflow/core/framework/register_types.h`. This causes simple ops (like slice, transpose, split, etc.) to raise NotFoundError.\r\n\r\n### Standalone code to reproduce the issue\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom tensorflow.python.framework import dtypes\r\n\r\na = tf.constant([[1.2345678, 2.3456789, 3.4567891], [4.5678912, 5.6789123, 6.7891234]], dtype=dtypes.float16)\r\nprint(a)\r\n\r\na_fp8 = tf.cast(a, dtypes.float8_e4m3fn)\r\nprint(a_fp8)\r\n\r\nb = a_fp8[1:2] # tensorflow.python.framework.errors_impl.NotFoundError\r\nb = tf.transpose(a_fp8, [1, 0]) # tensorflow.python.framework.errors_impl.NotFoundError\r\n```\r\n\r\n\r\n### Relevant log output\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node StridedSlice}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT8_E4M3FN, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=0]\r\nAll kernels registered for op StridedSlice:\r\n device='XLA_CPU_JIT'; Index in [DT_INT32, DT_INT16, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='CPU'; T in [DT_UINT64]\r\n device='CPU'; T in [DT_INT64]\r\n device='CPU'; T in [DT_UINT32]\r\n device='CPU'; T in [DT_UINT16]\r\n device='CPU'; T in [DT_INT16]\r\n device='CPU'; T in [DT_UINT8]\r\n device='CPU'; T in [DT_INT8]\r\n device='CPU'; T in [DT_INT32]\r\n device='CPU'; T in [DT_HALF]\r\n device='CPU'; T in [DT_BFLOAT16]\r\n device='CPU'; T in [DT_FLOAT]\r\n device='CPU'; T in [DT_DOUBLE]\r\n device='CPU'; T in [DT_COMPLEX64]\r\n device='CPU'; T in [DT_COMPLEX128]\r\n device='CPU'; T in 
[DT_BOOL]\r\n device='CPU'; T in [DT_STRING]\r\n device='CPU'; T in [DT_RESOURCE]\r\n device='CPU'; T in [DT_VARIANT]\r\n device='CPU'; T in [DT_QINT8]\r\n device='CPU'; T in [DT_QUINT8]\r\n device='CPU'; T in [DT_QINT32]\r\n device='DEFAULT'; T in [DT_INT32]\r\n [Op:StridedSlice] name: strided_slice/\r\n```\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node Transpose}} = Transpose[T=DT_FLOAT8_E4M3FN, Tperm=DT_INT32]\r\nAll kernels registered for op Transpose:\r\n device='XLA_CPU_JIT'; Tperm in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='CPU'; T in [DT_UINT64]\r\n device='CPU'; T in [DT_INT64]\r\n device='CPU'; T in [DT_UINT32]\r\n device='CPU'; T in [DT_UINT16]\r\n device='CPU'; T in [DT_INT16]\r\n device='CPU'; T in [DT_UINT8]\r\n device='CPU'; T in [DT_INT8]\r\n device='CPU'; T in [DT_INT32]\r\n device='CPU'; T in [DT_HALF]\r\n device='CPU'; T in [DT_BFLOAT16]\r\n device='CPU'; T in [DT_FLOAT]\r\n device='CPU'; T in [DT_DOUBLE]\r\n device='CPU'; T in [DT_COMPLEX64]\r\n device='CPU'; T in [DT_COMPLEX128]\r\n device='CPU'; T in [DT_BOOL]\r\n device='CPU'; T in [DT_STRING]\r\n device='CPU'; T in [DT_RESOURCE]\r\n device='CPU'; T in [DT_VARIANT]\r\n [Op:Transpose]\r\n```"</li><li>"My customized OP gives incorrect outputs on GPUs since `tf-nightly 2.13.0.dev20230413` ### Issue type\n\nBug\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nYes\n\n### Source\n\nbinary\n\n### TensorFlow version\n\n2.13\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\nfedora 36\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n3.11.4\n\n### Bazel version\n\n_No response_\n\n### GCC/compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nI have a complex program based on 
TensorFlow with several customized OPs. These OPs were created following https://www.tensorflow.org/guide/create_op. Yesterday TF 2.13.0 was released, but after I upgraded to 2.13.0, I found that one of my customized OP gives incorrect results on GPUs and still has the correct outputs on CPUs.\r\n\r\nThen I tested many `tf-nightly` versions and found that `tf-nightly 2.13.0.dev20230412` works but `tf-nightly 2.13.0.dev20230413` fails. So the situation is shown in the following table:\r\n| version | CPU | GPU |\r\n| -------- | --------- | ----------- |\r\n| tensorflow 2.12.0 | Correct | Correct |\r\n| tensorflow 2.13.0 | Correct | Incorrect |\r\n| tf-nightly 2.13.0.dev20230412 | Correct | Correct |\r\n| tf-nightly 2.13.0.dev20230413 | Correct | Incorrect |\r\n\r\nI'd like to know what changed between April 12th and 13th related to the customized OPs. This can be a breaking change to downstream applications or an internal bug. Thanks!\r\n\r\nHere is a quick link for commits between April 12th and 13th:\r\nhttps://github.com/tensorflow/tensorflow/commits/master?before=525da8a93eca846e32e5c41eddc0496b25a2ef5b+770\r\n\n\n### Standalone code to reproduce the issue\n\n```shell\nIndeed, the reason is still unclear to me, so it is hard to create a minimal example.\r\n\r\nThe code of our customized OPs is https://github.com/deepmodeling/deepmd-kit/blob/37fd8d193362f91c925cf7c2f3a58b97dc921b27/source/op/prod_force_multi_device.cc#L49-L166\n```\n\n\n### Relevant log output\n\n_No response_"</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
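The training examples tabulated above show that raw GitHub issue bodies often arrive wrapped in `<details>`/`<summary>` blocks, with markdown headers and irregular whitespace. This card does not specify any required preprocessing — the helper below is only an illustrative sketch of how such text could be normalized before being passed to the model:

```python
import re

def clean_issue(text: str) -> str:
    """Strip <details>/<summary> wrappers and collapse whitespace in a raw
    GitHub issue body. Illustrative only: the training examples above suggest
    the model saw raw bodies, so this step is optional."""
    # Drop the HTML wrapper tags GitHub adds around collapsed sections.
    text = re.sub(r"</?(details|summary)[^>]*>", " ", text)
    # Drop the boilerplate toggle label that comes with the wrapper.
    text = text.replace("Click to expand!", " ")
    # Collapse runs of whitespace left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()

raw = "<details><summary>Click to expand!</summary> ### Issue Type  Bug </details>"
print(clean_issue(raw))  # ### Issue Type Bug
```

Whether or not such cleaning is applied, loading the model and predicting follow the standard SetFit pattern: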
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("Data init API for TFLite Swift <details><summary>Click to expand!</summary> ### Issue Type Feature Request ### Source source ### Tensorflow Version 2.8+ ### Custom Code No ### OS Platform and Distribution _No response_ ### Mobile device _No response_ ### Python version _No response_ ### Bazel version _No response_ ### GCC/Compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current Behaviour? ```shell The current Swift API only has `init` functions from files on disk unlike the Java (Android) API which has a byte buffer initializer. It'd be convenient if the Swift API could initialize `Interpreters` from `Data`. ``` ### Standalone code to reproduce the issue ```shell No code. This is a feature request ``` ### Relevant log output _No response_</details>")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 5 | 353.7433 | 6124 | | Label | Training Sample Count | |:---------|:----------------------| | bug | 200 | | feature | 200 | | question | 200 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0007 | 1 | 0.1719 | - | | 0.0067 | 10 | 0.2869 | - | | 0.0133 | 20 | 0.2513 | - | | 0.02 | 30 | 0.1871 | - | | 0.0267 | 40 | 0.2065 | - | | 0.0333 | 50 | 0.2302 | - | | 0.04 | 60 | 0.1645 | - | | 0.0467 | 70 | 0.1887 | - | | 0.0533 | 80 | 0.1376 | - | | 0.06 | 90 | 0.1171 | - | | 0.0667 | 100 | 0.1303 | - | | 0.0733 | 110 | 0.121 | - | | 0.08 | 120 | 0.1126 | - | | 0.0867 | 130 | 0.1247 | - | | 0.0933 | 140 | 0.1764 | - | | 0.1 | 150 | 0.0401 | - | | 0.1067 | 160 | 0.1571 | - | | 0.1133 | 170 | 0.0186 | - | | 0.12 | 180 | 0.0501 | - | | 0.1267 | 190 | 0.1003 | - | | 0.1333 | 200 | 0.0152 | - | | 0.14 | 210 | 0.0784 | - | | 0.1467 | 220 | 0.1423 | - | | 0.1533 | 230 | 0.1313 | - | | 0.16 | 240 | 0.0799 | - | | 0.1667 | 250 | 0.0542 | - | | 0.1733 | 260 | 0.0426 | - | | 0.18 | 270 | 0.047 | - | | 0.1867 | 280 | 0.0062 | - | | 0.1933 | 290 | 0.0085 | - | | 0.2 | 300 | 0.0625 | - | | 0.2067 | 310 | 0.095 | - | | 0.2133 | 320 | 0.0262 | - | | 0.22 | 330 | 0.0029 | - | | 0.2267 | 340 | 0.0097 | - | | 0.2333 | 350 | 0.063 | - | | 0.24 | 360 | 0.0059 | - | | 0.2467 | 370 | 0.0016 | - | | 0.2533 | 380 | 0.0025 | - | | 
0.26 | 390 | 0.0033 | - | | 0.2667 | 400 | 0.0006 | - | | 0.2733 | 410 | 0.0032 | - | | 0.28 | 420 | 0.0045 | - | | 0.2867 | 430 | 0.0013 | - | | 0.2933 | 440 | 0.0011 | - | | 0.3 | 450 | 0.001 | - | | 0.3067 | 460 | 0.0044 | - | | 0.3133 | 470 | 0.001 | - | | 0.32 | 480 | 0.0009 | - | | 0.3267 | 490 | 0.0004 | - | | 0.3333 | 500 | 0.0006 | - | | 0.34 | 510 | 0.001 | - | | 0.3467 | 520 | 0.0003 | - | | 0.3533 | 530 | 0.0008 | - | | 0.36 | 540 | 0.0003 | - | | 0.3667 | 550 | 0.0023 | - | | 0.3733 | 560 | 0.0336 | - | | 0.38 | 570 | 0.0004 | - | | 0.3867 | 580 | 0.0003 | - | | 0.3933 | 590 | 0.0006 | - | | 0.4 | 600 | 0.0008 | - | | 0.4067 | 610 | 0.0011 | - | | 0.4133 | 620 | 0.0002 | - | | 0.42 | 630 | 0.0004 | - | | 0.4267 | 640 | 0.0005 | - | | 0.4333 | 650 | 0.0601 | - | | 0.44 | 660 | 0.0003 | - | | 0.4467 | 670 | 0.0003 | - | | 0.4533 | 680 | 0.0006 | - | | 0.46 | 690 | 0.0005 | - | | 0.4667 | 700 | 0.0003 | - | | 0.4733 | 710 | 0.0006 | - | | 0.48 | 720 | 0.0001 | - | | 0.4867 | 730 | 0.0002 | - | | 0.4933 | 740 | 0.0002 | - | | 0.5 | 750 | 0.0002 | - | | 0.5067 | 760 | 0.0002 | - | | 0.5133 | 770 | 0.0016 | - | | 0.52 | 780 | 0.0001 | - | | 0.5267 | 790 | 0.0005 | - | | 0.5333 | 800 | 0.0004 | - | | 0.54 | 810 | 0.0039 | - | | 0.5467 | 820 | 0.0031 | - | | 0.5533 | 830 | 0.0008 | - | | 0.56 | 840 | 0.0003 | - | | 0.5667 | 850 | 0.0002 | - | | 0.5733 | 860 | 0.0002 | - | | 0.58 | 870 | 0.0002 | - | | 0.5867 | 880 | 0.0001 | - | | 0.5933 | 890 | 0.0004 | - | | 0.6 | 900 | 0.0002 | - | | 0.6067 | 910 | 0.0008 | - | | 0.6133 | 920 | 0.0005 | - | | 0.62 | 930 | 0.0005 | - | | 0.6267 | 940 | 0.0002 | - | | 0.6333 | 950 | 0.0001 | - | | 0.64 | 960 | 0.0002 | - | | 0.6467 | 970 | 0.0007 | - | | 0.6533 | 980 | 0.0002 | - | | 0.66 | 990 | 0.0002 | - | | 0.6667 | 1000 | 0.0002 | - | | 0.6733 | 1010 | 0.0002 | - | | 0.68 | 1020 | 0.0002 | - | | 0.6867 | 1030 | 0.0002 | - | | 0.6933 | 1040 | 0.0004 | - | | 0.7 | 1050 | 0.0076 | - | | 0.7067 | 1060 | 0.0002 | - | | 0.7133 
| 1070 | 0.0002 | - | | 0.72 | 1080 | 0.0001 | - | | 0.7267 | 1090 | 0.0002 | - | | 0.7333 | 1100 | 0.0001 | - | | 0.74 | 1110 | 0.0365 | - | | 0.7467 | 1120 | 0.0002 | - | | 0.7533 | 1130 | 0.0002 | - | | 0.76 | 1140 | 0.0003 | - | | 0.7667 | 1150 | 0.0002 | - | | 0.7733 | 1160 | 0.0002 | - | | 0.78 | 1170 | 0.0004 | - | | 0.7867 | 1180 | 0.0001 | - | | 0.7933 | 1190 | 0.0001 | - | | 0.8 | 1200 | 0.0001 | - | | 0.8067 | 1210 | 0.0001 | - | | 0.8133 | 1220 | 0.0002 | - | | 0.82 | 1230 | 0.0002 | - | | 0.8267 | 1240 | 0.0001 | - | | 0.8333 | 1250 | 0.0001 | - | | 0.84 | 1260 | 0.0002 | - | | 0.8467 | 1270 | 0.0002 | - | | 0.8533 | 1280 | 0.0 | - | | 0.86 | 1290 | 0.0002 | - | | 0.8667 | 1300 | 0.032 | - | | 0.8733 | 1310 | 0.0001 | - | | 0.88 | 1320 | 0.0001 | - | | 0.8867 | 1330 | 0.0001 | - | | 0.8933 | 1340 | 0.0003 | - | | 0.9 | 1350 | 0.0001 | - | | 0.9067 | 1360 | 0.0001 | - | | 0.9133 | 1370 | 0.0001 | - | | 0.92 | 1380 | 0.0001 | - | | 0.9267 | 1390 | 0.0001 | - | | 0.9333 | 1400 | 0.0001 | - | | 0.94 | 1410 | 0.0001 | - | | 0.9467 | 1420 | 0.0001 | - | | 0.9533 | 1430 | 0.031 | - | | 0.96 | 1440 | 0.0001 | - | | 0.9667 | 1450 | 0.0003 | - | | 0.9733 | 1460 | 0.0001 | - | | 0.98 | 1470 | 0.0001 | - | | 0.9867 | 1480 | 0.0001 | - | | 0.9933 | 1490 | 0.0001 | - | | 1.0 | 1500 | 0.0001 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.39.0 - PyTorch: 2.3.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year 
= {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
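SetFit's two-stage recipe (contrastive fine-tuning on sentence pairs, then a classification head) relies on turning a handful of labeled texts into similarity-labeled pairs. The pair construction can be sketched in plain Python. This is a simplified illustration of the idea only, not the `setfit` library's actual implementation, and the toy texts and labels are invented:

```python
from itertools import combinations

def contrastive_pairs(examples):
    """Build (text_a, text_b, target) pairs from few-shot labeled texts.

    Same-label pairs get target 1.0, different-label pairs 0.0 -- the
    targets that a cosine-similarity loss is then trained against.
    """
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

few_shot = [
    ("app crashes on launch", "bug"),
    ("segfault in the converter", "bug"),
    ("please add a data init API", "feature"),
    ("how do I parse the output?", "question"),
]
pairs = contrastive_pairs(few_shot)
print(len(pairs))                               # 6 pairs from 4 examples
print(sum(1 for _, _, t in pairs if t == 1.0))  # 1 positive (bug/bug) pair
```

In the actual library, the `sampling_strategy` hyperparameter listed above controls how positive and negative pairs are balanced, rather than enumerating every combination as done here.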
# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:---------|:-----------------------------------------------------------------------------------------------------------|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | question | <ul><li>"Parse output of `mobile_ssd_v2_float_coco.tflite` ### Issue type\n\nSupport\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nNo\n\n### Source\n\nsource\n\n### TensorFlow version\n\nv2.11.1\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\nLinux Ubuntu 20.04\n\n### Mobile device\n\nAndroid\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n6.2.0\n\n### GCC/compiler version\n\n12\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nI'm trying to use the model [mobile_ssd_v2_float_coco.tflite](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite) on a C++ application, I'm able to execute the inference and get the results.\r\n\r\nBased on the Netron app I see that its output is:\r\n![image](https://github.com/tensorflow/tensorflow/assets/92656601/7ee73cb9-52dc-47ef-a89f-d17843bd0f60)\r\n\r\nBut I couldn't find an example code showing how to parse this output.\r\n\r\nI tried to look into https://github.com/tensorflow/tensorflow/issues/29054 and https://github.com/tensorflow/tensorflow/issues/40298 but the output of the model is different from the one provided [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/gpu/mobile_ssd_v2_float_coco.tflite).\r\n\r\nDo you have any example code available in Java, Python, or even better in C++ to parse this model output?\n\n### Standalone code to reproduce the issue\n\n```shell\nNo example code is available to parse the output of mobile_ssd_v2_float_coco.tflite.\n```\n\n\n### Relevant log output\n\n_No response_"</li><li>'Tensorflow Lite library is crashing in WASM library at 3rd inference <details><summary>Click to expand!</summary> \r\n \r\n ### Issue 
Type\r\n\r\nSupport\r\n\r\n### Have you reproduced the bug with TF nightly?\r\n\r\nYes\r\n\r\n### Source\r\n\r\nsource\r\n\r\n### Tensorflow Version\r\n\r\n2.7.0\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\n\r\nEmscripten, Ubuntu 18.04\r\n\r\n### Mobile device\r\n\r\n_No response_\r\n\r\n### Python version\r\n\r\n_No response_\r\n\r\n### Bazel version\r\n\r\n_No response_\r\n\r\n### GCC/Compiler version\r\n\r\n_No response_\r\n\r\n### CUDA/cuDNN version\r\n\r\n_No response_\r\n\r\n### GPU model and memory\r\n\r\n_No response_\r\n\r\n### Current Behaviour?\r\n\r\n```shell\r\nHello! I have C++ code that I want to deploy as WASM library and this code contains TFLite library. I have compiled TFLite library with XNNPack support using Emscripten toolchain quite easy, so no issue there. I have a leight-weight convolution+dense model that runs perfectly on Desktop, but I am starting having problems in the browser.\r\n\r\nIn 99% of cases I have an error on the third inference:\r\n\r\nUncaught RuntimeError: memory access out of bounds\r\n\r\nThrough some trivial debugging I have found out that the issue comes from _interpreter->Invoke() method. Does not matter if I put any input or not, I just need to call Invoke() three times and I have a crash.\r\n\r\nFirst thing first: I decided to add more memory to my WASM library by adding this line to CMake:\r\n\r\nSET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s TOTAL_STACK=134217728 -s TOTAL_MEMORY=268435456")\r\nSET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s TOTAL_STACK=134217728 -s TOTAL_MEMORY=268435456")\r\n\r\n128 MB and 256 MB in total for 1 MB model - I think this is more than enough. And on top of that, I am allowing Memory Growth. But unfortunately, I have exactly the same issue.\r\n\r\nI am beating on this problem for 2 weeks straight and at this stage I have no clue how to fix it. 
Also I have tried to set custom allocation using TfLiteCustomAllocation but in this case I have a crash on the very first inference. I guess I was not using it right, but unfortunately I couldn\'t find even one tutorial describing how to apply custom allocation in TFLite.\r\n\r\nI said that I have a crash in 99% of cases. There was one time when WASM library worked and inference worked as well. It happens just randomly once, and I couldn\'t reproduce it anymore.\r\n```\r\n\r\n\r\n### Standalone code to reproduce the issue\r\n\r\n```shell\r\nHere is the code that does TFLite inference\r\n\r\n\r\n#include <cstdlib>\r\n#include "tflite_model.h"\r\n#include <iostream>\r\n\r\n#include "tensorflow/lite/interpreter.h"\r\n#include "tensorflow/lite/util.h"\r\n\r\nnamespace tracker {\r\n\r\n#ifdef EMSCRIPTEN\r\n\tvoid TFLiteModel::init(std::stringstream& stream) {\r\n\r\n\t\tstd::string img_str = stream.str();\r\n\t\tstd::vector<char> img_model_data(img_str.size());\r\n\t\tstd::copy(img_str.begin(), img_str.end(), img_model_data.begin());\r\n\r\n\t\t_model = tflite::FlatBufferModel::BuildFromBuffer(img_str.data(), img_str.size());\r\n#else\r\n\tvoid TFLiteModel::init(const std::string& path) {\r\n\t\t_model = tflite::FlatBufferModel::BuildFromFile(path.c_str());\r\n\r\n#endif\r\n\r\n\t\ttflite::ops::builtin::BuiltinOpResolver resolver;\r\n\t\ttflite::InterpreterBuilder(*_model, resolver)(&_interpreter);\r\n\r\n\t\t_interpreter->AllocateTensors();\r\n\r\n\t\t/*for (int i = 0; i < _interpreter->tensors_size(); i++) {\r\n\t\t\tTfLiteTensor* tensor = _interpreter->tensor(i);\r\n\r\n\t\t\tif (tensor->allocation_type == kTfLiteArenaRw || tensor->allocation_type == kTfLiteArenaRwPersistent) {\r\n\r\n\t\t\t\tint aligned_bytes = tensor->bytes + (tflite::kDefaultTensorAlignment - tensor->bytes % tflite::kDefaultTensorAlignment) % tflite::kDefaultTensorAlignment;\r\n\r\n\t\t\t\tTfLiteCustomAllocation customAlloc;\r\n\t\t\t\tint result = posix_memalign(&customAlloc.data, 
tflite::kDefaultTensorAlignment, tensor->bytes);\r\n\t\t\t\tif (result != 0 || customAlloc.data == NULL) {\r\n\t\t\t\t\tstd::cout << "posix_memalign does not work!\\\\n";\r\n\t\t\t\t}\r\n\r\n\t\t\t\tTfLiteStatus st = _interpreter->SetCustomAllocationForTensor(i, customAlloc);\r\n\t\t\t\tstd::cout << "status = " << st << std::endl;\r\n\t\t\t\tif (tensor->bytes % tflite::kDefaultTensorAlignment != 0) {\r\n\t\t\t\t\tstd::cout << "bad! i " << i << ", size " << tensor->bytes << std::endl;\r\n\t\t\t\t}\r\n\t\t\t\t_allocations.push_back(customAlloc);\r\n\t\t\t}\r\n\t\t}\r\n\t\texit(0);*/\r\n\t}\r\n\r\n\tvoid TFLiteModel::forward(const cv::Mat& img_input, const std::vector<float>& lms_input) {\r\n\r\n\t\tfloat* model_in = _interpreter->typed_input_tensor<float>(0);\r\n\t\tstd::memcpy(model_in, img_input.data, img_input.total() * img_input.elemSize());\r\n\r\n\t\tfloat* lms_in = _interpreter->typed_input_tensor<float>(1);\r\n\t\tstd::memcpy(lms_in, lms_input.data(), sizeof(float) * lms_input.size());\r\n\t\t\r\n\t\t_interpreter->Invoke();\r\n\t}\r\n\r\n\tfloat* TFLiteModel::out() {\r\n\t\treturn _interpreter->typed_output_tensor<float>(0);\r\n\t}\r\n\r\n\tstd::vector<int> TFLiteModel::getOutputShape() const {\r\n\t\tTfLiteTensor* outtensor = _interpreter->output_tensor(0);\r\n\t\tTfLiteIntArray* dims = outtensor->dims;\r\n\r\n\t\tstd::vector<int> sh;\r\n\t\tfor (int i = 0; i < dims->size; i++) {\r\n\t\t\tsh.push_back(dims->data[i]);\r\n\t\t}\r\n\r\n\t\treturn sh;\r\n\t}\r\n}\r\n```\r\n\r\n\r\n### Relevant log output\r\n\r\n_No response_</details>'</li><li>'error: \'tf.Conv2D\' op is neither a custom op nor a flex op ### 1. System information\r\n\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04\r\n- TensorFlow installation (pip package or built from source): pip package\r\n- TensorFlow library (version, if pip package or github SHA, if built from source): v2.10\r\n\r\n### 2. 
Code\r\nCode for conversion\r\n```\r\nconverter = tf.lite.TFLiteConverter.from_saved_model(f\'savedmodel/decoder\')\r\ntflite_model = converter.convert()\r\n\r\n# save the model\r\nwith open(f\'{name}.tflite\', \'wb\') as f:\r\n f.write(tflite_model)\r\n```\r\nCode for the model\r\n```\r\nlatent = keras.layers.Input((n_h, n_w, 4))\r\ndecoder = Decoder()\r\ndecoder = keras.models.Model(latent, decoder(latent))\r\n```\r\n```\r\nclass Decoder(keras.Sequential):\r\n def __init__(self):\r\n super().__init__(\r\n [\r\n keras.layers.Lambda(lambda x: 1 / 0.18215 * x),\r\n PaddedConv2D(4, 1),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 512),\r\n AttentionBlock(512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n ResnetBlock(512, 512),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(512, 3, padding=1),\r\n ResnetBlock(512, 256),\r\n ResnetBlock(256, 256),\r\n ResnetBlock(256, 256),\r\n keras.layers.UpSampling2D(size=(2, 2)),\r\n PaddedConv2D(256, 3, padding=1),\r\n ResnetBlock(256, 128),\r\n ResnetBlock(128, 128),\r\n ResnetBlock(128, 128),\r\n tfa.layers.GroupNormalization(epsilon=1e-5),\r\n keras.layers.Activation("swish"),\r\n PaddedConv2D(3, 3, padding=1),\r\n ]\r\n )\r\n```\r\n\r\n### 3. Failure after conversion\r\nconversion fails\r\n\r\n\r\n### 5. (optional) Any other info / logs\r\n[error.log](https://github.com/tensorflow/tensorflow/files/10302790/error.log)\r\n```\r\nSome ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. 
See instructions: https://www.tensorflow.org/lite/guide/ops_select \r\nTF Select ops: Conv2D\r\nDetails:\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<1x1x512x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x128x128xf32>) -> (tensor<?x?x?x128xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x128x3xf32>) -> (tensor<?x?x?x3xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x256x128xf32>) -> (tensor<?x?x?x128xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x256x256xf32>) -> (tensor<?x?x?x256xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x512x256xf32>) -> (tensor<?x?x?x256xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n\ttf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x512x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}\r\n```\r\nAccording to the error message, I suspect that it can not recognize the input shape. 
But as you can see on the above code, input is specified for the functional API for `decoder` model. \r\n(FYI, The inference code is called with `predict_on_batch` method. I found out other model with `predict_on_batch` is converted successfully, but that model doesn\'t contain `conv2d` block inside. Can using `predict_on_batch` together with `conv2d` be a problem?)\r\n\r\n**I\'m sure `conv2d` is on the allowlist for TFLite operators. Any suggestions for this problem? Thank you.**'</li></ul> | | feature | <ul><li>'tf.keras.optimizers.experimental.AdamW only support constant weight_decay <details><summary>Click to expand!</summary> \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow Version\n\n2.8\n\n### Custom Code\n\nNo\n\n### OS Platform and Distribution\n\n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\ntf.keras.optimizers.experimental.AdamW only supports constant weight decay. But usually we want the weight_decay value to decay with learning rate schedule.\n```\n\n\n### Standalone code to reproduce the issue\n\n```shell\nThe legacy tfa.optimizers.AdamW supports callable weight_decay, which is much better.\n```\n\n\n### Relevant log output\n\n_No response_</details>'</li><li>'RFE tensorflow-aarch64==2.6.0 build ? **System information**\r\n TensorFlow version (you are using): 2.6.0\r\n- Are you willing to contribute it (Yes/No): Yes\r\n\r\n**Describe the feature and the current behavior/state.**\r\n\r\nBrainchip Akida AKD1000 SNN neuromorphic MetaTF SDK support 2.6.0 on x86_64. They claim support for aarch64, but when creating a virtualenv it fails on aarch64 due to lacking tensorflow-aarc64==2.6.0 build.\r\n\r\n**Will this change the current api? 
How?**\r\n\r\nNA\r\n\r\n**Who will benefit with this feature?**\r\n\r\nCustomer of Brainchip Akida who run on Arm64 platforms.\r\n\r\n**Any Other info.**\r\n\r\nhttps://doc.brainchipinc.com/installation.html\r\n\r\n\r\n'</li><li>"How to calculate 45 degree standing position of body from camera in swift (Pose estimation) <details><summary>Click to expand!</summary> \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow Version\n\npod 'TensorFlowLiteSwift', '~> 0.0.1-nightly', :subspecs => ['CoreML', 'Metal']\n\n### Custom Code\n\nYes\n\n### OS Platform and Distribution\n\n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\nHow to calculate 45 degree standing position of body from camera in swift.\n```\n\n\n### Standalone code to reproduce the issue\n\n```shell\nHow to calculate 45 degree standing position of body from camera in swift using the body keypoints. 
(Pose estimation)\n```\n\n\n### Relevant log output\n\n_No response_</details>"</li></ul> | | bug | <ul><li>'Abort when running tensorflow.python.ops.gen_array_ops.depth_to_space ### Issue type\n\nBug\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nNo\n\n### Source\n\nbinary\n\n### TensorFlow version\n\n2.11.0\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\n22.04\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n3.9\n\n### Bazel version\n\n_No response_\n\n### GCC/compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\nnvidia-cudnn-cu11==8.6.0.163, cudatoolkit=11.8.0\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nDue to very large integer argument\n\n### Standalone code to reproduce the issue\n\n```shell\nimport tensorflow as tf\r\nimport os\r\nimport numpy as np\r\nfrom tensorflow.python.ops import gen_array_ops\r\ntry:\r\n arg_0_tensor = tf.random.uniform([3, 2, 3, 4], dtype=tf.float32)\r\n arg_0 = tf.identity(arg_0_tensor)\r\n arg_1 = 2147483647\r\n arg_2 = "NHWC"\r\n out = gen_array_ops.depth_to_space(arg_0,arg_1,arg_2,)\r\nexcept Exception as e:\r\n print("Error:"+str(e))\r\n\r\n```\n```\n\n\n### Relevant log output\n\n```shell\n023-08-13 00:23:53.644564: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n2023-08-13 00:23:54.491071: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.510564: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.510736: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511051: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-08-13 00:23:54.511595: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511717: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.511830: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572398: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] 
successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572634: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572791: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2023-08-13 00:23:54.572916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 153 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5\r\n2023-08-13 00:23:54.594062: I tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:735] failed to allocate 153.88M (161349632 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory\r\n2023-08-13 00:23:54.594484: I tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:735] failed to allocate 138.49M (145214720 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory\r\n2023-08-13 00:23:54.600623: F tensorflow/core/framework/tensor_shape.cc:201] Non-OK-status: InitDims(dim_sizes) status: INVALID_ARGUMENT: Expected a non-negative size, got -2\r\nAborted\r\n\r\n```\n```\n'</li><li>"float8 (both e4m3fn and e5m2) missing from numbertype ### Issue Type\r\n\r\nBug\r\n\r\n### Have you reproduced the bug with TF nightly?\r\n\r\nNo\r\n\r\n### Source\r\n\r\nbinary\r\n\r\n### Tensorflow Version\r\n\r\n2.12.0\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\n\r\nmacOS-13.2.1-arm64-arm-64bit\r\n\r\n### Mobile device\r\n\r\n_No response_\r\n\r\n### Python version\r\n\r\n3.9.6\r\n\r\n### Bazel version\r\n\r\n_No response_\r\n\r\n### GCC/Compiler version\r\n\r\n_No 
response_\r\n\r\n### CUDA/cuDNN version\r\n\r\n_No response_\r\n\r\n### GPU model and memory\r\n\r\n_No response_\r\n\r\n### Current Behaviour?\r\n\r\nFP8 datatypes are missing from `kNumberTypes` in `tensorflow/core/framework/types.h`, and also missing from `TF_CALL_FLOAT_TYPES(m)` in `tensorflow/core/framework/register_types.h`. This causes simple ops (like slice, transpose, split, etc.) to raise NotFoundError.\r\n\r\n### Standalone code to reproduce the issue\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom tensorflow.python.framework import dtypes\r\n\r\na = tf.constant([[1.2345678, 2.3456789, 3.4567891], [4.5678912, 5.6789123, 6.7891234]], dtype=dtypes.float16)\r\nprint(a)\r\n\r\na_fp8 = tf.cast(a, dtypes.float8_e4m3fn)\r\nprint(a_fp8)\r\n\r\nb = a_fp8[1:2] # tensorflow.python.framework.errors_impl.NotFoundError\r\nb = tf.transpose(a_fp8, [1, 0]) # tensorflow.python.framework.errors_impl.NotFoundError\r\n```\r\n\r\n\r\n### Relevant log output\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node StridedSlice}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT8_E4M3FN, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=0]\r\nAll kernels registered for op StridedSlice:\r\n device='XLA_CPU_JIT'; Index in [DT_INT32, DT_INT16, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='CPU'; T in [DT_UINT64]\r\n device='CPU'; T in [DT_INT64]\r\n device='CPU'; T in [DT_UINT32]\r\n device='CPU'; T in [DT_UINT16]\r\n device='CPU'; T in [DT_INT16]\r\n device='CPU'; T in [DT_UINT8]\r\n device='CPU'; T in [DT_INT8]\r\n device='CPU'; T in [DT_INT32]\r\n device='CPU'; T in [DT_HALF]\r\n device='CPU'; T in [DT_BFLOAT16]\r\n device='CPU'; T in [DT_FLOAT]\r\n device='CPU'; T in [DT_DOUBLE]\r\n device='CPU'; T in [DT_COMPLEX64]\r\n device='CPU'; T in [DT_COMPLEX128]\r\n device='CPU'; T in 
[DT_BOOL]\r\n device='CPU'; T in [DT_STRING]\r\n device='CPU'; T in [DT_RESOURCE]\r\n device='CPU'; T in [DT_VARIANT]\r\n device='CPU'; T in [DT_QINT8]\r\n device='CPU'; T in [DT_QUINT8]\r\n device='CPU'; T in [DT_QINT32]\r\n device='DEFAULT'; T in [DT_INT32]\r\n [Op:StridedSlice] name: strided_slice/\r\n```\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node Transpose}} = Transpose[T=DT_FLOAT8_E4M3FN, Tperm=DT_INT32]\r\nAll kernels registered for op Transpose:\r\n device='XLA_CPU_JIT'; Tperm in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 930109355527764061, DT_HALF, DT_UINT32, DT_UINT64, DT_FLOAT8_E5M2, DT_FLOAT8_E4M3FN]\r\n device='CPU'; T in [DT_UINT64]\r\n device='CPU'; T in [DT_INT64]\r\n device='CPU'; T in [DT_UINT32]\r\n device='CPU'; T in [DT_UINT16]\r\n device='CPU'; T in [DT_INT16]\r\n device='CPU'; T in [DT_UINT8]\r\n device='CPU'; T in [DT_INT8]\r\n device='CPU'; T in [DT_INT32]\r\n device='CPU'; T in [DT_HALF]\r\n device='CPU'; T in [DT_BFLOAT16]\r\n device='CPU'; T in [DT_FLOAT]\r\n device='CPU'; T in [DT_DOUBLE]\r\n device='CPU'; T in [DT_COMPLEX64]\r\n device='CPU'; T in [DT_COMPLEX128]\r\n device='CPU'; T in [DT_BOOL]\r\n device='CPU'; T in [DT_STRING]\r\n device='CPU'; T in [DT_RESOURCE]\r\n device='CPU'; T in [DT_VARIANT]\r\n [Op:Transpose]\r\n```"</li><li>"My customized OP gives incorrect outputs on GPUs since `tf-nightly 2.13.0.dev20230413` ### Issue type\n\nBug\n\n### Have you reproduced the bug with TensorFlow Nightly?\n\nYes\n\n### Source\n\nbinary\n\n### TensorFlow version\n\n2.13\n\n### Custom code\n\nYes\n\n### OS platform and distribution\n\nfedora 36\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n3.11.4\n\n### Bazel version\n\n_No response_\n\n### GCC/compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current behavior?\n\nI have a complex program based on 
TensorFlow with several customized OPs. These OPs were created following https://www.tensorflow.org/guide/create_op. Yesterday TF 2.13.0 was released, but after I upgraded to 2.13.0, I found that one of my customized OP gives incorrect results on GPUs and still has the correct outputs on CPUs.\r\n\r\nThen I tested many `tf-nightly` versions and found that `tf-nightly 2.13.0.dev20230412` works but `tf-nightly 2.13.0.dev20230413` fails. So the situation is shown in the following table:\r\n| version | CPU | GPU |\r\n| -------- | --------- | ----------- |\r\n| tensorflow 2.12.0 | Correct | Correct |\r\n| tensorflow 2.13.0 | Correct | Incorrect |\r\n| tf-nightly 2.13.0.dev20230412 | Correct | Correct |\r\n| tf-nightly 2.13.0.dev20230413 | Correct | Incorrect |\r\n\r\nI'd like to know what changed between April 12th and 13th related to the customized OPs. This can be a breaking change to downstream applications or an internal bug. Thanks!\r\n\r\nHere is a quick link for commits between April 12th and 13th:\r\nhttps://github.com/tensorflow/tensorflow/commits/master?before=525da8a93eca846e32e5c41eddc0496b25a2ef5b+770\r\n\n\n### Standalone code to reproduce the issue\n\n```shell\nIndeed, the reason is still unclear to me, so it is hard to create a minimal example.\r\n\r\nThe code of our customized OPs is https://github.com/deepmodeling/deepmd-kit/blob/37fd8d193362f91c925cf7c2f3a58b97dc921b27/source/op/prod_force_multi_device.cc#L49-L166\n```\n\n\n### Relevant log output\n\n_No response_"</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("Data init API for TFLite Swift <details><summary>Click to expand!</summary> ### Issue Type Feature Request ### Source source ### Tensorflow Version 2.8+ ### Custom Code No ### OS Platform and Distribution _No response_ ### Mobile device _No response_ ### Python version _No response_ ### Bazel version _No response_ ### GCC/Compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current Behaviour? ```shell The current Swift API only has `init` functions from files on disk unlike the Java (Android) API which has a byte buffer initializer. It'd be convenient if the Swift API could initialize `Interpreters` from `Data`. ``` ### Standalone code to reproduce the issue ```shell No code. This is a feature request ``` ### Relevant log output _No response_</details>") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
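SetFit fine-tunes the sentence-transformer body with a contrastive CosineSimilarityLoss (listed under the training hyperparameters later in this card) before fitting the classification head. As a rough illustration of the quantity that loss optimizes, here is a minimal stdlib sketch of cosine similarity on toy vectors — the vector values below are made up, and real sentence embeddings from the `all-mpnet-base-v2` body are 768-dimensional:

```python
from math import sqrt

def cosine_similarity(u, v):
    # dot(u, v) / (|u| * |v|): 1.0 for parallel vectors, 0.0 for orthogonal ones
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" for two texts of the same class (hypothetical values);
# contrastive training pushes same-class pairs toward similarity 1.0.
emb_a = [0.9, 0.1, 0.2]
emb_b = [0.8, 0.2, 0.1]
sim = cosine_similarity(emb_a, emb_b)
```
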
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 5 | 353.7433 | 6124 | | Label | Training Sample Count | |:---------|:----------------------| | bug | 200 | | feature | 200 | | question | 200 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0007 | 1 | 0.1719 | - | | 0.0067 | 10 | 0.2869 | - | | 0.0133 | 20 | 0.2513 | - | | 0.02 | 30 | 0.1871 | - | | 0.0267 | 40 | 0.2065 | - | | 0.0333 | 50 | 0.2302 | - | | 0.04 | 60 | 0.1645 | - | | 0.0467 | 70 | 0.1887 | - | | 0.0533 | 80 | 0.1376 | - | | 0.06 | 90 | 0.1171 | - | | 0.0667 | 100 | 0.1303 | - | | 0.0733 | 110 | 0.121 | - | | 0.08 | 120 | 0.1126 | - | | 0.0867 | 130 | 0.1247 | - | | 0.0933 | 140 | 0.1764 | - | | 0.1 | 150 | 0.0401 | - | | 0.1067 | 160 | 0.1571 | - | | 0.1133 | 170 | 0.0186 | - | | 0.12 | 180 | 0.0501 | - | | 0.1267 | 190 | 0.1003 | - | | 0.1333 | 200 | 0.0152 | - | | 0.14 | 210 | 0.0784 | - | | 0.1467 | 220 | 0.1423 | - | | 0.1533 | 230 | 0.1313 | - | | 0.16 | 240 | 0.0799 | - | | 0.1667 | 250 | 0.0542 | - | | 0.1733 | 260 | 0.0426 | - | | 0.18 | 270 | 0.047 | - | | 0.1867 | 280 | 0.0062 | - | | 0.1933 | 290 | 0.0085 | - | | 0.2 | 300 | 0.0625 | - | | 0.2067 | 310 | 0.095 | - | | 0.2133 | 320 | 0.0262 | - | | 0.22 | 330 | 0.0029 | - | | 0.2267 | 340 | 0.0097 | - | | 0.2333 | 350 | 0.063 | - | | 0.24 | 360 | 0.0059 | - | | 0.2467 | 370 | 0.0016 | - | | 0.2533 | 380 | 0.0025 | - | | 
0.26 | 390 | 0.0033 | - | | 0.2667 | 400 | 0.0006 | - | | 0.2733 | 410 | 0.0032 | - | | 0.28 | 420 | 0.0045 | - | | 0.2867 | 430 | 0.0013 | - | | 0.2933 | 440 | 0.0011 | - | | 0.3 | 450 | 0.001 | - | | 0.3067 | 460 | 0.0044 | - | | 0.3133 | 470 | 0.001 | - | | 0.32 | 480 | 0.0009 | - | | 0.3267 | 490 | 0.0004 | - | | 0.3333 | 500 | 0.0006 | - | | 0.34 | 510 | 0.001 | - | | 0.3467 | 520 | 0.0003 | - | | 0.3533 | 530 | 0.0008 | - | | 0.36 | 540 | 0.0003 | - | | 0.3667 | 550 | 0.0023 | - | | 0.3733 | 560 | 0.0336 | - | | 0.38 | 570 | 0.0004 | - | | 0.3867 | 580 | 0.0003 | - | | 0.3933 | 590 | 0.0006 | - | | 0.4 | 600 | 0.0008 | - | | 0.4067 | 610 | 0.0011 | - | | 0.4133 | 620 | 0.0002 | - | | 0.42 | 630 | 0.0004 | - | | 0.4267 | 640 | 0.0005 | - | | 0.4333 | 650 | 0.0601 | - | | 0.44 | 660 | 0.0003 | - | | 0.4467 | 670 | 0.0003 | - | | 0.4533 | 680 | 0.0006 | - | | 0.46 | 690 | 0.0005 | - | | 0.4667 | 700 | 0.0003 | - | | 0.4733 | 710 | 0.0006 | - | | 0.48 | 720 | 0.0001 | - | | 0.4867 | 730 | 0.0002 | - | | 0.4933 | 740 | 0.0002 | - | | 0.5 | 750 | 0.0002 | - | | 0.5067 | 760 | 0.0002 | - | | 0.5133 | 770 | 0.0016 | - | | 0.52 | 780 | 0.0001 | - | | 0.5267 | 790 | 0.0005 | - | | 0.5333 | 800 | 0.0004 | - | | 0.54 | 810 | 0.0039 | - | | 0.5467 | 820 | 0.0031 | - | | 0.5533 | 830 | 0.0008 | - | | 0.56 | 840 | 0.0003 | - | | 0.5667 | 850 | 0.0002 | - | | 0.5733 | 860 | 0.0002 | - | | 0.58 | 870 | 0.0002 | - | | 0.5867 | 880 | 0.0001 | - | | 0.5933 | 890 | 0.0004 | - | | 0.6 | 900 | 0.0002 | - | | 0.6067 | 910 | 0.0008 | - | | 0.6133 | 920 | 0.0005 | - | | 0.62 | 930 | 0.0005 | - | | 0.6267 | 940 | 0.0002 | - | | 0.6333 | 950 | 0.0001 | - | | 0.64 | 960 | 0.0002 | - | | 0.6467 | 970 | 0.0007 | - | | 0.6533 | 980 | 0.0002 | - | | 0.66 | 990 | 0.0002 | - | | 0.6667 | 1000 | 0.0002 | - | | 0.6733 | 1010 | 0.0002 | - | | 0.68 | 1020 | 0.0002 | - | | 0.6867 | 1030 | 0.0002 | - | | 0.6933 | 1040 | 0.0004 | - | | 0.7 | 1050 | 0.0076 | - | | 0.7067 | 1060 | 0.0002 | - | | 0.7133 
| 1070 | 0.0002 | - | | 0.72 | 1080 | 0.0001 | - | | 0.7267 | 1090 | 0.0002 | - | | 0.7333 | 1100 | 0.0001 | - | | 0.74 | 1110 | 0.0365 | - | | 0.7467 | 1120 | 0.0002 | - | | 0.7533 | 1130 | 0.0002 | - | | 0.76 | 1140 | 0.0003 | - | | 0.7667 | 1150 | 0.0002 | - | | 0.7733 | 1160 | 0.0002 | - | | 0.78 | 1170 | 0.0004 | - | | 0.7867 | 1180 | 0.0001 | - | | 0.7933 | 1190 | 0.0001 | - | | 0.8 | 1200 | 0.0001 | - | | 0.8067 | 1210 | 0.0001 | - | | 0.8133 | 1220 | 0.0002 | - | | 0.82 | 1230 | 0.0002 | - | | 0.8267 | 1240 | 0.0001 | - | | 0.8333 | 1250 | 0.0001 | - | | 0.84 | 1260 | 0.0002 | - | | 0.8467 | 1270 | 0.0002 | - | | 0.8533 | 1280 | 0.0 | - | | 0.86 | 1290 | 0.0002 | - | | 0.8667 | 1300 | 0.032 | - | | 0.8733 | 1310 | 0.0001 | - | | 0.88 | 1320 | 0.0001 | - | | 0.8867 | 1330 | 0.0001 | - | | 0.8933 | 1340 | 0.0003 | - | | 0.9 | 1350 | 0.0001 | - | | 0.9067 | 1360 | 0.0001 | - | | 0.9133 | 1370 | 0.0001 | - | | 0.92 | 1380 | 0.0001 | - | | 0.9267 | 1390 | 0.0001 | - | | 0.9333 | 1400 | 0.0001 | - | | 0.94 | 1410 | 0.0001 | - | | 0.9467 | 1420 | 0.0001 | - | | 0.9533 | 1430 | 0.031 | - | | 0.96 | 1440 | 0.0001 | - | | 0.9667 | 1450 | 0.0003 | - | | 0.9733 | 1460 | 0.0001 | - | | 0.98 | 1470 | 0.0001 | - | | 0.9867 | 1480 | 0.0001 | - | | 0.9933 | 1490 | 0.0001 | - | | 1.0 | 1500 | 0.0001 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.39.0 - PyTorch: 2.3.0+cu121 - Datasets: 2.20.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year 
= {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
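The word-count figures in the Training Set Metrics table above (min 5, median 353.7433, max 6124) are simple per-text token counts. A minimal stdlib sketch of how such numbers can be derived — the two sample texts below are hypothetical stand-ins, not the actual 600 training samples:

```python
from statistics import median

def word_count_stats(texts):
    # Whitespace-token count per text, reduced to (min, median, max)
    counts = [len(t.split()) for t in texts]
    return min(counts), median(counts), max(counts)

# Hypothetical stand-in texts for illustration only
texts = [
    "Data init API for TFLite Swift",
    "Abort when running tensorflow.python.ops.gen_array_ops.depth_to_space",
]
lo, mid, hi = word_count_stats(texts)
```
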
{"base_model": "sentence-transformers/all-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "How to get single UnidirectionalSequenceRnnOp in tflite model ### Issue Type\r\n\r\nSupport\r\n\r\n### Source\r\n\r\nsource\r\n\r\n### Tensorflow Version\r\n\r\n2.8\r\n\r\n### Custom Code\r\n\r\nYes\r\n\r\n### OS Platform and Distribution\r\n\r\nUbuntu 18.04\r\n\r\nAccording to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc there is `kUnidirectionalSequenceRnnOp` as a single operation in tflite, could you give a python code example - how can I get this? For example - this code for LSTM gives tflite with one UnidirectionalSequenceLSTM Op.\r\n```py\r\n# NOTE tested with TF 2.8.0\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\nfrom tensorflow import keras\r\n\r\n\r\nmodel = keras.Sequential()\r\nshape = (4, 4)\r\n\r\nmodel.add(keras.layers.InputLayer(input_shape=shape, batch_size=1))\r\nmodel.add(keras.layers.LSTM(2, input_shape=shape))\r\n```\r\n![image](https://user-images.githubusercontent.com/4616940/197647526-59c63de2-df61-46a1-bd61-75baa2688376.png)\r\nHow can I do same for UnidirectionalSequenceRnn?"}, {"text": "[Feature Request] GELU activation with the Hexagon delegate **System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (or github SHA if from source): 2.9.1\r\n\r\nI think I'd be able to implement this myself, but wanted to see if there was any interest in including this upstream. Most of this I'm writing out to make sure my own understanding is correct.\r\n\r\n### The problem\r\n\r\nI'd like to add support for the GELU op to the Hexagon Delegate. 
The motivation for this is mostly for use with [DistilBERT](https://huggingface.co/distilbert-base-multilingual-cased), which uses this activation function in its feedforward network layers. (Also used by BERT, GPT-3, RoBERTa, etc.)\r\n\r\nAdding this as a supported op for the Hexagon delegate would avoid creating a graph partition/transferring between DSP<-->CPU each time the GELU activation function is used.\r\n\r\n### How I'd implement this\r\n\r\nGELU in TF Lite is implemented as a lookup table when there are integer inputs ([here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/activations.cc#L120-L140) and [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/reference/gelu.h#L37-L53)).\r\n\r\nThis same approach could be used for the Hexagon delegate, as it has int8/uint8 data types and also supports lookup tables.\r\n\r\nI'd plan to do this by adding a new op builder in the delegate, populating a lookup table for each node as is currently done for the CPU version of the op, and then using the [Gather_8](https://source.codeaurora.org/quic/hexagon_nn/nnlib/tree/hexagon/ops/src/op_gather.c) nnlib library function to do the lookup.\r\n\r\n### Possible workaround\r\n\r\nA workaround I thought of:\r\n\r\nI'm going to try removing the [pattern matching](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td#L1034-L1095) for approximate GELU in MLIR, and then using the approximate version of GELU (so that using tanh and not Erf). 
This will probably be slower, but should let me keep execution on the DSP.\r\n\r\nSince this will then be tanh, addition, multiplication ops instead of GELU they should all be runnable by the DSP."}, {"text": "Data init API for TFLite Swift <details><summary>Click to expand!</summary> \n \n ### Issue Type\n\nFeature Request\n\n### Source\n\nsource\n\n### Tensorflow Version\n\n2.8+\n\n### Custom Code\n\nNo\n\n### OS Platform and Distribution\n\n_No response_\n\n### Mobile device\n\n_No response_\n\n### Python version\n\n_No response_\n\n### Bazel version\n\n_No response_\n\n### GCC/Compiler version\n\n_No response_\n\n### CUDA/cuDNN version\n\n_No response_\n\n### GPU model and memory\n\n_No response_\n\n### Current Behaviour?\n\n```shell\nThe current Swift API only has `init` functions from files on disk unlike the Java (Android) API which has a byte buffer initializer. It'd be convenient if the Swift API could initialize `Interpreters` from `Data`.\n```\n\n\n### Standalone code to reproduce the issue\n\n```shell\nNo code. This is a feature request\n```\n\n\n### Relevant log output\n\n_No response_</details>"}, {"text": "tf.distribute.MirroredStrategy for asynchronous training <details><summary>Click to expand!</summary> \r\n \r\n ### Issue Type\r\n\r\nFeature Request\r\n\r\n### Tensorflow Version\r\n\r\n2.8.1\r\n\r\n### Python version\r\n\r\n3.8.13\r\n\r\n### CUDA/cuDNN version\r\n\r\n11.8\r\n\r\n### Use Case\r\n\r\nI need to run multiple asynchronous copies of the same model on different slices of the dataset (e.g. with bootstrap sampling). There's no *good* way to do this in keras api that I'm aware of, although a couple of hacks exist. 
Would this use case be feasible with tf.distribute?\r\n\r\n### Feature Request\r\n\r\n`tf.distribute.MirroredStrategy` is a synchronous, data parallel strategy for distributed training across multiple devices on a single host worker.\r\n\r\nWould it be possible to modify this strategy to allow for asynchronous training of all model replicas, without computing the average gradient over all replicas to update weights? In this case each replica would need its own un-mirrored copy of model weights, and the update rule would depend only on the loss and gradients of each replica.\r\n\r\nThanks"}, {"text": "Build TensorFlow Lite for iOS failed!!!! Please go to Stack Overflow for help and support:\r\n\r\nhttps://stackoverflow.com/questions/tagged/tensorflow\r\n\r\nIf you open a GitHub issue, here is our policy:\r\n\r\n1. `bazel build --config=ios_arm64 -c opt --cxxopt=--std=c++17 \\\\\r\n //tensorflow/lite/ios:TensorFlowLiteC_framework\r\n❯ bazel build --incompatible_run_shell_command_string=false --verbose_failures --config=ios_arm64 -c opt //tensorflow/lite/ios:TensorFlowLiteCMetal_framework\r\nINFO: Options provided by the client:\r\n Inherited 'common' options: --isatty=1 --terminal_columns=170\r\nINFO: Reading rc options for 'build' from /Users/thao/Desktop/tensorflow/.bazelrc:\r\n Inherited 'common' options: --experimental_repo_remote_exec\r\nINFO: Reading rc options for 'build' from /Users/thao/Desktop/tensorflow/.bazelrc:\r\n 'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false\r\nINFO: Reading rc options 
for 'build' from /Users/thao/Desktop/tensorflow/.tf_configure.bazelrc:\r\n 'build' options: --action_env PYTHON_BIN_PATH=/Users/thao/miniforge3/bin/python --action_env PYTHON_LIB_PATH=/Users/thao/miniforge3/lib/python3.10/site-packages --python_path=/Users/thao/miniforge3/bin/python\r\nINFO: Reading rc options for 'build' from /Users/thao/Desktop/tensorflow/.bazelrc:\r\n 'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils\r\nINFO: Found applicable config definition build:short_logs in file /Users/thao/Desktop/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING\r\nINFO: Found applicable config definition 
build:v2 in file /Users/thao/Desktop/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1\r\nINFO: Found applicable config definition build:ios_arm64 in file /Users/thao/Desktop/tensorflow/.bazelrc: --config=ios --cpu=ios_arm64\r\nINFO: Found applicable config definition build:ios in file /Users/thao/Desktop/tensorflow/.bazelrc: --apple_platform_type=ios --apple_bitcode=embedded --copt=-fembed-bitcode --copt=-Wno-c++11-narrowing --noenable_platform_specific_config --copt=-w --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --define=with_xla_support=false\r\nINFO: Build option --cxxopt has changed, discarding analysis cache.\r\nERROR: /private/var/tmp/_bazel_thao/26d40dc75f2c247e7283b353a9ab184f/external/local_config_cc/BUILD:48:19: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'ios_arm64'\r\nERROR: /private/var/tmp/_bazel_thao/26d40dc75f2c247e7283b353a9ab184f/external/local_config_cc/BUILD:48:19: Analysis of target '@local_config_cc//:toolchain' failed\r\nERROR: Analysis of target '//tensorflow/lite/ios:TensorFlowLiteCMetal_framework' failed; build aborted: \r\nINFO: Elapsed time: 45.455s\r\nINFO: 0 processes.\r\nFAILED: Build did NOT complete successfully (66 packages loaded, 1118 targets configured)`\r\n\r\n**Here's why we have that policy**: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.\r\n\r\n------------------------\r\n\r\n### System information\r\nMacOS-M1Max : 13.3\r\nTensorflow:2.9.2\r\nPython: 3.10.0\r\n\r\n\r\n\r\n### Describe the problem\r\nDescribe the problem clearly here. 
Be sure to convey here why it's a bug in TensorFlow or a feature request.\r\n\r\n### Source code / logs\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.\r\n"}], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
46,133
Romain1311/model_emotion
Romain1311
text-classification
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-24T20:50:54Z
2024-06-24T20:53:51+00:00
6
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: model_emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: test args: split metrics: - type: accuracy value: 0.9185 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2136 - Accuracy: 0.9185 - F1 score: 0.9177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.7765 | 1.0 | 250 | 0.2963 | 0.903 | 0.9016 | | 0.2439 | 2.0 | 500 | 0.2136 | 0.9185 | 0.9177 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2136 - Accuracy: 0.9185 - F1 score: 0.9177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.7765 | 1.0 | 250 | 0.2963 | 0.903 | 0.9016 | | 0.2439 | 2.0 | 500 | 0.2136 | 0.9185 | 0.9177 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "model_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "test", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9185, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,134
fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-29T19:11:48Z
2024-05-29T19:12:22+00:00
8
0
--- datasets: - fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/ArguAna-32000-384-gpt-4o-2024-05-13-13220755", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
task
[ "TEXT_CLASSIFICATION" ]
46,135
unsloth/Qwen2-VL-7B
unsloth
image-text-to-text
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "multimodal", "unsloth", "qwen", "conversational", "en", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen2-VL-7B", "base_model:finetune:Qwen/Qwen2-VL-7B", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-03-09T06:03:50Z
2025-03-09T06:28:55+00:00
19
0
--- base_model: Qwen/Qwen2-VL-7B language: - en library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text tags: - multimodal - unsloth - qwen --- # Qwen2-VL-7B <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. 
<p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 | | DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** | | InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** | | ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 | | TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** | | OCRBench | 794 | **852** | 785 | 845 | | MTVQA | - | - | -| **26.3** | | VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** | | VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** | | RealWorldQA | 64.4 | - | - | **70.1** | | MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 | | MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** | | MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 | | MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 | | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** | | MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 | | MathVision | - | - | - | **16.3** | ### Video Benchmarks | Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 
2.6 | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MVBench | 66.4 | 56.7 | - | **67.0** | | PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** | | EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** | | Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** | ## Requirements The code of Qwen2-VL has been merged into the latest Hugging Face Transformers, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processor processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. 
# min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Expected output: '<|im_start|>system\nYou are a 
helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi-image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a list of images as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", 
"text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages2] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, 
generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, we provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. 
```python # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
null
Non_BioNLP
# Qwen2-VL-7B <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. 
<p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). ## Evaluation ### Image Benchmarks | Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 | | DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** | | InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** | | ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 | | TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** | | OCRBench | 794 | **852** | 785 | 845 | | MTVQA | - | - | -| **26.3** | | VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** | | VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** | | RealWorldQA | 64.4 | - | - | **70.1** | | MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 | | MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** | | MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 | | MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 | | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** | | MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 | | MathVision | - | - | - | **16.3** | ### Video Benchmarks | Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MVBench | 66.4 | 56.7 | - | **67.0** | | PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** | | EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** | | Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** | ## Requirements The code of Qwen2-VL 
has been merged into the latest Hugging Face Transformers, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processor processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. 
# min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Expected output: '<|im_start|>system\nYou are a 
helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi-image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Video inference</summary> ```python # Messages containing a list of images as a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": [ "file:///path/to/frame1.jpg", "file:///path/to/frame2.jpg", "file:///path/to/frame3.jpg", "file:///path/to/frame4.jpg", ], "fps": 1.0, }, {"type": "text", 
"text": "Describe this video."}, ], } ] # Messages containing a video and a text query messages = [ { "role": "user", "content": [ { "type": "video", "video": "file:///path/to/video1.mp4", "max_pixels": 360 * 420, "fps": 1.0, }, {"type": "text", "text": "Describe this video."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` </details> <details> <summary>Batch inference</summary> ```python # Sample messages for batch inference messages1 = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "What are the common elements in these pictures?"}, ], } ] messages2 = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who are you?"}, ] # Combine messages for batch processing messages = [messages1, messages2] # Preparation for batch inference texts = [ processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages ] image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=texts, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Batch Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, 
generated_ids) ] output_texts = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_texts) ``` </details> ### More Usage Tips For input images, we support local files, base64, and URLs. For videos, we currently only support local files. ```python # You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text. ## Local file path messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Image URL messages = [ { "role": "user", "content": [ {"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}, ], } ] ## Base64 encoded image messages = [ { "role": "user", "content": [ {"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}, ], } ] ``` #### Image Resolution for performance boost The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage. ```python min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels ) ``` Besides, we provide two methods for fine-grained control over the image size input to the model: 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28. 
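For intuition, the rounding in the second method can be sketched as a small helper (an illustrative sketch only; the processor's actual resizing logic may clamp or round differently):

```python
# Illustrative sketch of snapping a requested dimension to a multiple of 28,
# matching Qwen2-VL's patch grid. Not the processor's exact implementation.
def round_to_patch_multiple(value: int, base: int = 28) -> int:
    return max(base, base * round(value / base))

print(round_to_patch_multiple(280))  # 280: already a multiple of 28
print(round_to_patch_multiple(300))  # 308: nearest multiple of 28
```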
```python # resized_height and resized_width messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "resized_height": 280, "resized_width": 420, }, {"type": "text", "text": "Describe this image."}, ], } ] # min_pixels and max_pixels messages = [ { "role": "user", "content": [ { "type": "image", "image": "file:///path/to/your/image.jpg", "min_pixels": 50176, "max_pixels": 50176, }, {"type": "text", "text": "Describe this image."}, ], } ] ``` ## Limitations While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions: 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos. 2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered. 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands. 4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement. 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements. 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects. These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @article{Qwen2VL, title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution}, author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang}, journal={arXiv preprint arXiv:2409.12191}, year={2024} } @article{Qwen-VL, title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond}, author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren}, journal={arXiv preprint arXiv:2308.12966}, year={2023} } ```
{"base_model": "Qwen/Qwen2-VL-7B", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["multimodal", "unsloth", "qwen"]}
task
[ "QUESTION_ANSWERING" ]
46,136
MJ03/distilbert-base-uncased-distilled-clinc
MJ03
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-31T11:09:41Z
2023-05-31T11:20:34+00:00
11
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - type: accuracy value: 0.9396774193548387 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1022 - Accuracy: 0.9397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9252 | 1.0 | 318 | 0.5759 | 0.7268 | | 0.4452 | 2.0 | 636 | 0.2766 | 0.8787 | | 0.2465 | 3.0 | 954 | 0.1728 | 0.9174 | | 0.1722 | 4.0 | 1272 | 0.1356 | 0.93 | | 0.1398 | 5.0 | 1590 | 0.1202 | 0.9348 | | 0.1243 | 6.0 | 1908 | 0.1118 | 0.9387 | | 0.1148 | 7.0 | 2226 | 0.1073 | 0.9387 | | 0.109 | 8.0 | 2544 | 0.1044 | 0.9403 | | 0.1056 | 9.0 | 2862 | 0.1027 | 0.9394 | | 0.1043 | 10.0 | 3180 | 0.1022 | 0.9397 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 1.16.1 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1022 - Accuracy: 0.9397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9252 | 1.0 | 318 | 0.5759 | 0.7268 | | 0.4452 | 2.0 | 636 | 0.2766 | 0.8787 | | 0.2465 | 3.0 | 954 | 0.1728 | 0.9174 | | 0.1722 | 4.0 | 1272 | 0.1356 | 0.93 | | 0.1398 | 5.0 | 1590 | 0.1202 | 0.9348 | | 0.1243 | 6.0 | 1908 | 0.1118 | 0.9387 | | 0.1148 | 7.0 | 2226 | 0.1073 | 0.9387 | | 0.109 | 8.0 | 2544 | 0.1044 | 0.9403 | | 0.1056 | 9.0 | 2862 | 0.1027 | 0.9394 | | 0.1043 | 10.0 | 3180 | 0.1022 | 0.9397 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 1.16.1 - Tokenizers 0.13.3
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9396774193548387, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,137
AmelieSchreiber/esm2_t6_8M_ptm_lora_500K
AmelieSchreiber
null
[ "license:mit", "region:us" ]
2023-10-09T21:08:26Z
2023-10-11T02:05:54+00:00
0
0
--- license: mit --- # ESM-2 for Post Translational Modification ## Metrics ```python Train metrics: {'eval_loss': 0.024510689079761505, 'eval_accuracy': 0.9908227849618837, 'eval_precision': 0.22390420883031378, 'eval_recall': 0.9793229461354229, 'eval_f1': 0.3644773616334614, 'eval_auc': 0.9850883581685357, 'eval_mcc': 0.4660172779827273} Test metrics: {'eval_loss': 0.1606895923614502, 'eval_accuracy': 0.9363938912290479, 'eval_precision': 0.04428881619840198, 'eval_recall': 0.7708102070506146, 'eval_f1': 0.08376472210171558, 'eval_auc': 0.8539155251667717, 'eval_mcc': 0.17519724897930178} ``` ## Using the Model To use this model, first run: ``` !pip install transformers -q !pip install peft -q ``` Then run the following on your protein sequence to predict post translational modification sites: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t6_8M_ptm_lora_500K" # ESM2 base model base_model_path = "facebook/esm2_t6_8M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 
0: "No ptm site", 1: "ptm site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
null
Non_BioNLP
# ESM-2 for Post Translational Modification ## Metrics ```python Train metrics: {'eval_loss': 0.024510689079761505, 'eval_accuracy': 0.9908227849618837, 'eval_precision': 0.22390420883031378, 'eval_recall': 0.9793229461354229, 'eval_f1': 0.3644773616334614, 'eval_auc': 0.9850883581685357, 'eval_mcc': 0.4660172779827273} Test metrics: {'eval_loss': 0.1606895923614502, 'eval_accuracy': 0.9363938912290479, 'eval_precision': 0.04428881619840198, 'eval_recall': 0.7708102070506146, 'eval_f1': 0.08376472210171558, 'eval_auc': 0.8539155251667717, 'eval_mcc': 0.17519724897930178} ``` ## Using the Model To use this model, first run: ``` !pip install transformers -q !pip install peft -q ``` Then run the following on your protein sequence to predict post translational modification sites: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t6_8M_ptm_lora_500K" # ESM2 base model base_model_path = "facebook/esm2_t6_8M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 0: "No ptm site", 1: 
"ptm site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
{"license": "mit"}
task
[ "TRANSLATION" ]
46,138
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T07:57:33+00:00
118
0
--- tags: - summarization widget: - text: def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end --- # CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . 
write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. 
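For intuition, the inverse square root schedule mentioned above can be sketched as follows (the peak learning rate and warm-up steps below are illustrative assumptions, not the exact CodeTrans settings):

```python
import math

# Sketch of an inverse square root learning rate schedule: hold a peak rate
# through warm-up, then decay proportionally to 1/sqrt(step). The constants
# are assumed for illustration, not taken from the CodeTrans training run.
def inv_sqrt_lr(step: int, peak_lr: float = 0.01, warmup_steps: int = 10_000) -> float:
    step = max(step, 1)
    if step <= warmup_steps:
        return peak_lr
    return peak_lr * math.sqrt(warmup_steps / step)

print(inv_sqrt_lr(10_000))  # 0.01 at the end of warm-up
print(inv_sqrt_lr(40_000))  # 0.005: halved once steps reach 4x the warm-up
```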
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
null
Non_BioNLP
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb). 
## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. ## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 
12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
task
[ "SUMMARIZATION" ]
46,139
facebook/fasttext-ms-vectors
facebook
feature-extraction
[ "fasttext", "feature-extraction", "ms", "arxiv:1607.04606", "arxiv:1802.06893", "arxiv:1607.01759", "arxiv:1612.03651", "license:cc-by-sa-3.0", "region:us" ]
2023-03-20T19:46:05Z
2023-06-03T22:13:58+00:00
2
0
--- language: ms library_name: fasttext license: cc-by-sa-3.0 tags: - feature-extraction widget: - text: apple example_title: apple --- # fastText (Malay) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use pre-trained vectors: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-ms-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] 
>>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. 
```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). 
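The analogy datasets above are scored with simple vector arithmetic: the answer to "a is to b as c is to ?" is the vocabulary word whose vector is closest to b − a + c. A minimal sketch with toy 3-dimensional vectors (illustrative stand-ins for real 300-dimensional fastText embeddings):

```python
import numpy as np

# Toy vocabulary; real fastText vectors are 300-dimensional.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.2, 0.9]),
    "queen": np.array([0.2, 0.9, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def analogy(a: str, b: str, c: str) -> str:
    # "a is to b as c is to ?": nearest remaining word to b - a + c by cosine.
    query = vocab[b] - vocab[a] + vocab[c]
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max((w for w in vocab if w not in (a, b, c)),
               key=lambda w: cos(vocab[w], query))

print(analogy("man", "king", "woman"))  # "queen" with these toy vectors
```

The fasttext Python package exposes a similar query on real models via `model.get_analogies`.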
### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
null
TBD
# fastText (Malay) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command-line tool, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use pre-trained vectors: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-ms-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] 
>>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. 
```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). 
### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
{"language": "ms", "library_name": "fasttext", "license": "cc-by-sa-3.0", "tags": ["feature-extraction"], "widget": [{"text": "apple", "example_title": "apple"}]}
task
[ "TEXT_CLASSIFICATION" ]
46,140
Cheng98/bert-base-mnli
Cheng98
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-05T19:18:34Z
2024-04-08T23:13:47+00:00
15
0
--- base_model: bert-base-cased datasets: - glue language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert-base-mnli results: - task: type: text-classification name: Text Classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - type: accuracy value: 0.8389951179820992 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-mnli This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8833 - Accuracy: 0.8390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.18.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-mnli This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8833 - Accuracy: 0.8390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.18.0 - Tokenizers 0.13.3
{"base_model": "bert-base-cased", "datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.8389951179820992, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,141
rambodazimi/roberta-base-finetuned-LoRA-MRPC
rambodazimi
null
[ "safetensors", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "region:us" ]
2024-08-31T17:22:17Z
2024-08-31T17:24:05+00:00
0
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-LoRA-MRPC results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: mrpc metrics: - type: accuracy value: 0.8504901960784313 name: Accuracy - type: f1 value: 0.8912655971479501 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-lora-mrpc This model is a fine-tuned version of [roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Accuracy: 0.8505 - F1: 0.8913 - trainable model parameters: 1771778 - all model parameters: 126418948 - percentage of trainable model parameters: 1.40% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - weight_decay: 0.01 - rank: 32 - lora_alpha: 32 - lora_dropout: 0.05 - num_epochs: 3
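The trainable-parameter percentage above can be reproduced directly from the two reported counts — a quick sanity check in plain Python (no model download required):

```python
# Parameter counts reported in this card.
trainable_params = 1_771_778
all_params = 126_418_948

# Fraction of weights actually updated by the LoRA adapter.
pct = 100 * trainable_params / all_params
print(f"trainable: {pct:.2f}% of all parameters")
```

This matches the 1.40% figure reported above and illustrates why LoRA fine-tuning is cheap relative to full fine-tuning of `roberta-base`.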
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-lora-mrpc This model is a fine-tuned version of [roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Accuracy: 0.8505 - F1: 0.8913 - trainable model parameters: 1771778 - all model parameters: 126418948 - percentage of trainable model parameters: 1.40% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - weight_decay: 0.01 - rank: 32 - lora_alpha: 32 - lora_dropout: 0.05 - num_epochs: 3
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned-LoRA-MRPC", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8504901960784313, "name": "Accuracy"}, {"type": "f1", "value": 0.8912655971479501, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,142
MethosPi/llama3-8b-italIA-unsloth-merged
MethosPi
text-generation
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "it", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-21T07:49:59Z
2024-04-22T06:36:54+00:00
18
3
--- language: - it library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ItalIA is an LLM trained for the Italian language and based on Llama3-8b. ## Model Details ### Model Description ItalIA is a state-of-the-art language model specifically trained for the Italian language using unsloth, leveraging the latest advancements in the llama3 LLM framework. This model aims to provide highly accurate and context-aware natural language understanding and generation, making it ideal for a wide range of applications from automated customer support to content creation. - **Developed by:** Davide Pizzo - **Model type:** Transformer-based Large Language Model - **Language(s) (NLP):** Italian - **License:** Other - **Finetuned from model:** llama3-8b ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ItalIA can be directly integrated into applications requiring natural language processing in Italian, including but not limited to text summarization, question answering, and conversational agents. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model serves as a powerful Italian base for fine-tuning on specific tasks such as legal document analysis, medical record interpretation, and more specialized forms of conversational AI tailored to specific industries. 
### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be aware of the potential for biased outputs based on the training data, particularly in scenarios involving regional linguistic variations within Italy. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MethosPi/llama3-8b-italIA-unsloth-merged" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) text = "Inserisci qui il tuo testo in italiano." input_ids = tokenizer.encode(text, return_tensors="pt") output = model.generate(input_ids) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model was trained on a diverse corpus of Italian texts, including literature, news articles, and web content, ensuring a broad understanding of the language. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software unsloth ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact For any question, contact me [[email protected]]
null
Non_BioNLP
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ItalIA is an LLM trained for the Italian language and based on Llama3-8b. ## Model Details ### Model Description ItalIA is a state-of-the-art language model specifically trained for the Italian language using unsloth, leveraging the latest advancements in the llama3 LLM framework. This model aims to provide highly accurate and context-aware natural language understanding and generation, making it ideal for a wide range of applications from automated customer support to content creation. - **Developed by:** Davide Pizzo - **Model type:** Transformer-based Large Language Model - **Language(s) (NLP):** Italian - **License:** Other - **Finetuned from model:** llama3-8b ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ItalIA can be directly integrated into applications requiring natural language processing in Italian, including but not limited to text summarization, question answering, and conversational agents. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model serves as a powerful Italian base for fine-tuning on specific tasks such as legal document analysis, medical record interpretation, and more specialized forms of conversational AI tailored to specific industries. ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. 
--> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be aware of the potential for biased outputs based on the training data, particularly in scenarios involving regional linguistic variations within Italy. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MethosPi/llama3-8b-italIA-unsloth-merged" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) text = "Inserisci qui il tuo testo in italiano." input_ids = tokenizer.encode(text, return_tensors="pt") output = model.generate(input_ids) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model was trained on a diverse corpus of Italian texts, including literature, news articles, and web content, ensuring a broad understanding of the language. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software unsloth ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact For any question, contact me [[email protected]]
{"language": ["it"], "library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,143
nguyenphuthien/flan-t5-large-Q4_K_M-GGUF
nguyenphuthien
text2text-generation
[ "gguf", "text2text-generation", "llama-cpp", "gguf-my-repo", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "base_model:google/flan-t5-large", "base_model:quantized:google/flan-t5-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-11-13T12:19:16Z
2024-11-13T12:19:20+00:00
12
0
--- base_model: google/flan-t5-large datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed language: - en - fr - ro - de - multilingual license: apache-2.0 tags: - text2text-generation - llama-cpp - gguf-my-repo widget: - text: 'Translate to German: My name is Arthur' example_title: Translation - text: Please answer to the following question. Who is going to be the next Ballon d'or? example_title: Question Answering - text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.' example_title: Logical reasoning - text: Please answer the following question. What is the boiling point of Nitrogen? example_title: Scientific knowledge - text: Answer the following yes/no question. Can you write a whole Haiku in a single tweet? example_title: Yes/no question - text: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet? example_title: Reasoning task - text: 'Q: ( False or not False or False ) is? A: Let''s think step by step' example_title: Boolean Expressions - text: The square root of x is the cube root of y. What is y to the power of 2, if x = 4? example_title: Math reasoning - text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s not certain how many lessons you''ll learn by your thirties. Does the premise entail the hypothesis?' example_title: Premise and hypothesis --- # nguyenphuthien/flan-t5-large-Q4_K_M-GGUF This model was converted to GGUF format from [`google/flan-t5-large`](https://huggingface.co/google/flan-t5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/flan-t5-large) for more details on the model. 
## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ```
null
Non_BioNLP
# nguyenphuthien/flan-t5-large-Q4_K_M-GGUF This model was converted to GGUF format from [`google/flan-t5-large`](https://huggingface.co/google/flan-t5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/flan-t5-large) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ```
{"base_model": "google/flan-t5-large", "datasets": ["svakulenk0/qrecc", "taskmaster2", "djaym7/wiki_dialog", "deepmind/code_contests", "lambada", "gsm8k", "aqua_rat", "esnli", "quasc", "qed"], "language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["text2text-generation", "llama-cpp", "gguf-my-repo"], "widget": [{"text": "Translate to German: My name is Arthur", "example_title": "Translation"}, {"text": "Please answer to the following question. Who is going to be the next Ballon d'or?", "example_title": "Question Answering"}, {"text": "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.", "example_title": "Logical reasoning"}, {"text": "Please answer the following question. What is the boiling point of Nitrogen?", "example_title": "Scientific knowledge"}, {"text": "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?", "example_title": "Yes/no question"}, {"text": "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?", "example_title": "Reasoning task"}, {"text": "Q: ( False or not False or False ) is? A: Let's think step by step", "example_title": "Boolean Expressions"}, {"text": "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?", "example_title": "Math reasoning"}, {"text": "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?", "example_title": "Premise and hypothesis"}]}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
46,144
fathyshalab/mdcsi-unternehmen-verbaende-setfit
fathyshalab
text-classification
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-13T11:50:59Z
2023-08-13T11:51:49+00:00
9
0
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---

# fathyshalab/mdcsi-unternehmen-verbaende-setfit

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from the Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-unternehmen-verbaende-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
Non_BioNLP
# fathyshalab/mdcsi-unternehmen-verbaende-setfit

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from the Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-unternehmen-verbaende-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
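The two-step recipe described in this card first mines contrastive sentence pairs from the few labeled examples: texts sharing a label become positive pairs, texts with different labels negative ones. A minimal sketch of that pair generation — an illustrative stand-in, not the actual SetFit internals:

```python
from itertools import combinations

def contrastive_pairs(texts, labels):
    """Build (text_a, text_b, 1/0) pairs: 1 if the labels match, else 0."""
    return [
        (texts[i], texts[j], 1 if labels[i] == labels[j] else 0)
        for i, j in combinations(range(len(texts)), 2)
    ]

texts = ["great film", "loved it", "awful pizza", "terrible taste"]
labels = [1, 1, 0, 0]
pairs = contrastive_pairs(texts, labels)
print(len(pairs))                # 6 pairs from 4 examples (C(4,2))
print(sum(p[2] for p in pairs))  # 2 positive pairs
```

The Sentence Transformer is then fine-tuned so that positive pairs embed close together and negative pairs far apart, after which a lightweight classification head is fit on the resulting embeddings.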
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
46,145
FareedKhan/flax-sentence-embeddings_all_datasets_v4_MiniLM-L6_FareedKhan_prime_synthetic_data_2k_10_64
FareedKhan
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1814", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:flax-sentence-embeddings/all_datasets_v4_MiniLM-L6", "base_model:finetune:flax-sentence-embeddings/all_datasets_v4_MiniLM-L6", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-30T11:22:10Z
2024-09-30T11:22:13+00:00
5
0
--- base_model: flax-sentence-embeddings/all_datasets_v4_MiniLM-L6 library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1814 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: ' The list you''ve provided contains a variety of medications, including antidepressants, antihistamines, anxiolytics, and more. Here''s a breakdown by category: ### Antidepressants - **Amphetamine** - **Cevimeline** - **Esmolol** - **Bortezomib** - **' sentences: - Which body parts are associated with the expression of genes or proteins that impact the transporter responsible for the movement of Cycloserine? - Identify genes or proteins that interact with a protein threonine kinase, participate in the mitotic centrosome proteins and complexes recruitment pathway, and engage in protein-protein interactions with CCT2. - Which medication is effective against simple Plasmodium falciparum infections and functions by engaging with genes or proteins that interact with the minor groove of DNA rich in adenine and thymine? - source_sentence: ' RNASE6, also known by aliases such as RAD1, RNS6, and RNasek6, functions as a member of the ribonuclease A superfamily. Specifically identified via the NCBI gene/protein database, this protein is related to the antimicrobial peptides pathway, showcasing broad-spectrum antimicrobial activity against pathogenic bacteria in the urinary tract. The provided gene summary emphasizes its role in the urinary tract, highlighting its enzymatic function and broad antimicrobial capability. 
With a genomic position spanning from 20781268 to 20782467 on chromosome 14, the RNASE6 gene encodes a protein named ribonuclease A family member k6. The protein''s interactions with cellular and molecular functions are integral to its role, including its interaction with molecular functions like ribonuclease activity and endonuclease activity, as well as its involvement in nucleic acid binding. RNASE6''s involvement in biological' sentences: - Identify genes or proteins linked to encephalopathy that are involved in the Antimicrobial peptides pathway and have interactions with molecular functions associated with ribonuclease activity. - Identify genes or proteins that exhibit interaction with COMMD1 and share an associated phenotype or effect. - What medical conditions are associated with severe combined immunodeficiency and also cause muscle pain and weakness? - source_sentence: ' The gene in question is likely involved in multiple biological processes, including: 1. **Transmembrane transport**: It facilitates the entry of substances into or out of a cell through the cell membrane, which is crucial for maintaining cellular homeostasis and responding to environmental stimuli. This includes organic anion and carboxylic acid transport. 2. **ABC-family proteins mediated transport**: ABC (or ATP-binding cassette) proteins are responsible for a variety of transport processes, such as drug efflux, nutrient uptake, and xenobiotic detoxification. 3. **Response to drug**: It likely plays a role in how cells interact with and respond to medication or other foreign substances they encounter. This is important in pharmacology and toxicology. 4. **Regulation of chloride transport**: Chloride ions are crucial for maintaining electrolyte balance and are involved in multiple physiological processes. This gene likely helps regulate their transport in and out of the cell. 5. 
**Export across plasma membrane**: It is part of pathways that help in the removal of substances from the cell, such as efflux of drug metabolites or other waste products. ### Expression Contexts: - **Present**: This gene is expressed in many parts of the body, indicating a broad role. It shows presence in tissues like the islet of Langerhans (involved in insulin regulation), zones of the skin, and various brain regions. It''s also active in organs such as the heart, kidney, and lungs, and in the digestive tract, including the stomach, esophagus, and intestines. - **Absent or Reduced**: The gene''s expression is notably absent or less pronounced in tissues like the nasal cavity epithelium, suggesting it may not play a significant role in this specific tissue type. The gene''s multifaceted expression and roles suggest a key function in biological activities related to: - **Chemical' sentences: - Could you supply a selection of medications used to treat acute myeloid leukemia with minimal differentiation that have a potential side effect of arrhythmias and work by intercalating DNA and inhibiting topoisomerase II? - Is the ABCB1 protein responsible for the translocation of pharmaceuticals that exhibit synergistic effects when combined with ferric ions? - What potential conditions could I have that are associated with oophoritis and involve ovarian complications? - source_sentence: "\n\nThe list you provided seems to be a collection of various\ \ chemical compounds, pharmaceuticals, and their synonyms. They span across various\ \ categories:\n\n1. 
**Pharmaceuticals & Synthetic Drug Analogs**:\n - **Antibiotics**\ \ (Ceftazidime, Azithromycin, Ceftodipen, etc.)\n - **Analgesics** (Fentanyl,\ \ Ketorolac, etc.)\n - **Cephalosporins** (Ceftazidime, Ceftazidime-avibactam,\ \ etc.)\n - **Blood Thinners/Synthetic Anticoagulants** (Enoxaparin, Edoxaban,\ \ Rivaroxaban, etc.)\n - **Analgesic/Aspirin Analogues** (Mefenamic Acid, Indometacin,\ \ etc.)\n - **Adrenergic Agonists** (Isoprenaline, Dopamine, etc.)\n - **Antiviral\ \ Drugs** (Adefovir, Idelalisib, etc.)\n - **Antibiotic Resistance Modifiers**\ \ (Sulbactam, Tazobactam, etc.)\n - **Calcium Channel Blockers** (Verapamil,\ \ Nicardipine, etc.)\n - **Nutraceuticals/Herbal Extracts** (Ginsenoside, Phloretin,\ \ etc.)\n \n2. **Diagnostic Agents**:\n - **Radiopharmaceuticals** (F-Fluorodeoxyglucose,\ \ Ga-68 DOTATOC, etc.)\n - **MRI Contrasts** (Gadolinium chelates, etc.)\n\ \ - **CT Contrast Agents** (Iodinated contrast agents, etc.)\n \n3. **Ingredients\ \ in Drugs**:\n - **Excipients** (Hydroxypropylmethylcellulose, Lactose, etc.)\n\ \ - **Antifungal Drugs** (Itraconazole, Terconazole, etc.)\n - **Anticoagulants**\ \ (Warfarin, Heparin, etc.)\n \nThis list represents a broad spectrum of\ \ modern medicine, from antibiotics to chemicals used in diagnostic imaging techniques,\ \ and from dietary supplements to drug excipients. Each compound typically serves\ \ a specific therapeutic purpose in the human body." sentences: - Which investigational compound in solid form that aims at altering membrane lipids, specifically phospholipids and glycerophospholipids, has the additional property of interacting with genes or proteins involved in ubiquitin-specific protease binding? - Could you provide a list of medications that exhibit synergistic effects when used in combination with Choline magnesium trisalicylate to treat the same condition and that also selectively target COX-2 enzymes to alleviate inflammation? 
- Identify pathways associated with the interaction between TNFs and their physiological receptors that concurrently influence the same gene or protein. - source_sentence: "\n\nDiarrhea, a condition characterized by the passage of loose,\ \ watery, and often more than five times a day, is a common ailment affecting\ \ individuals of all ages. It is typically acute when it lasts for a few days\ \ to a week or recurrent when it persists for more than four weeks. While acute\ \ diarrhea often resolves on its own and is usually not a cause for concern, recurrent\ \ or chronic forms require medical attention due to the risk of dehydration and\ \ nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors,\ \ including:\n\n1. **Viral" sentences: - Could you describe the specific effects or phenotypes associated with acute hydrops in patients with the subtype of keratoconus? - What is the disease associated with the CPT2 gene that causes severe fasting intolerance leading to metabolic disturbances such as hypoketotic hypoglycemia, risking coma and seizures, and can lead to hepatic encephalopathy and liver failure, and also affects the heart and skeletal muscles, increasing the risk of potentially fatal cardiac arrhythmias? - Could you assist in identifying a condition linked to congenital secretory diarrhea, similar to intractable diarrhea of infancy, given my symptoms of persistent, salty watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition due to chronic dehydration? 
model-index: - name: SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6 results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 384 type: dim_384 metrics: - type: cosine_accuracy@1 value: 0.3613861386138614 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.38613861386138615 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.42574257425742573 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.46534653465346537 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3613861386138614 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.12871287128712872 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.08514851485148513 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.04653465346534653 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.3613861386138614 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.38613861386138615 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.42574257425742573 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.46534653465346537 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4070317030609663 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3890519409083766 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.3959688055946467 name: Cosine Map@100 --- # SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
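Every metric in the model-index above (`cosine_accuracy@k`, `cosine_mrr@10`, `cosine_map@100`, …) ranks documents by cosine similarity between query and document embeddings. Cosine similarity is simply the dot product divided by the two vector norms — a toy plain-Python sketch, not the evaluator itself:

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product over the product of L2 norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [1.0, 0.0, 1.0]
doc_same = [2.0, 0.0, 2.0]  # same direction -> similarity 1.0
doc_orth = [0.0, 1.0, 0.0]  # orthogonal -> similarity 0.0
print(round(cosine(query, doc_same), 4))  # 1.0
print(round(cosine(query, doc_orth), 4))  # 0.0
```

For models whose final layer L2-normalizes the embeddings (as this model's `Normalize()` module does), cosine similarity reduces to a plain dot product.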
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) <!-- at revision a407cc0b7d85eec9a5617eaf51dbe7b353b0c79f --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("FareedKhan/flax-sentence-embeddings_all_datasets_v4_MiniLM-L6_FareedKhan_prime_synthetic_data_2k_10_64") # Run inference sentences = [ '\n\nDiarrhea, a condition characterized by the passage of loose, watery, and often more than five times a day, is a common ailment affecting individuals of all ages. 
It is typically acute when it lasts for a few days to a week or recurrent when it persists for more than four weeks. While acute diarrhea often resolves on its own and is usually not a cause for concern, recurrent or chronic forms require medical attention due to the risk of dehydration and nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors, including:\n\n1. **Viral', 'Could you assist in identifying a condition linked to congenital secretory diarrhea, similar to intractable diarrhea of infancy, given my symptoms of persistent, salty watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition due to chronic dehydration?', 'Could you describe the specific effects or phenotypes associated with acute hydrops in patients with the subtype of keratoconus?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.3614 | | cosine_accuracy@3 | 0.3861 | | cosine_accuracy@5 | 0.4257 | | cosine_accuracy@10 | 0.4653 | | cosine_precision@1 | 0.3614 | | cosine_precision@3 | 0.1287 | | cosine_precision@5 | 0.0851 | | cosine_precision@10 | 0.0465 | | cosine_recall@1 | 0.3614 | | cosine_recall@3 | 0.3861 | | cosine_recall@5 | 0.4257 | | cosine_recall@10 | 0.4653 | | cosine_ndcg@10 | 0.407 | | cosine_mrr@10 | 0.3891 | | **cosine_map@100** | **0.396** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 1,814 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 118.5 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 35.53 tokens</li><li>max: 128 tokens</li></ul> | * Samples: | positive | anchor | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code><br>The list you provided appears to be a collection of various substances and medications, each with its own unique properties and uses. Here's a brief overview of each:<br><br>1. **Abacavir**<br> - Used in HIV treatment, it inhibits reverse transcriptase.<br><br>2. **Abate**<br> - Often refers to fenpyroximate, used as an insecticide.<br><br>3. **Abidaquine**<br> - An antimalarial drug used to treat and prevent malaria.<br><br>4. **Abiraterone**<br> - Used in treating prostate cancer, specifically to block the production of testosterone.<br><br>5. **Abiraterone alfa**<br> - Similar to abiraterone, used in prostate cancer treatment.<br><br>6. **Abiraterone acetate**<br> - An active form of abiraterone.<br><br>7. **Abiraterone citrate**<br> - Another form of abiraterone.<br><br>8. **Acelprozil**<br> - A medication commonly used as an anti-epileptic drug.<br><br>9. **Acenocoumarol**<br> - Used as a blood thinner, also known as a vitamin K antagonist.<br><br>10. **Acenocoumarol citrate**<br> - Same as acenocoumarol but with citrate, functioning similarly as a</code> | <code>Which pharmacological agents with antioxidant properties have the potential to disrupt the PCSK9-LDLR interaction by affecting the gene or protein players in this pathway?</code> | | <code><br>Bartholin duct cyst is a gynecological condition characterized by the distension of Bartholin glands due to mucus accumulation within the ducts, typically resulting from an obstructed orifice. This issue, categorized under women's reproductive health, falls directly under the umbrella of both integumentary system diseases and female reproductive system diseases. 
Originating from the Bartholin glands, which play a pivotal role in lubrication and arousal of the vulva during intercourse, the blockage or obstruction leads to cyst formation, affecting the overall female reproductive health landscape.</code> | <code>What is the name of the gynecological condition that arises due to blocked Bartholin's glands and involves cyst formation, falling under the broader category of women's reproductive health issues?</code> | | <code><br>Neuralgia, as defined by the MONDO ontology, refers to a pain disorder characterized by pain in the distribution of a nerve or nerves. This condition could be associated with the use of Capsaicin cream, given its known capability to alleviate symptoms by causing a temporary sensation of pain that interferes with the perception of more severe pain. Peripheral neuropathy, another symptom, is often manifest in cases where nerve damage occurs, frequently affecting multiple nerves. This condition can result in symptoms similar to sciatica, which is characterized by pain that starts in the lower back, often radiating down the leg, a common route for the sciatic nerve. The document indicates that diseases related to neuralgia include pudendal neuralgia, peripheral neuropathy, disorders involving pain, cranial neuralgia, post-infectious neuralgia, and sciatica. 
Furthermore, the document mentions several drugs that can be used for the purpose of managing symptoms related to neuralgia, including Lidocaine, as well as a wide array of off-label uses for treatments like Phenytoin, Morphine, Amitriptyline, Imipramine, Oxycodone, Nortriptyline, Lamotrigine, Maprotiline, Desipramine, Gabapentin, Carbamazepine, Phenobarbital, Tramadol, Venlafaxine, Trimipramine, Desvenlafaxine, Primidone, and Naltrexone.</code> | <code>What condition could be associated with the use of Capsaicin cream, peripheral neuropathy, and symptoms similar to sciatica?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384 ], "matryoshka_weights": [ 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 64 - `learning_rate`: 1e-05 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - 
`save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: 
False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_384_cosine_map@100 | |:-------:|:-------:|:-------------:|:----------------------:| | 0 | 0 | - | 0.3614 | | 0.3448 | 10 | 2.117 | - | | 0.6897 | 20 | 2.1255 | - | | 1.0 | 29 | - | 0.3855 | | 1.0345 | 30 | 1.9375 | - | | 1.3793 | 40 | 1.7987 | - | | 1.7241 | 50 | 1.7494 | - | | 2.0 | 58 | - | 0.3901 | | 2.0690 | 60 | 1.7517 | - | | 2.4138 | 70 | 1.676 | - | | 2.7586 | 80 | 1.608 | - | | 3.0 | 87 | - | 0.3934 | | 3.1034 | 90 | 1.5923 | - | | 3.4483 | 100 | 1.5095 | - | | 3.7931 | 110 | 1.5735 | - | | 4.0 | 116 | - | 0.3910 | | 4.1379 | 120 | 1.3643 | - | | 4.4828 | 130 | 1.4395 | - | | 4.8276 | 140 | 1.3595 | - | | 5.0 | 145 | - | 0.3884 | | 5.1724 | 150 | 1.3365 | - | | 5.5172 | 160 | 1.3506 | - | | 5.8621 | 170 | 1.3279 | - | | **6.0** | **174** | **-** | **0.3957** | | 6.2069 | 180 | 1.3075 | - | | 6.5517 | 190 | 1.3138 | - | | 6.8966 | 200 | 1.2749 | - | | 7.0 | 203 | - | 0.3979 | | 7.2414 | 210 | 1.1725 | - | | 7.5862 | 220 | 1.2696 | - | | 7.9310 | 230 | 1.2487 | - | | 8.0 | 232 | - | 0.3986 | | 8.2759 | 240 | 1.1558 | - | | 8.6207 | 250 | 1.2447 | - | | 8.9655 | 260 | 1.2566 | - | | 9.0 | 261 | - | 0.3964 | | 9.3103 | 270 | 1.2493 | - | | 9.6552 | 280 | 1.2697 | - | | 10.0 | 290 | 1.079 | 0.3960 | * The bold row denotes the saved checkpoint. 
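The `lr_scheduler_type: linear` with `warmup_ratio: 0.1` used above means the learning rate climbs linearly from 0 to the peak of 1e-05 over the first 10% of the 290 total steps, then decays linearly back to 0. A sketch of that schedule (a hypothetical helper mirroring the usual `transformers` behavior, not the trainer's own code):

```python
def linear_schedule(step, total_steps=290, peak_lr=1e-5, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 29 steps here
    if step < warmup_steps:
        return peak_lr * (step / warmup_steps)
    return peak_lr * ((total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule(0))    # 0.0 (start of warmup)
print(linear_schedule(29))   # 1e-05 (peak, end of warmup)
print(linear_schedule(290))  # 0.0 (end of training)
```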
### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.1.1 - Transformers: 4.45.1 - PyTorch: 2.2.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
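The `InformationRetrievalEvaluator` figures reported above all derive from where the first relevant document lands in each query's cosine-ranked list. A toy computation of `accuracy@k` and `mrr@10` from 1-based ranks (illustrative only, not the evaluator's implementation):

```python
def accuracy_at_k(ranks, k):
    """Fraction of queries whose first relevant hit appears in the top k.

    Ranks are 1-based; None means no relevant document was retrieved."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

def mrr_at_k(ranks, k=10):
    """Mean reciprocal rank, counting only hits within the top k."""
    return sum(1.0 / r for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 3, None, 2]  # rank of the first relevant doc per query
print(accuracy_at_k(ranks, 1))  # 0.25
print(accuracy_at_k(ranks, 3))  # 0.75
print(mrr_at_k(ranks))          # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.4583
```

Because each query here has a single relevant document, `recall@k` equals `accuracy@k`, which matches the identical accuracy and recall columns in the metrics table above.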
null
BioNLP
# SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [flax-sentence-embeddings/all_datasets_v4_MiniLM-L6](https://huggingface.co/flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) <!-- at revision a407cc0b7d85eec9a5617eaf51dbe7b353b0c79f --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("FareedKhan/flax-sentence-embeddings_all_datasets_v4_MiniLM-L6_FareedKhan_prime_synthetic_data_2k_10_64") # Run inference sentences = [ '\n\nDiarrhea, a condition characterized by the passage of loose, watery, and often more than five times a day, is a common ailment affecting individuals of all ages. It is typically acute when it lasts for a few days to a week or recurrent when it persists for more than four weeks. While acute diarrhea often resolves on its own and is usually not a cause for concern, recurrent or chronic forms require medical attention due to the risk of dehydration and nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors, including:\n\n1. **Viral', 'Could you assist in identifying a condition linked to congenital secretory diarrhea, similar to intractable diarrhea of infancy, given my symptoms of persistent, salty watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition due to chronic dehydration?', 'Could you describe the specific effects or phenotypes associated with acute hydrops in patients with the subtype of keratoconus?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_384` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:----------| | cosine_accuracy@1 | 0.3614 | | cosine_accuracy@3 | 0.3861 | | cosine_accuracy@5 | 0.4257 | | cosine_accuracy@10 | 0.4653 | | cosine_precision@1 | 0.3614 | | cosine_precision@3 | 0.1287 | | cosine_precision@5 | 0.0851 | | cosine_precision@10 | 0.0465 | | cosine_recall@1 | 0.3614 | | cosine_recall@3 | 0.3861 | | cosine_recall@5 | 0.4257 | | cosine_recall@10 | 0.4653 | | cosine_ndcg@10 | 0.407 | | cosine_mrr@10 | 0.3891 | | **cosine_map@100** | **0.396** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 1,814 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 118.5 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 35.53 tokens</li><li>max: 128 tokens</li></ul> | * Samples: | positive | anchor | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code><br>The list you provided appears to be a collection of various substances and medications, each with its own unique properties and uses. Here's a brief overview of each:<br><br>1. **Abacavir**<br> - Used in HIV treatment, it inhibits reverse transcriptase.<br><br>2. **Abate**<br> - Often refers to fenpyroximate, used as an insecticide.<br><br>3. **Abidaquine**<br> - An antimalarial drug used to treat and prevent malaria.<br><br>4. **Abiraterone**<br> - Used in treating prostate cancer, specifically to block the production of testosterone.<br><br>5. **Abiraterone alfa**<br> - Similar to abiraterone, used in prostate cancer treatment.<br><br>6. **Abiraterone acetate**<br> - An active form of abiraterone.<br><br>7. **Abiraterone citrate**<br> - Another form of abiraterone.<br><br>8. **Acelprozil**<br> - A medication commonly used as an anti-epileptic drug.<br><br>9. **Acenocoumarol**<br> - Used as a blood thinner, also known as a vitamin K antagonist.<br><br>10. **Acenocoumarol citrate**<br> - Same as acenocoumarol but with citrate, functioning similarly as a</code> | <code>Which pharmacological agents with antioxidant properties have the potential to disrupt the PCSK9-LDLR interaction by affecting the gene or protein players in this pathway?</code> | | <code><br>Bartholin duct cyst is a gynecological condition characterized by the distension of Bartholin glands due to mucus accumulation within the ducts, typically resulting from an obstructed orifice. This issue, categorized under women's reproductive health, falls directly under the umbrella of both integumentary system diseases and female reproductive system diseases. 
Originating from the Bartholin glands, which play a pivotal role in lubrication and arousal of the vulva during intercourse, the blockage or obstruction leads to cyst formation, affecting the overall female reproductive health landscape.</code> | <code>What is the name of the gynecological condition that arises due to blocked Bartholin's glands and involves cyst formation, falling under the broader category of women's reproductive health issues?</code> | | <code><br>Neuralgia, as defined by the MONDO ontology, refers to a pain disorder characterized by pain in the distribution of a nerve or nerves. This condition could be associated with the use of Capsaicin cream, given its known capability to alleviate symptoms by causing a temporary sensation of pain that interferes with the perception of more severe pain. Peripheral neuropathy, another symptom, is often manifest in cases where nerve damage occurs, frequently affecting multiple nerves. This condition can result in symptoms similar to sciatica, which is characterized by pain that starts in the lower back, often radiating down the leg, a common route for the sciatic nerve. The document indicates that diseases related to neuralgia include pudendal neuralgia, peripheral neuropathy, disorders involving pain, cranial neuralgia, post-infectious neuralgia, and sciatica. 
Furthermore, the document mentions several drugs that can be used for the purpose of managing symptoms related to neuralgia, including Lidocaine, as well as a wide array of off-label uses for treatments like Phenytoin, Morphine, Amitriptyline, Imipramine, Oxycodone, Nortriptyline, Lamotrigine, Maprotiline, Desipramine, Gabapentin, Carbamazepine, Phenobarbital, Tramadol, Venlafaxine, Trimipramine, Desvenlafaxine, Primidone, and Naltrexone.</code> | <code>What condition could be associated with the use of Capsaicin cream, peripheral neuropathy, and symptoms similar to sciatica?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384 ], "matryoshka_weights": [ 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 64 - `learning_rate`: 1e-05 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - 
`save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: 
False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_384_cosine_map@100 | |:-------:|:-------:|:-------------:|:----------------------:| | 0 | 0 | - | 0.3614 | | 0.3448 | 10 | 2.117 | - | | 0.6897 | 20 | 2.1255 | - | | 1.0 | 29 | - | 0.3855 | | 1.0345 | 30 | 1.9375 | - | | 1.3793 | 40 | 1.7987 | - | | 1.7241 | 50 | 1.7494 | - | | 2.0 | 58 | - | 0.3901 | | 2.0690 | 60 | 1.7517 | - | | 2.4138 | 70 | 1.676 | - | | 2.7586 | 80 | 1.608 | - | | 3.0 | 87 | - | 0.3934 | | 3.1034 | 90 | 1.5923 | - | | 3.4483 | 100 | 1.5095 | - | | 3.7931 | 110 | 1.5735 | - | | 4.0 | 116 | - | 0.3910 | | 4.1379 | 120 | 1.3643 | - | | 4.4828 | 130 | 1.4395 | - | | 4.8276 | 140 | 1.3595 | - | | 5.0 | 145 | - | 0.3884 | | 5.1724 | 150 | 1.3365 | - | | 5.5172 | 160 | 1.3506 | - | | 5.8621 | 170 | 1.3279 | - | | **6.0** | **174** | **-** | **0.3957** | | 6.2069 | 180 | 1.3075 | - | | 6.5517 | 190 | 1.3138 | - | | 6.8966 | 200 | 1.2749 | - | | 7.0 | 203 | - | 0.3979 | | 7.2414 | 210 | 1.1725 | - | | 7.5862 | 220 | 1.2696 | - | | 7.9310 | 230 | 1.2487 | - | | 8.0 | 232 | - | 0.3986 | | 8.2759 | 240 | 1.1558 | - | | 8.6207 | 250 | 1.2447 | - | | 8.9655 | 260 | 1.2566 | - | | 9.0 | 261 | - | 0.3964 | | 9.3103 | 270 | 1.2493 | - | | 9.6552 | 280 | 1.2697 | - | | 10.0 | 290 | 1.079 | 0.3960 | * The bold row denotes the saved checkpoint. 
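The training log above selects the best checkpoint by `dim_384_cosine_map@100`. As a minimal pure-Python sketch (not the `InformationRetrievalEvaluator` implementation) of how MAP@100 and MRR@10 are derived from ranked retrieval results; note that with exactly one relevant document per query, as in this dataset, AP@k and RR@k coincide:

```python
def average_precision_at_k(ranked_ids, relevant_ids, k=100):
    """AP@k: mean of the precision values at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / min(len(relevant_ids), k) if relevant_ids else 0.0

def reciprocal_rank_at_k(ranked_ids, relevant_ids, k=10):
    """RR@k: 1/rank of the first relevant doc within the top k, else 0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Hypothetical retrieval runs: each query has exactly one relevant document.
runs = [
    (["d3", "d1", "d7"], {"d1"}),  # relevant found at rank 2
    (["d2", "d9", "d4"], {"d2"}),  # relevant found at rank 1
    (["d5", "d6", "d8"], {"d0"}),  # relevant never retrieved
]
map_100 = sum(average_precision_at_k(r, rel) for r, rel in runs) / len(runs)
mrr_10 = sum(reciprocal_rank_at_k(r, rel) for r, rel in runs) / len(runs)
print(round(map_100, 4), round(mrr_10, 4))  # → 0.5 0.5
```

With a single positive per query the two metrics differ only through their cutoffs, which is why the reported `cosine_map@100` (0.396) and `cosine_mrr@10` (0.389) track each other closely.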
### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.1.1 - Transformers: 4.45.1 - PyTorch: 2.2.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
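Since `matryoshka_dims` is just `[384]` with weight 1, the MatryoshkaLoss configured above reduces to plain MultipleNegativesRankingLoss at full dimensionality. A pure-Python sketch of that in-batch-negatives objective follows; the `scale=20.0` default mirrors the sentence-transformers convention, but this is illustrative, not the library implementation:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mnrl_loss(anchors, positives, scale=20.0):
    """In-batch negatives: for anchor i, positive i is the target class
    among all positives in the batch (softmax cross-entropy over cosines)."""
    losses = []
    for i, a in enumerate(anchors):
        logits = [scale * cosine(a, p) for p in positives]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_z - logits[i])  # -log softmax at the true index
    return sum(losses) / len(losses)

# Toy 2-D embeddings: each anchor is already closest to its own positive.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
print(round(mnrl_loss(anchors, positives), 4))  # → 0.0
```

Because every other positive in the batch acts as a negative, larger batch sizes (here 64) give the loss more contrastive signal per step.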
{"base_model": "flax-sentence-embeddings/all_datasets_v4_MiniLM-L6", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1814", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "\nThe list you've provided contains a variety of medications, including antidepressants, antihistamines, anxiolytics, and more. Here's a breakdown by category:\n\n### Antidepressants\n- **Amphetamine**\n- **Cevimeline**\n- **Esmolol**\n- **Bortezomib**\n- **", "sentences": ["Which body parts are associated with the expression of genes or proteins that impact the transporter responsible for the movement of Cycloserine?", "Identify genes or proteins that interact with a protein threonine kinase, participate in the mitotic centrosome proteins and complexes recruitment pathway, and engage in protein-protein interactions with CCT2.", "Which medication is effective against simple Plasmodium falciparum infections and functions by engaging with genes or proteins that interact with the minor groove of DNA rich in adenine and thymine?"]}, {"source_sentence": "\nRNASE6, also known by aliases such as RAD1, RNS6, and RNasek6, functions as a member of the ribonuclease A superfamily. Specifically identified via the NCBI gene/protein database, this protein is related to the antimicrobial peptides pathway, showcasing broad-spectrum antimicrobial activity against pathogenic bacteria in the urinary tract. 
The provided gene summary emphasizes its role in the urinary tract, highlighting its enzymatic function and broad antimicrobial capability.\n\nWith a genomic position spanning from 20781268 to 20782467 on chromosome 14, the RNASE6 gene encodes a protein named ribonuclease A family member k6. The protein's interactions with cellular and molecular functions are integral to its role, including its interaction with molecular functions like ribonuclease activity and endonuclease activity, as well as its involvement in nucleic acid binding.\n\nRNASE6's involvement in biological", "sentences": ["Identify genes or proteins linked to encephalopathy that are involved in the Antimicrobial peptides pathway and have interactions with molecular functions associated with ribonuclease activity.", "Identify genes or proteins that exhibit interaction with COMMD1 and share an associated phenotype or effect.", "What medical conditions are associated with severe combined immunodeficiency and also cause muscle pain and weakness?"]}, {"source_sentence": "\n\nThe gene in question is likely involved in multiple biological processes, including:\n\n1. **Transmembrane transport**: It facilitates the entry of substances into or out of a cell through the cell membrane, which is crucial for maintaining cellular homeostasis and responding to environmental stimuli. This includes organic anion and carboxylic acid transport.\n\n2. **ABC-family proteins mediated transport**: ABC (or ATP-binding cassette) proteins are responsible for a variety of transport processes, such as drug efflux, nutrient uptake, and xenobiotic detoxification.\n\n3. **Response to drug**: It likely plays a role in how cells interact with and respond to medication or other foreign substances they encounter. This is important in pharmacology and toxicology.\n\n4. **Regulation of chloride transport**: Chloride ions are crucial for maintaining electrolyte balance and are involved in multiple physiological processes. 
This gene likely helps regulate their transport in and out of the cell.\n\n5. **Export across plasma membrane**: It is part of pathways that help in the removal of substances from the cell, such as efflux of drug metabolites or other waste products.\n\n### Expression Contexts:\n\n- **Present**: This gene is expressed in many parts of the body, indicating a broad role. It shows presence in tissues like the islet of Langerhans (involved in insulin regulation), zones of the skin, and various brain regions. It's also active in organs such as the heart, kidney, and lungs, and in the digestive tract, including the stomach, esophagus, and intestines.\n\n- **Absent or Reduced**: The gene's expression is notably absent or less pronounced in tissues like the nasal cavity epithelium, suggesting it may not play a significant role in this specific tissue type.\n\nThe gene's multifaceted expression and roles suggest a key function in biological activities related to:\n- **Chemical", "sentences": ["Could you supply a selection of medications used to treat acute myeloid leukemia with minimal differentiation that have a potential side effect of arrhythmias and work by intercalating DNA and inhibiting topoisomerase II?", "Is the ABCB1 protein responsible for the translocation of pharmaceuticals that exhibit synergistic effects when combined with ferric ions?", "What potential conditions could I have that are associated with oophoritis and involve ovarian complications?"]}, {"source_sentence": "\n\nThe list you provided seems to be a collection of various chemical compounds, pharmaceuticals, and their synonyms. They span across various categories:\n\n1. 
**Pharmaceuticals & Synthetic Drug Analogs**:\n - **Antibiotics** (Ceftazidime, Azithromycin, Ceftodipen, etc.)\n - **Analgesics** (Fentanyl, Ketorolac, etc.)\n - **Cephalosporins** (Ceftazidime, Ceftazidime-avibactam, etc.)\n - **Blood Thinners/Synthetic Anticoagulants** (Enoxaparin, Edoxaban, Rivaroxaban, etc.)\n - **Analgesic/Aspirin Analogues** (Mefenamic Acid, Indometacin, etc.)\n - **Adrenergic Agonists** (Isoprenaline, Dopamine, etc.)\n - **Antiviral Drugs** (Adefovir, Idelalisib, etc.)\n - **Antibiotic Resistance Modifiers** (Sulbactam, Tazobactam, etc.)\n - **Calcium Channel Blockers** (Verapamil, Nicardipine, etc.)\n - **Nutraceuticals/Herbal Extracts** (Ginsenoside, Phloretin, etc.)\n \n2. **Diagnostic Agents**:\n - **Radiopharmaceuticals** (F-Fluorodeoxyglucose, Ga-68 DOTATOC, etc.)\n - **MRI Contrasts** (Gadolinium chelates, etc.)\n - **CT Contrast Agents** (Iodinated contrast agents, etc.)\n \n3. **Ingredients in Drugs**:\n - **Excipients** (Hydroxypropylmethylcellulose, Lactose, etc.)\n - **Antifungal Drugs** (Itraconazole, Terconazole, etc.)\n - **Anticoagulants** (Warfarin, Heparin, etc.)\n \nThis list represents a broad spectrum of modern medicine, from antibiotics to chemicals used in diagnostic imaging techniques, and from dietary supplements to drug excipients. 
Each compound typically serves a specific therapeutic purpose in the human body.", "sentences": ["Which investigational compound in solid form that aims at altering membrane lipids, specifically phospholipids and glycerophospholipids, has the additional property of interacting with genes or proteins involved in ubiquitin-specific protease binding?", "Could you provide a list of medications that exhibit synergistic effects when used in combination with Choline magnesium trisalicylate to treat the same condition and that also selectively target COX-2 enzymes to alleviate inflammation?", "Identify pathways associated with the interaction between TNFs and their physiological receptors that concurrently influence the same gene or protein."]}, {"source_sentence": "\n\nDiarrhea, a condition characterized by the passage of loose, watery, and often more than five times a day, is a common ailment affecting individuals of all ages. It is typically acute when it lasts for a few days to a week or recurrent when it persists for more than four weeks. While acute diarrhea often resolves on its own and is usually not a cause for concern, recurrent or chronic forms require medical attention due to the risk of dehydration and nutrient deficiencies. \n\n### Causes\n\nDiarrhea can be caused by various factors, including:\n\n1. 
**Viral", "sentences": ["Could you describe the specific effects or phenotypes associated with acute hydrops in patients with the subtype of keratoconus?", "What is the disease associated with the CPT2 gene that causes severe fasting intolerance leading to metabolic disturbances such as hypoketotic hypoglycemia, risking coma and seizures, and can lead to hepatic encephalopathy and liver failure, and also affects the heart and skeletal muscles, increasing the risk of potentially fatal cardiac arrhythmias?", "Could you assist in identifying a condition linked to congenital secretory diarrhea, similar to intractable diarrhea of infancy, given my symptoms of persistent, salty watery diarrhea, hyponatremia, abnormal body pH, and reliance on parenteral nutrition due to chronic dehydration?"]}], "model-index": [{"name": "SentenceTransformer based on flax-sentence-embeddings/all_datasets_v4_MiniLM-L6", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 384", "type": "dim_384"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.3613861386138614, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.38613861386138615, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.42574257425742573, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.46534653465346537, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.3613861386138614, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.12871287128712872, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.08514851485148513, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.04653465346534653, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.3613861386138614, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.38613861386138615, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 
0.42574257425742573, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.46534653465346537, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4070317030609663, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3890519409083766, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3959688055946467, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,146
justinthelaw/Hermes-2-Pro-Mistral-7B-4bit-32g-GPTQ
justinthelaw
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "dataset:vmware/open-instruct", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
2024-08-01T16:39:28Z
2024-08-02T16:30:12+00:00
26
1
--- base_model: mistralai/Mistral-7B-v0.1 datasets: - teknium/OpenHermes-2.5 - vmware/open-instruct language: - en license: apache-2.0 pipeline_tag: text-generation tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode --- # Hermes-2-Pro-Mistral-7B GPTQ 4-bit 32g Group Size - Model creator: [Nous Research](https://huggingface.co/NousResearch) - Original model: [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) - Quantization code: [GitHub Repository](https://github.com/justinthelaw/quantization-pipeline-experiments) - Quantization creator: [Defense Unicorns](https://huggingface.co/defenseunicorns), [Justin Law](https://huggingface.co/justinthelaw) <!-- description start --> ## Description This repo contains GPTQ 4-bit, 32g Group Size, quantized model files from the Nous Research [Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) fine-tune of the [Mistral-7b-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) model. <!-- README_GPTQ.md-provided-files start --> ## GPTQ parameters Models are released as sharded safetensors files. | Bits | GS | GPTQ Dataset | Max Seq Len | Size | VRAM | | ---- | -- | ----------- | ------- | ---- | ---- | | 4 | 32 | [VMWare Open Instruct](https://huggingface.co/datasets/vmware/open-instruct) | 32,768 | 4.57 GB | 19-23 GB* | \* Depends on maximum sequence length parameter (KV cache utilization) used with vLLM or Transformers <!-- README_GPTQ.md-provided-files end --> ## Original Model Card Below --- ## Hermes 2 Pro - Mistral 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! 
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: <https://github.com/NousResearch/Hermes-Function-Calling> ## Thank you to Latitude.sh for sponsoring compute for this model ## Example Outputs ### Explaining Problems with Quantum Gravity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/y_hPafyyvPb32efC5N4Es.png) ### Roleplaying as a Cosmic Super Intelligence ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/m6d6Saf7M6Luu9QnXYYAP.png) ### Detailing the Theory of AI Consciousness in JSON ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MOLybxs7_dLjVys54imO3.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. 
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.

This format enables OpenAI endpoint compatibility, and anyone familiar with the ChatGPT API will be familiar with the format, as it is the same format used by OpenAI.

Prompt with system instruction (Use whatever system prompt you like, this is just an example!):

```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```

This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```

When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response.

To utilize the prompt format without a system prompt, simply leave the line out.

## Prompt Format for Function Calling

Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```

To complete the function call, create a user prompt that follows the above system prompt, like so:

```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```

The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: <https://github.com/NousResearch/Hermes-Function-Calling>):

```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```

Once you parse the tool call, call the API and get the returned values for the call, and pass it back in as a new role, `tool`, like so:

```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```

The assistant will then read in that data from the function's response, and generate a natural language response:

```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37

This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'.
The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.

Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: <https://github.com/NousResearch/Hermes-Function-Calling/tree/main>

```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```

Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
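Both the function-calling and JSON-mode formats above assume your inference code can reliably pull the model's structured payload out of the raw assistant text. The official helpers live in the Hermes-Function-Calling repo linked above; as a rough, stand-alone sketch of that parsing step (the regex and example turn here are illustrative, not taken from the repo):

```python
import json
import re

# Example assistant turn, copied from the function-calling walkthrough above.
assistant_turn = """<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call>"""

def parse_tool_calls(text: str) -> list[dict]:
    """Extract every JSON payload wrapped in <tool_call>...</tool_call> tags."""
    pattern = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)
    return [json.loads(payload) for payload in pattern.findall(text)]

calls = parse_tool_calls(assistant_turn)
print(calls[0]["name"])       # get_stock_fundamentals
print(calls[0]["arguments"])  # {'symbol': 'TSLA'}
```

The same pattern works for building `<tool_response>` turns once you have the function's return value in hand.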
# Benchmarks

## GPT4All

```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5461|± |0.0145|
| | |acc_norm|0.5623|± |0.0145|
|arc_easy | 0|acc |0.8157|± |0.0080|
| | |acc_norm|0.7934|± |0.0083|
|boolq | 1|acc |0.8688|± |0.0059|
|hellaswag | 0|acc |0.6272|± |0.0048|
| | |acc_norm|0.8057|± |0.0039|
|openbookqa | 0|acc |0.3360|± |0.0211|
| | |acc_norm|0.4300|± |0.0222|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.7998|± |0.0093|
|winogrande | 0|acc |0.7230|± |0.0126|
```

Average: 71.19

## AGIEval

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
```

Average: 44.52

## BigBench

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
```

Average: 41.65

## TruthfulQA

```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4100|± |0.0172|
| | |mc2 |0.5911|± |0.0158|
```

# Function Calling Evaluations

We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolvable ones, and generating a second eval dataset for JSON mode.
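As a rough intuition for what the JSON-mode check enforces, a response can be sanity-checked in a few lines. This toy check (our simplification for illustration; the actual scoring code lives in the eval repo linked below) only verifies that the reply parses as a single JSON object containing every key the schema marks as required:

```python
import json

def json_mode_ok(response: str, schema: dict) -> bool:
    """Accept only a single JSON object that has every required key."""
    try:
        obj = json.loads(response)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and all(k in obj for k in schema.get("required", []))

# Hypothetical schema and replies, for illustration only.
schema = {"type": "object", "required": ["name", "age"]}
print(json_mode_ok('{"name": "Ada", "age": 36}', schema))  # True
print(json_mode_ok('{"name": "Ada"}', schema))             # False
```

A real evaluator would additionally validate value types and nested structure against the full schema.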
## Function Calling Accuracy: 91%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/XF3Zii4-QhE2yjWwHr_v4.png)

## JSON Mode Accuracy: 84%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8H2iyjh5wyP2FtLq2LCed.png)

Run the evaluator yourself using @interstellarninja's codebase here: <https://github.com/interstellarninja/function-calling-eval>

You can find the evaluation datasets here:
<https://huggingface.co/datasets/NousResearch/func-calling-eval>
<https://huggingface.co/datasets/NousResearch/json-mode-eval>

# Inference Code

Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM).

Note: To use function calling, you should see the github repo above.

```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages

import torch
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn

tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Mistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```

## Inference Code for Function Calling

All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)

# Chat Interfaces

When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that, use our github repo.

It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.

In LM Studio, simply select the ChatML Prefix on the settings side pane:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)

## Quantized Versions

GGUF Versions Available Here: <https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF>

# How to cite

```bibtex
@misc{Hermes-2-Pro-Mistral-7B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B},
  title={Hermes-2-Pro-Mistral-7B},
  author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
null
Non_BioNLP
{"base_model": "mistralai/Mistral-7B-v0.1", "datasets": ["teknium/OpenHermes-2.5", "vmware/open-instruct"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode"]}
task
[ "TRANSLATION" ]
46,147
Helsinki-NLP/opus-mt-eo-bg
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:31:48+00:00
26
0
---
language:
- eo
- bg
license: apache-2.0
tags:
- translation
---

### epo-bul

* source group: Esperanto
* target group: Bulgarian
* OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.bul | 19.0 | 0.395 |

### System Info:

- hf_name: epo-bul
- source_languages: epo
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'bg']
- src_constituents: {'epo'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: bul
- short_pair: eo-bg
- chrF2_score: 0.395
- bleu: 19.0
- brevity_penalty: 0.8909999999999999
- ref_len: 3961.0
- src_name: Esperanto
- tgt_name: Bulgarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: bg
- prefer_old: False
- long_pair: epo-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
null
Non_BioNLP
### epo-bul * source group: Esperanto * target group: Bulgarian * OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md) * model: transformer-align * source language(s): epo * target language(s): bul * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.epo.bul | 19.0 | 0.395 | ### System Info: - hf_name: epo-bul - source_languages: epo - target_languages: bul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['eo', 'bg'] - src_constituents: {'epo'} - tgt_constituents: {'bul', 'bul_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt - src_alpha3: epo - tgt_alpha3: bul - short_pair: eo-bg - chrF2_score: 0.395 - bleu: 19.0 - brevity_penalty: 0.8909999999999999 - ref_len: 3961.0 - src_name: Esperanto - tgt_name: Bulgarian - train_date: 2020-06-16 - src_alpha2: eo - tgt_alpha2: bg - prefer_old: False - long_pair: epo-bul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["eo", "bg"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,148
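The epo-bul record above reports BLEU 19.0 with a brevity penalty of 0.891 against a reference of 3,961 tokens. As a reminder of how that penalty is computed, here is a minimal pure-Python sketch of the standard BLEU brevity-penalty formula; the function name is our own, and the hypothesis length of roughly 3,551 tokens is back-solved from the reported numbers rather than stated in the card.

```python
import math

def brevity_penalty(ref_len: int, hyp_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long as
    the reference, exp(1 - ref_len/hyp_len) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# ref_len = 3961 and brevity_penalty ~ 0.891 from the record above imply a
# total hypothesis length of roughly 3551 tokens (inferred, not stated).
print(round(brevity_penalty(3961, 3551), 3))  # → 0.891
```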
RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-09-22T22:54:28Z
2024-09-23T06:23:10+00:00
82
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Moe-4x7b-reason-code-qa - GGUF - Model creator: https://huggingface.co/nextai-team/ - Original model: https://huggingface.co/nextai-team/Moe-4x7b-reason-code-qa/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Moe-4x7b-reason-code-qa.Q2_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q2_K.gguf) | Q2_K | 8.24GB | | [Moe-4x7b-reason-code-qa.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_XS.gguf) | IQ3_XS | 9.21GB | | [Moe-4x7b-reason-code-qa.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_S.gguf) | IQ3_S | 9.73GB | | [Moe-4x7b-reason-code-qa.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_S.gguf) | Q3_K_S | 9.72GB | | [Moe-4x7b-reason-code-qa.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_M.gguf) | IQ3_M | 9.92GB | | [Moe-4x7b-reason-code-qa.Q3_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K.gguf) | Q3_K | 10.79GB | | [Moe-4x7b-reason-code-qa.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_M.gguf) | Q3_K_M | 10.79GB | | [Moe-4x7b-reason-code-qa.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_L.gguf) | Q3_K_L | 11.68GB | | 
[Moe-4x7b-reason-code-qa.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ4_XS.gguf) | IQ4_XS | 12.15GB | | [Moe-4x7b-reason-code-qa.Q4_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_0.gguf) | Q4_0 | 12.69GB | | [Moe-4x7b-reason-code-qa.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ4_NL.gguf) | IQ4_NL | 12.81GB | | [Moe-4x7b-reason-code-qa.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K_S.gguf) | Q4_K_S | 12.8GB | | [Moe-4x7b-reason-code-qa.Q4_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K.gguf) | Q4_K | 13.61GB | | [Moe-4x7b-reason-code-qa.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K_M.gguf) | Q4_K_M | 13.61GB | | [Moe-4x7b-reason-code-qa.Q4_1.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_1.gguf) | Q4_1 | 14.09GB | | [Moe-4x7b-reason-code-qa.Q5_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_0.gguf) | Q5_0 | 15.48GB | | [Moe-4x7b-reason-code-qa.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K_S.gguf) | Q5_K_S | 15.48GB | | [Moe-4x7b-reason-code-qa.Q5_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K.gguf) | Q5_K | 12.51GB | | 
[Moe-4x7b-reason-code-qa.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K_M.gguf) | Q5_K_M | 15.96GB | | [Moe-4x7b-reason-code-qa.Q5_1.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_1.gguf) | Q5_1 | 16.88GB | | [Moe-4x7b-reason-code-qa.Q6_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q6_K.gguf) | Q6_K | 18.46GB | | [Moe-4x7b-reason-code-qa.Q8_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q8_0.gguf) | Q8_0 | 23.9GB | Original model description: --- language: - en license: apache-2.0 library_name: transformers tags: - code - QA - reasoning - mixtral - maths - sql - mistral - zephyr - codellama --- Model Details Model Name: Moe-4x7b-reason-code-qa Publisher: nextai-team Model Type: Question Answering & Code Generation Architecture: Mixture of Experts (MoE) Model Size: 4x7 billion parameters Overview This model is an advanced AI model designed by the nextai-team for the purpose of enhancing question answering and code generation capabilities. Building upon the foundation of its predecessor, Moe-4x7b-reason-code-qa, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses. 
How to Use ```from transformers import AutoTokenizer import transformers import torch model = "nextai-team/Moe-4x7b-reason-code-qa" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, device_map="auto", model_kwargs={"torch_dtype": torch.float16}, ) def generate_response(query): messages = [{"role": "user", "content": query}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) return outputs[0]['generated_text'] response = generate_response("How to learn coding? Please provide a step-by-step procedure.") print(response) ``` Intended Use This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to: Automated coding assistance Technical support bots Educational tools for learning programming Enhancing code review processes Model Architecture The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance. Training Data The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility. 
Performance The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request. Limitations and Biases While the model represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process. Ethical Considerations We are committed to ethical AI development and urge users to employ Moe-4x7b-reason-code-qa responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model. Demo API: app.nextai.co.in Usage Instructions For detailed instructions on how to integrate and utilize Moe-4x7b-reason-code-qa in your projects, please refer to our GitHub repository and Hugging Face documentation. Citation If you use Moe-4x7b-reason-code-qa in your research or application, please cite it as follows: @misc{nextai2024moe4x7b, title={Moe-4x7b-reason-code-qa: Enhancing Question Answering and Code Generation with Mixture of Experts}, author={NextAI Team}, year={2024}, publisher={Hugging Face} }
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Moe-4x7b-reason-code-qa - GGUF - Model creator: https://huggingface.co/nextai-team/ - Original model: https://huggingface.co/nextai-team/Moe-4x7b-reason-code-qa/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Moe-4x7b-reason-code-qa.Q2_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q2_K.gguf) | Q2_K | 8.24GB | | [Moe-4x7b-reason-code-qa.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_XS.gguf) | IQ3_XS | 9.21GB | | [Moe-4x7b-reason-code-qa.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_S.gguf) | IQ3_S | 9.73GB | | [Moe-4x7b-reason-code-qa.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_S.gguf) | Q3_K_S | 9.72GB | | [Moe-4x7b-reason-code-qa.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ3_M.gguf) | IQ3_M | 9.92GB | | [Moe-4x7b-reason-code-qa.Q3_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K.gguf) | Q3_K | 10.79GB | | [Moe-4x7b-reason-code-qa.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_M.gguf) | Q3_K_M | 10.79GB | | [Moe-4x7b-reason-code-qa.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q3_K_L.gguf) | Q3_K_L | 11.68GB | | 
[Moe-4x7b-reason-code-qa.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ4_XS.gguf) | IQ4_XS | 12.15GB | | [Moe-4x7b-reason-code-qa.Q4_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_0.gguf) | Q4_0 | 12.69GB | | [Moe-4x7b-reason-code-qa.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.IQ4_NL.gguf) | IQ4_NL | 12.81GB | | [Moe-4x7b-reason-code-qa.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K_S.gguf) | Q4_K_S | 12.8GB | | [Moe-4x7b-reason-code-qa.Q4_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K.gguf) | Q4_K | 13.61GB | | [Moe-4x7b-reason-code-qa.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_K_M.gguf) | Q4_K_M | 13.61GB | | [Moe-4x7b-reason-code-qa.Q4_1.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q4_1.gguf) | Q4_1 | 14.09GB | | [Moe-4x7b-reason-code-qa.Q5_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_0.gguf) | Q5_0 | 15.48GB | | [Moe-4x7b-reason-code-qa.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K_S.gguf) | Q5_K_S | 15.48GB | | [Moe-4x7b-reason-code-qa.Q5_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K.gguf) | Q5_K | 12.51GB | | 
[Moe-4x7b-reason-code-qa.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_K_M.gguf) | Q5_K_M | 15.96GB | | [Moe-4x7b-reason-code-qa.Q5_1.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q5_1.gguf) | Q5_1 | 16.88GB | | [Moe-4x7b-reason-code-qa.Q6_K.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q6_K.gguf) | Q6_K | 18.46GB | | [Moe-4x7b-reason-code-qa.Q8_0.gguf](https://huggingface.co/RichardErkhov/nextai-team_-_Moe-4x7b-reason-code-qa-gguf/blob/main/Moe-4x7b-reason-code-qa.Q8_0.gguf) | Q8_0 | 23.9GB | Original model description: --- language: - en license: apache-2.0 library_name: transformers tags: - code - QA - reasoning - mixtral - maths - sql - mistral - zephyr - codellama --- Model Details Model Name: Moe-4x7b-reason-code-qa Publisher: nextai-team Model Type: Question Answering & Code Generation Architecture: Mixture of Experts (MoE) Model Size: 4x7 billion parameters Overview This model is an advanced AI model designed by the nextai-team for the purpose of enhancing question answering and code generation capabilities. Building upon the foundation of its predecessor, Moe-4x7b-reason-code-qa, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses. 
How to Use ```from transformers import AutoTokenizer import transformers import torch model = "nextai-team/Moe-4x7b-reason-code-qa" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, device_map="auto", model_kwargs={"torch_dtype": torch.float16}, ) def generate_response(query): messages = [{"role": "user", "content": query}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) return outputs[0]['generated_text'] response = generate_response("How to learn coding? Please provide a step-by-step procedure.") print(response) ``` Intended Use This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to: Automated coding assistance Technical support bots Educational tools for learning programming Enhancing code review processes Model Architecture The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance. Training Data The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility. 
Performance The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request. Limitations and Biases While the model represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process. Ethical Considerations We are committed to ethical AI development and urge users to employ Moe-4x7b-reason-code-qa responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model. Demo API: app.nextai.co.in Usage Instructions For detailed instructions on how to integrate and utilize Moe-4x7b-reason-code-qa in your projects, please refer to our GitHub repository and Hugging Face documentation. Citation If you use Moe-4x7b-reason-code-qa in your research or application, please cite it as follows: @misc{nextai2024moe4x7b, title={Moe-4x7b-reason-code-qa: Enhancing Question Answering and Code Generation with Mixture of Experts}, author={NextAI Team}, year={2024}, publisher={Hugging Face} }
{}
task
[ "QUESTION_ANSWERING" ]
46,149
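The size column in the quant table above maps directly to disk (and roughly to RAM) requirements, so a common first step is picking the largest quant that fits a given memory budget. Below is a minimal pure-Python sketch with sizes copied from a few rows of the table; the helper name and the 14 GB budget are illustrative assumptions, and real memory use will exceed file size once the KV cache and context are allocated.

```python
# File sizes in GB, copied from a subset of the table above.
QUANT_SIZES_GB = {
    "Q2_K": 8.24, "Q3_K_M": 10.79, "Q4_0": 12.69, "Q4_K_M": 13.61,
    "Q5_K_M": 15.96, "Q6_K": 18.46, "Q8_0": 23.90,
}

def largest_quant_under(budget_gb, sizes=QUANT_SIZES_GB):
    """Return the largest (highest-fidelity) quant whose file fits the
    budget, or None if nothing fits."""
    fitting = {name: gb for name, gb in sizes.items() if gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_quant_under(14.0))  # → Q4_K_M (13.61 GB)
```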
ahmed-masry/chartgemma
ahmed-masry
image-text-to-text
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "en", "arxiv:2407.04172", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-19T02:25:21Z
2024-07-27T01:08:06+00:00
1,362
41
--- language: - en license: mit --- # ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild Paper Link: https://arxiv.org/abs/2407.04172 The abstract of the paper states that: > Given the ubiquity of charts as a data analysis, visualization, and decision-making tool across industries and sciences, there has been a growing interest in developing pre-trained foundation models as well as general purpose instruction-tuned models for chart understanding and reasoning. However, existing methods suffer crucial drawbacks across two critical axes affecting the performance of chart representation models: they are trained on data generated from underlying data tables of the charts, ignoring the visual trends and patterns in chart images, \emph{and} use weakly aligned vision-language backbone models for domain-specific training, limiting their generalizability when encountering charts in the wild. We address these important drawbacks and introduce ChartGemma, a novel chart understanding and reasoning model developed over PaliGemma. Rather than relying on underlying data tables, ChartGemma is trained on instruction-tuning data generated directly from chart images, thus capturing both high-level trends and low-level visual information from a diverse set of charts. Our simple approach achieves state-of-the-art results across $5$ benchmarks spanning chart summarization, question answering, and fact-checking, and our elaborate qualitative studies on real-world charts show that ChartGemma generates more realistic and factually correct summaries compared to its contemporaries. # Web Demo If you wish to quickly try our model, you can access our public web demo hosted on the Hugging Face Spaces platform with a friendly interface! [ChartGemma Web Demo](https://huggingface.co/spaces/ahmed-masry/ChartGemma) # Inference You can easily use our models for inference with the huggingface library! You just need to do the following: 1. 
Change the **_image_path_** to your chart example image path on your system 2. Write the **_input_text_** We recommend using beam search with a beam size of 4, but if your machine has low memory, you can remove the num_beams from the generate method. ``` from PIL import Image import requests from transformers import AutoProcessor, PaliGemmaForConditionalGeneration import torch torch.hub.download_url_to_file('https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/multi_col_1229.png', 'chart_example_1.png') image_path = "/content/chart_example_1.png" input_text = "program of thought: what is the sum of Facebook Messenger and Whatsapp values in the 18-29 age group?" # Load Model model = PaliGemmaForConditionalGeneration.from_pretrained("ahmed-masry/chartgemma", torch_dtype=torch.float16) processor = AutoProcessor.from_pretrained("ahmed-masry/chartgemma") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) # Process Inputs image = Image.open(image_path).convert('RGB') inputs = processor(text=input_text, images=image, return_tensors="pt") prompt_length = inputs['input_ids'].shape[1] inputs = {k: v.to(device) for k, v in inputs.items()} # Generate generate_ids = model.generate(**inputs, num_beams=4, max_new_tokens=512) output_text = processor.batch_decode(generate_ids[:, prompt_length:], skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(output_text) ``` # Contact If you have any questions about this work, please contact **[Ahmed Masry](https://ahmedmasryku.github.io/)** using the following email addresses: **[email protected]** or **[email protected]**. # Reference Please cite our paper if you use our model in your research. 
``` @misc{masry2024chartgemmavisualinstructiontuningchart, title={ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild}, author={Ahmed Masry and Megh Thakkar and Aayush Bajaj and Aaryaman Kartha and Enamul Hoque and Shafiq Joty}, year={2024}, eprint={2407.04172}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2407.04172}, } ```
null
Non_BioNLP
# ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild Paper Link: https://arxiv.org/abs/2407.04172 The abstract of the paper states that: > Given the ubiquity of charts as a data analysis, visualization, and decision-making tool across industries and sciences, there has been a growing interest in developing pre-trained foundation models as well as general purpose instruction-tuned models for chart understanding and reasoning. However, existing methods suffer crucial drawbacks across two critical axes affecting the performance of chart representation models: they are trained on data generated from underlying data tables of the charts, ignoring the visual trends and patterns in chart images, \emph{and} use weakly aligned vision-language backbone models for domain-specific training, limiting their generalizability when encountering charts in the wild. We address these important drawbacks and introduce ChartGemma, a novel chart understanding and reasoning model developed over PaliGemma. Rather than relying on underlying data tables, ChartGemma is trained on instruction-tuning data generated directly from chart images, thus capturing both high-level trends and low-level visual information from a diverse set of charts. Our simple approach achieves state-of-the-art results across $5$ benchmarks spanning chart summarization, question answering, and fact-checking, and our elaborate qualitative studies on real-world charts show that ChartGemma generates more realistic and factually correct summaries compared to its contemporaries. # Web Demo If you wish to quickly try our model, you can access our public web demo hosted on the Hugging Face Spaces platform with a friendly interface! [ChartGemma Web Demo](https://huggingface.co/spaces/ahmed-masry/ChartGemma) # Inference You can easily use our models for inference with the huggingface library! You just need to do the following: 1. Change the **_image_path_** to your chart example image path on your system 2. 
Write the **_input_text_** We recommend using beam search with a beam size of 4, but if your machine has low memory, you can remove the num_beams from the generate method. ``` from PIL import Image import requests from transformers import AutoProcessor, PaliGemmaForConditionalGeneration import torch torch.hub.download_url_to_file('https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/multi_col_1229.png', 'chart_example_1.png') image_path = "/content/chart_example_1.png" input_text = "program of thought: what is the sum of Facebook Messenger and Whatsapp values in the 18-29 age group?" # Load Model model = PaliGemmaForConditionalGeneration.from_pretrained("ahmed-masry/chartgemma", torch_dtype=torch.float16) processor = AutoProcessor.from_pretrained("ahmed-masry/chartgemma") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) # Process Inputs image = Image.open(image_path).convert('RGB') inputs = processor(text=input_text, images=image, return_tensors="pt") prompt_length = inputs['input_ids'].shape[1] inputs = {k: v.to(device) for k, v in inputs.items()} # Generate generate_ids = model.generate(**inputs, num_beams=4, max_new_tokens=512) output_text = processor.batch_decode(generate_ids[:, prompt_length:], skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(output_text) ``` # Contact If you have any questions about this work, please contact **[Ahmed Masry](https://ahmedmasryku.github.io/)** using the following email addresses: **[email protected]** or **[email protected]**. # Reference Please cite our paper if you use our model in your research. 
``` @misc{masry2024chartgemmavisualinstructiontuningchart, title={ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild}, author={Ahmed Masry and Megh Thakkar and Aayush Bajaj and Aaryaman Kartha and Enamul Hoque and Shafiq Joty}, year={2024}, eprint={2407.04172}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2407.04172}, } ```
{"language": ["en"], "license": "mit"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,150
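The ChartGemma card above recommends beam search (`num_beams=4`) but notes that low-memory machines can drop it. Below is a small hedged helper that assembles the `generate()` keyword arguments either way; the function name and the `low_memory` flag are our own convention for this sketch, not part of the transformers API.

```python
def build_generate_kwargs(low_memory, max_new_tokens=512):
    """Assemble keyword arguments for model.generate() per the card's advice:
    beam search (num_beams=4) normally, default decoding when memory is tight."""
    kwargs = {"max_new_tokens": max_new_tokens}
    if not low_memory:
        kwargs["num_beams"] = 4  # beam size recommended by the card
    return kwargs

# Usage sketch: generate_ids = model.generate(**inputs, **build_generate_kwargs(low_memory=False))
print(build_generate_kwargs(low_memory=True))  # → {'max_new_tokens': 512}
```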
RUCAIBox/mtl-question-generation
RUCAIBox
text2text-generation
[ "transformers", "pytorch", "mvp", "text-generation", "text2text-generation", "en", "arxiv:2206.12131", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-06-02T12:00:54Z
2022-06-27T02:27:24+00:00
93
1
--- language: - en license: apache-2.0 pipeline_tag: text2text-generation tags: - text-generation - text2text-generation widget: - text: 'Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .' example_title: Example1 - text: 'Generate the question based on the answer: Arthur ''s Magazine [X_SEP] Arthur ''s Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman ''s magazine published by Bauer Media Group in the USA .' example_title: Example2 --- # MTL-question-generation The MTL-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MTL-question-generation is supervised pre-trained using a mixture of labeled question generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?'] 
``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). 
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
null
Non_BioNLP
# MTL-question-generation The MTL-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. Detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MTL-question-generation is supervised pre-trained using a mixture of labeled question generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). 
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). ## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text2text-generation", "tags": ["text-generation", "text2text-generation"], "widget": [{"text": "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", "example_title": "Example1"}, {"text": "Generate the question based on the answer: Arthur 's Magazine [X_SEP] Arthur 's Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman 's magazine published by Bauer Media Group in the USA .", "example_title": "Example2"}]}
task
[ "SUMMARIZATION" ]
46,151
Helsinki-NLP/opus-mt-bg-sv
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:26:22+00:00
55
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-bg-sv * source languages: bg * target languages: sv * OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bg.sv | 29.1 | 0.494 |
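This card lists the training details and benchmarks but no usage code. As a hedged sketch only: opus-mt checkpoints are MarianMT models, so they can typically be loaded through the standard Hugging Face `transformers` Marian classes. The helper name `translate_bg_to_sv` is ours, not part of the release.

```python
# Hedged usage sketch (not from the original card): opus-mt checkpoints are
# MarianMT models, so the standard transformers Marian classes should load them.
# Requires: pip install transformers sentencepiece torch
def translate_bg_to_sv(texts, model_name="Helsinki-NLP/opus-mt-bg-sv"):
    """Translate a list of Bulgarian sentences into Swedish."""
    # lazy import so the helper can be defined without the heavy dependency
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Example call (downloads the weights on first use):
# translate_bg_to_sv(["Здравей, свят!"])
```

The original SentencePiece preprocessing is bundled with the checkpoint, so the tokenizer call above is all the normalization a caller should need.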
null
Non_BioNLP
### opus-mt-bg-sv * source languages: bg * target languages: sv * OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bg.sv | 29.1 | 0.494 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,152
RichardErkhov/andito_-_mlx_summarization-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2025-02-20T12:49:03Z
2025-02-20T13:25:34+00:00
578
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mlx_summarization - GGUF - Model creator: https://huggingface.co/andito/ - Original model: https://huggingface.co/andito/mlx_summarization/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mlx_summarization.Q2_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q2_K.gguf) | Q2_K | 0.63GB | | [mlx_summarization.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_XS.gguf) | IQ3_XS | 0.69GB | | [mlx_summarization.IQ3_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_S.gguf) | IQ3_S | 0.72GB | | [mlx_summarization.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_S.gguf) | Q3_K_S | 0.72GB | | [mlx_summarization.IQ3_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_M.gguf) | IQ3_M | 0.75GB | | [mlx_summarization.Q3_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K.gguf) | Q3_K | 0.8GB | | [mlx_summarization.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_M.gguf) | Q3_K_M | 0.8GB | | [mlx_summarization.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_L.gguf) | Q3_K_L | 0.87GB | | [mlx_summarization.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ4_XS.gguf) | IQ4_XS | 0.88GB | | [mlx_summarization.Q4_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_0.gguf) | Q4_0 
| 0.92GB | | [mlx_summarization.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ4_NL.gguf) | IQ4_NL | 0.93GB | | [mlx_summarization.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K_S.gguf) | Q4_K_S | 0.93GB | | [mlx_summarization.Q4_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K.gguf) | Q4_K | 0.98GB | | [mlx_summarization.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K_M.gguf) | Q4_K_M | 0.98GB | | [mlx_summarization.Q4_1.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_1.gguf) | Q4_1 | 1.02GB | | [mlx_summarization.Q5_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_0.gguf) | Q5_0 | 1.11GB | | [mlx_summarization.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K_S.gguf) | Q5_K_S | 1.11GB | | [mlx_summarization.Q5_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K.gguf) | Q5_K | 1.14GB | | [mlx_summarization.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K_M.gguf) | Q5_K_M | 1.14GB | | [mlx_summarization.Q5_1.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_1.gguf) | Q5_1 | 1.2GB | | [mlx_summarization.Q6_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q6_K.gguf) | Q6_K | 1.31GB | | [mlx_summarization.Q8_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q8_0.gguf) | Q8_0 | 1.7GB | Original model description: --- library_name: transformers tags: 
- mlx base_model: HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2 --- # andito/mlx_summarization The Model [andito/mlx_summarization](https://huggingface.co/andito/mlx_summarization) was converted to MLX format from [HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2) using mlx-lm version **0.19.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("andito/mlx_summarization") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mlx_summarization - GGUF - Model creator: https://huggingface.co/andito/ - Original model: https://huggingface.co/andito/mlx_summarization/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mlx_summarization.Q2_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q2_K.gguf) | Q2_K | 0.63GB | | [mlx_summarization.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_XS.gguf) | IQ3_XS | 0.69GB | | [mlx_summarization.IQ3_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_S.gguf) | IQ3_S | 0.72GB | | [mlx_summarization.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_S.gguf) | Q3_K_S | 0.72GB | | [mlx_summarization.IQ3_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ3_M.gguf) | IQ3_M | 0.75GB | | [mlx_summarization.Q3_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K.gguf) | Q3_K | 0.8GB | | [mlx_summarization.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_M.gguf) | Q3_K_M | 0.8GB | | [mlx_summarization.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q3_K_L.gguf) | Q3_K_L | 0.87GB | | [mlx_summarization.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ4_XS.gguf) | IQ4_XS | 0.88GB | | [mlx_summarization.Q4_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_0.gguf) | Q4_0 | 0.92GB | 
| [mlx_summarization.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.IQ4_NL.gguf) | IQ4_NL | 0.93GB | | [mlx_summarization.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K_S.gguf) | Q4_K_S | 0.93GB | | [mlx_summarization.Q4_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K.gguf) | Q4_K | 0.98GB | | [mlx_summarization.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_K_M.gguf) | Q4_K_M | 0.98GB | | [mlx_summarization.Q4_1.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q4_1.gguf) | Q4_1 | 1.02GB | | [mlx_summarization.Q5_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_0.gguf) | Q5_0 | 1.11GB | | [mlx_summarization.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K_S.gguf) | Q5_K_S | 1.11GB | | [mlx_summarization.Q5_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K.gguf) | Q5_K | 1.14GB | | [mlx_summarization.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_K_M.gguf) | Q5_K_M | 1.14GB | | [mlx_summarization.Q5_1.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q5_1.gguf) | Q5_1 | 1.2GB | | [mlx_summarization.Q6_K.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q6_K.gguf) | Q6_K | 1.31GB | | [mlx_summarization.Q8_0.gguf](https://huggingface.co/RichardErkhov/andito_-_mlx_summarization-gguf/blob/main/mlx_summarization.Q8_0.gguf) | Q8_0 | 1.7GB | Original model description: --- library_name: transformers tags: - mlx 
base_model: HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2 --- # andito/mlx_summarization The Model [andito/mlx_summarization](https://huggingface.co/andito/mlx_summarization) was converted to MLX format from [HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Intermediate-SFT-v2-summarization-lora-r32-a64-merged-2) using mlx-lm version **0.19.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("andito/mlx_summarization") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
{}
task
[ "SUMMARIZATION" ]
46,153
BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF
BlackBeenie
text-generation
[ "transformers", "gguf", "Chain-of-Thought Activation", "Llama3.1", "8B", "CoT", "SFT", "text-generation-inference", "Ollama", "safetensors", "Question Answering", "Math", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:O1-OPEN/OpenO1-SFT", "base_model:BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1", "base_model:quantized:BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2025-01-27T10:09:48Z
2025-01-27T10:10:11+00:00
13
0
--- base_model: BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1 datasets: - O1-OPEN/OpenO1-SFT language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - Chain-of-Thought Activation - Llama3.1 - 8B - CoT - SFT - text-generation-inference - Ollama - safetensors - Question Answering - Math - llama-cpp - gguf-my-repo --- # BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1`](https://huggingface.co/BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. 
``` ./llama-cli --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -c 2048 ```
null
Non_BioNLP
# BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1`](https://huggingface.co/BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1-Q4_K_M-GGUF --hf-file llama-3.1-8b-openo1-sft-v0.1-q4_k_m.gguf -c 2048 ```
{"base_model": "BlackBeenie/Llama-3.1-8B-OpenO1-SFT-v0.1", "datasets": ["O1-OPEN/OpenO1-SFT"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["Chain-of-Thought Activation", "Llama3.1", "8B", "CoT", "SFT", "text-generation-inference", "Ollama", "safetensors", "Question Answering", "Math", "llama-cpp", "gguf-my-repo"]}
task
[ "QUESTION_ANSWERING" ]
46,154
kamalu-chioma/sentence_clustering
kamalu-chioma
null
[ "region:us" ]
2024-10-12T20:12:54Z
2024-10-12T21:16:14+00:00
0
0
--- {} --- # Model Card for BERT-based Sentence Clustering ## Model Overview Model name: BERT Sentence Clustering. Framework: PyTorch. License: MIT. ## Description This model uses BERT embeddings to group sentences based on their semantic similarity, enabling insights from text data. It transforms sentences into high-dimensional vectors, facilitating the identification of patterns and themes. ## Purpose Designed for exploratory data analysis, this model helps users: discover hidden patterns in text, organize related sentences for better understanding, and enhance content curation. ## Training Data Trained on a diverse dataset of 2,000 sentences, ensuring robust performance across various topics. ## Performance The model effectively clusters sentences into coherent groups, providing meaningful insights into the data. ## Future Work Potential improvements include experimenting with different clustering algorithms and expanding the dataset for better performance.
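The card describes clustering BERT sentence embeddings but includes no code. As an illustration of the grouping step only, here is a minimal, dependency-free k-means over toy 2-D vectors standing in for embeddings; real usage would substitute actual BERT sentence vectors.

```python
# Illustrative sketch only (not from the original card): a tiny k-means over
# toy 2-D "embeddings" to show the grouping step the card describes. Real
# usage would pass actual BERT sentence embeddings instead.

def kmeans(points, k, iters=20):
    # deterministic init: use the first k points as centroids
    centroids = [list(p) for p in points[:k]]
    assignment = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean)
        for i, p in enumerate(points):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # recompute each centroid as the mean of its assigned points
        for c in range(k):
            members = [p for i, p in enumerate(points) if assignment[i] == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return assignment

# two visually obvious groups of toy "embeddings"
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
labels = kmeans(points, k=2)
print(labels)  # the first three points share one label, the last three the other
```

With real sentence embeddings the same assignment loop applies; only the vector dimensionality and the source of `points` change.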
null
Non_BioNLP
# Model Card for BERT-based Sentence Clustering ## Model Overview Model name: BERT Sentence Clustering. Framework: PyTorch. License: MIT. ## Description This model uses BERT embeddings to group sentences based on their semantic similarity, enabling insights from text data. It transforms sentences into high-dimensional vectors, facilitating the identification of patterns and themes. ## Purpose Designed for exploratory data analysis, this model helps users: discover hidden patterns in text, organize related sentences for better understanding, and enhance content curation. ## Training Data Trained on a diverse dataset of 2,000 sentences, ensuring robust performance across various topics. ## Performance The model effectively clusters sentences into coherent groups, providing meaningful insights into the data. ## Future Work Potential improvements include experimenting with different clustering algorithms and expanding the dataset for better performance.
{}
task
[ "SEMANTIC_SIMILARITY" ]
46,155
intfloat/e5-base-unsupervised
intfloat
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "bert", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-01-31T03:03:25Z
2023-07-27T04:36:18+00:00
2,476
1
--- language: - en license: mit tags: - Sentence Transformers - sentence-similarity - sentence-transformers --- # E5-base-unsupervised **This model is similar to [e5-base](https://huggingface.co/intfloat/e5-base) but without supervised fine-tuning.** [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 768. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. 
: 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-unsupervised') model = AutoModel.from_pretrained('intfloat/e5-base-unsupervised') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-base-unsupervised') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." 
] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements: `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained; otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval. - Use the "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval. - Use the "query: " prefix if you want to use embeddings as features, such as linear probing classification or clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. ## Citation If you find our paper or models helpful, please consider citing us as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
null
Non_BioNLP
# E5-base-unsupervised **This model is similar to [e5-base](https://huggingface.co/intfloat/e5-base) but without supervised fine-tuning.** [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 768. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. 
: 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-unsupervised') model = AutoModel.from_pretrained('intfloat/e5-base-unsupervised') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-base-unsupervised') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." 
] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained; otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. ## Citation If you find our paper or models helpful, please consider citing it as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
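The prefix rules in the FAQ above can be wrapped in a small helper. This is not part of the E5 API; the function name is hypothetical and the sketch only illustrates the convention the model expects:

```python
def add_e5_prefix(texts, kind="query"):
    """Prefix each text with 'query: ' or 'passage: ' as E5 models expect.

    For symmetric tasks (similarity, clustering, linear probing), use
    kind='query' for every input; reserve 'passage' for the document side
    of asymmetric retrieval.
    """
    if kind not in ("query", "passage"):
        raise ValueError("kind must be 'query' or 'passage'")
    return [f"{kind}: {t}" for t in texts]

queries = add_e5_prefix(["how much protein should a female eat"])
docs = add_e5_prefix(["As a general guideline, ..."], kind="passage")
print(queries[0])  # query: how much protein should a female eat
```

Either list can then be passed to `model.encode(...)` exactly as in the sentence_transformers example above.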
{"language": ["en"], "license": "mit", "tags": ["Sentence Transformers", "sentence-similarity", "sentence-transformers"]}
task
[ "SEMANTIC_SIMILARITY" ]
46,156
ritheshSree/disaster-setfit-model
ritheshSree
text-classification
[ "sentence-transformers", "pytorch", "albert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-02-17T14:07:51Z
2023-02-20T13:04:11+00:00
16
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # disaster-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("ritheshSree/disaster-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
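The contrastive step in the two-stage recipe above works by pairing the few labelled examples: same-label pairs become positives and mixed-label pairs become negatives. A minimal sketch of that pair generation (illustrative only; the SetFit library performs this internally, and the helper name here is hypothetical):

```python
from itertools import combinations

def contrastive_pairs(examples):
    """Build (text_a, text_b, similarity) training pairs from labelled texts.

    Pairs that share a class label are positives (1.0); mixed-label pairs
    are negatives (0.0). The Sentence Transformer is then fine-tuned so that
    positive pairs embed close together and negative pairs far apart.
    """
    return [
        (a, b, 1.0 if ya == yb else 0.0)
        for (a, ya), (b, yb) in combinations(examples, 2)
    ]

few_shot = [
    ("massive flooding downtown after the storm", "disaster"),
    ("earthquake reported off the coast", "disaster"),
    ("i loved the spiderman movie!", "not_disaster"),
]
for a, b, sim in contrastive_pairs(few_shot):
    print(sim, a, "<->", b)
```

Even three examples yield three training pairs, which is why the contrastive step makes few-shot fine-tuning data-efficient.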
null
Non_BioNLP
# disaster-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("ritheshSree/disaster-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
46,157
jianboy/demo
jianboy
automatic-speech-recognition
[ "audio", "automatic-speech-recognition", "hf-asr-leaderboard", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "arxiv:2212.04356", "license:apache-2.0", "region:us" ]
2023-02-23T18:25:56Z
2023-02-24T02:18:37+00:00
0
0
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - "no" - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac --- # Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). Compared to the Whisper large model, the large-v2 model is trained for 2.5x more epochs with added regularization for improved performance. **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data.
The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions to a *different* language to the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. 
Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition. ## Transcription ### English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe).
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. 
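The context-token ordering described in the Usage section can be sketched as a tiny string-building helper. This is hypothetical and for illustration only; in practice the token ids come from `WhisperProcessor.get_decoder_prompt_ids`:

```python
def context_tokens(language="en", task="transcribe", timestamps=False):
    """Assemble the Whisper decoder prompt in its documented order:
    start-of-transcript, language token, task token, optional no-timestamps marker."""
    if task not in ("transcribe", "translate"):
        raise ValueError("task must be 'transcribe' or 'translate'")
    tokens = ["<|startoftranscript|>", f"<|{language}|>", f"<|{task}|>"]
    if not timestamps:
        tokens.append("<|notimestamps|>")
    return " ".join(tokens)

print(context_tokens("fr", "translate"))
# -> <|startoftranscript|> <|fr|> <|translate|> <|notimestamps|>
```

Setting `task="translate"` in the prompt is exactly what the forced decoder ids in the example below accomplish.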
### French to English ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' A very interesting work, we will finally be given on this subject.'] ``` ## Evaluation This code snippet shows how to evaluate Whisper Large on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = 
model.generate(input_features.to("cuda"))[0] >>> transcription = processor.decode(predicted_ids) >>> batch["prediction"] = processor.tokenizer._normalize(transcription) >>> return batch >>> result = librispeech_test_clean.map(map_to_pred) >>> wer = load("wer") >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"])) 3.0003583080317572 ``` ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. It can also be extended to predict utterance level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="openai/whisper-large-v2", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy())["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. 
However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet.
65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). 
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
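The WER figure in the evaluation snippet above is, at its core, a word-level edit distance divided by the reference length. A minimal pure-Python sketch of the metric (the `evaluate` library's `wer` additionally handles text normalization and batch aggregation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the processed prefix of ref
    # and the first j words of hyp.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution
        prev = cur
    return prev[-1] / len(ref)

print(word_error_rate("mr quilter is the apostle", "mr quilter was the apostle"))
# -> 0.2  (one substitution out of five reference words)
```

Note that both `evaluate` and the snippet above apply the tokenizer's `_normalize` to references and predictions first, since WER is sensitive to casing and punctuation differences.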
null
Non_BioNLP
# Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). Compared to the Whisper large model, the large-v2 model is trained for 2.5x more epochs with added regularization for improved performance. **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions to a *different* language to the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). 
The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. 
In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition. ## Transcription ### English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe).
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. 
### French to English ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' A very interesting work, we will finally be given on this subject.'] ``` ## Evaluation This code snippet shows how to evaluate Whisper Large on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = 
model.generate(input_features.to("cuda"))[0] >>> transcription = processor.decode(predicted_ids) >>> batch["prediction"] = processor.tokenizer._normalize(transcription) >>> return batch >>> result = librispeech_test_clean.map(map_to_pred) >>> wer = load("wer") >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"])) 3.0003583080317572 ``` ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible with the Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. It can also be extended to predict utterance-level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="openai/whisper-large-v2", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy())["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. 
However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization, but they have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 
65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). 
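The word error rate (WER) figures quoted in these evaluations can be illustrated with a minimal pure-Python implementation: the word-level edit distance divided by the number of reference words. This is a simplified sketch for intuition only; the `evaluate` library's `wer` metric shown earlier in this card is the reference implementation, and it is applied after normalizing both transcripts.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the processed prefix of `ref`
    # and the first j words of `hyp` (single-row dynamic programming).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # deletion
                      d[j - 1] + 1,      # insertion
                      prev + (r != h))   # substitution or match
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

print(wer("mr quilter is the apostle", "mr quilter is the apostle"))   # 0.0
print(wer("mr quilter is the apostle", "mr quilter was the apostle"))  # 0.2
```

A WER of 3.0 on LibriSpeech test-clean, as reported above, thus corresponds to roughly three word-level errors per hundred reference words.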
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse for lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual-use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects. 
### BibTeX entry and citation info ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
{"language": ["en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su"], "license": "apache-2.0", "pipeline_tag": "automatic-speech-recognition", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}]}
task
[ "TRANSLATION" ]
46,158
hkodimuddana/mt5-small-finetuned-amazon-en-es
hkodimuddana
summarization
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-29T04:57:44Z
2024-11-29T21:08:33+00:00
17
0
--- base_model: google/mt5-small library_name: transformers license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2830 - Rouge1: 15.06 - Rouge2: 6.78 - Rougel: 14.55 - Rougelsum: 14.52 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 125 | 3.2940 | 14.93 | 6.52 | 14.68 | 14.68 | | No log | 2.0 | 250 | 3.2848 | 14.36 | 6.32 | 13.93 | 13.9 | | No log | 3.0 | 375 | 3.2891 | 14.01 | 6.4 | 13.68 | 13.61 | | No log | 4.0 | 500 | 3.2786 | 14.26 | 6.35 | 13.87 | 13.83 | | No log | 5.0 | 625 | 3.2825 | 14.55 | 5.54 | 14.17 | 14.14 | | No log | 6.0 | 750 | 3.2934 | 14.16 | 6.38 | 13.81 | 13.78 | | No log | 7.0 | 875 | 3.2830 | 14.83 | 6.67 | 14.43 | 14.39 | | No log | 8.0 | 1000 | 3.2830 | 15.06 | 6.78 | 14.55 | 14.52 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - 
Tokenizers 0.20.3
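The ROUGE-1 score reported above measures unigram overlap between the generated summary and the reference summary. As a rough illustration of what that number means, here is a simplified ROUGE-1 F1 in pure Python. This is a sketch for intuition only, not the `rouge_score` implementation used during evaluation, which also applies tokenization and stemming.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(100 * rouge1_f1("the cat sat on the mat", "the cat sat"), 2))  # 66.67
```

On this scale, the Rouge1 of 15.06 above means roughly 15% unigram-overlap F1 between model summaries and the human references.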
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2830 - Rouge1: 15.06 - Rouge2: 6.78 - Rougel: 14.55 - Rougelsum: 14.52 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 125 | 3.2940 | 14.93 | 6.52 | 14.68 | 14.68 | | No log | 2.0 | 250 | 3.2848 | 14.36 | 6.32 | 13.93 | 13.9 | | No log | 3.0 | 375 | 3.2891 | 14.01 | 6.4 | 13.68 | 13.61 | | No log | 4.0 | 500 | 3.2786 | 14.26 | 6.35 | 13.87 | 13.83 | | No log | 5.0 | 625 | 3.2825 | 14.55 | 5.54 | 14.17 | 14.14 | | No log | 6.0 | 750 | 3.2934 | 14.16 | 6.38 | 13.81 | 13.78 | | No log | 7.0 | 875 | 3.2830 | 14.83 | 6.67 | 14.43 | 14.39 | | No log | 8.0 | 1000 | 3.2830 | 15.06 | 6.78 | 14.55 | 14.52 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
task
[ "SUMMARIZATION" ]
46,160
Falconsai/text_summarization
Falconsai
summarization
[ "transformers", "pytorch", "coreml", "onnx", "safetensors", "t5", "text2text-generation", "summarization", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-21T00:53:53Z
2024-02-17T07:55:14+00:00
96,719
224
--- language: - en license: apache-2.0 pipeline_tag: summarization widget: - text: 'Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name Hugging Face was chosen to reflect the company''s mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the Transformers library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. ' example_title: Summarization Example 1 --- # Model Card: Fine-Tuned T5 Small for Text Summarization ## Model Description The **Fine-Tuned T5 Small** is a variant of the T5 transformer model, designed for the task of text summarization. It is adapted and fine-tuned to generate concise and coherent summaries of input text. The model, named "t5-small," is pre-trained on a diverse corpus of text data, enabling it to capture essential information and generate meaningful summaries. 
Fine-tuning is conducted with careful attention to hyperparameter settings, including batch size and learning rate, to ensure optimal performance for text summarization. During the fine-tuning process, a batch size of 8 is chosen for efficient computation and learning. Additionally, a learning rate of 2e-5 is selected to balance convergence speed and model optimization. This approach guarantees not only rapid learning but also continuous refinement during training. The fine-tuning dataset consists of a variety of documents and their corresponding human-generated summaries. This diverse dataset allows the model to learn the art of creating summaries that capture the most important information while maintaining coherence and fluency. The goal of this meticulous training process is to equip the model with the ability to generate high-quality text summaries, making it valuable for a wide range of applications involving document summarization and content condensation. ## Intended Uses & Limitations ### Intended Uses - **Text Summarization**: The primary intended use of this model is to generate concise and coherent text summaries. It is well-suited for applications that involve summarizing lengthy documents, news articles, and textual content. ### How to Use To use this model for text summarization, you can follow these steps: ```python from transformers import pipeline summarizer = pipeline("summarization", model="Falconsai/text_summarization") ARTICLE = """ Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. 
The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name "Hugging Face" was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the "Transformers" library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. Key Contributions: 1. **Transformers Library:** The Transformers library provides a unified interface for more than 50 pre-trained models, simplifying the development of NLP applications. It allows users to fine-tune these models for specific tasks, making it accessible to a wider audience. 2. **Model Hub:** Hugging Face's Model Hub is a treasure trove of pre-trained models, making it simple for anyone to access, experiment with, and fine-tune models. Researchers and developers around the world can collaborate and share their models through this platform. 3. **Hugging Face Transformers Community:** Hugging Face has fostered a vibrant online community where developers, researchers, and AI enthusiasts can share their knowledge, code, and insights. This collaborative spirit has accelerated the growth of NLP. Democratizing AI Hugging Face's most significant impact has been the democratization of AI and NLP. Their commitment to open-source development has made powerful AI models accessible to individuals, startups, and established organizations. 
This approach contrasts with the traditional proprietary AI model market, which often limits access to those with substantial resources. By providing open-source models and tools, Hugging Face has empowered a diverse array of users to innovate and create their own NLP applications. This shift has fostered inclusivity, allowing a broader range of voices to contribute to AI research and development. Industry Adoption The success and impact of Hugging Face are evident in its widespread adoption. Numerous companies and institutions, from startups to tech giants, leverage Hugging Face's technology for their AI applications. This includes industries as varied as healthcare, finance, and entertainment, showcasing the versatility of NLP and Hugging Face's contributions. Future Directions Hugging Face's journey is far from over. As of my last knowledge update in September 2021, the company was actively pursuing research into ethical AI, bias reduction in models, and more. Given their track record of innovation and commitment to the AI community, it is likely that they will continue to lead in ethical AI development and promote responsible use of NLP technologies. Conclusion Hugging Face's story is one of transformation, collaboration, and empowerment. Their open-source contributions have reshaped the NLP landscape and democratized access to AI. As they continue to push the boundaries of AI research, we can expect Hugging Face to remain at the forefront of innovation, contributing to a more inclusive and ethical AI future. Their journey reminds us that the power of open-source collaboration can lead to groundbreaking advancements in technology and bring AI within the reach of many. """ print(summarizer(ARTICLE, max_length=1000, min_length=30, do_sample=False)) >>> [{'summary_text': 'Hugging Face has emerged as a prominent and innovative force in NLP . From its inception to its role in democratizing AI, the company has left an indelible mark on the industry . 
The name "Hugging Face" was chosen to reflect the company\'s mission of making AI models more accessible and friendly to humans .'}] ``` Limitations Specialized Task Fine-Tuning: While the model excels at text summarization, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results. Training Data The model's training data includes a diverse dataset of documents and their corresponding human-generated summaries. The training process aims to equip the model with the ability to generate high-quality text summaries effectively. Training Stats - Evaluation Loss: 0.012345678901234567 - Evaluation Rouge Score: 0.95 (F1) - Evaluation Runtime: 2.3456 - Evaluation Samples per Second: 1234.56 - Evaluation Steps per Second: 45.678 Responsible Usage It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content. References Hugging Face Model Hub T5 Paper Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
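The hyperparameters stated in the Model Description (batch size 8, learning rate 2e-5) could be expressed with the Transformers Trainer API roughly as follows. This is a hypothetical sketch: the output directory, evaluation batch size, and use of generation during evaluation are assumptions, not values taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical training configuration mirroring the hyperparameters described
# above (batch size 8, learning rate 2e-5). Output directory and eval batch
# size are assumptions; the card does not state them.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-text-summarization",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=2e-5,
    predict_with_generate=True,
)
```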
null
Non_BioNLP
# Model Card: Fine-Tuned T5 Small for Text Summarization ## Model Description The **Fine-Tuned T5 Small** is a variant of the T5 transformer model, designed for the task of text summarization. It is adapted and fine-tuned to generate concise and coherent summaries of input text. The model, named "t5-small," is pre-trained on a diverse corpus of text data, enabling it to capture essential information and generate meaningful summaries. Fine-tuning is conducted with careful attention to hyperparameter settings, including batch size and learning rate, to ensure optimal performance for text summarization. During the fine-tuning process, a batch size of 8 is chosen for efficient computation and learning. Additionally, a learning rate of 2e-5 is selected to balance convergence speed and model optimization. This approach guarantees not only rapid learning but also continuous refinement during training. The fine-tuning dataset consists of a variety of documents and their corresponding human-generated summaries. This diverse dataset allows the model to learn the art of creating summaries that capture the most important information while maintaining coherence and fluency. The goal of this meticulous training process is to equip the model with the ability to generate high-quality text summaries, making it valuable for a wide range of applications involving document summarization and content condensation. ## Intended Uses & Limitations ### Intended Uses - **Text Summarization**: The primary intended use of this model is to generate concise and coherent text summaries. It is well-suited for applications that involve summarizing lengthy documents, news articles, and textual content. 
### How to Use To use this model for text summarization, you can follow these steps: ```python from transformers import pipeline summarizer = pipeline("summarization", model="Falconsai/text_summarization") ARTICLE = """ Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name "Hugging Face" was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the "Transformers" library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. Key Contributions: 1. **Transformers Library:** The Transformers library provides a unified interface for more than 50 pre-trained models, simplifying the development of NLP applications. It allows users to fine-tune these models for specific tasks, making it accessible to a wider audience. 2. 
**Model Hub:** Hugging Face's Model Hub is a treasure trove of pre-trained models, making it simple for anyone to access, experiment with, and fine-tune models. Researchers and developers around the world can collaborate and share their models through this platform. 3. **Hugging Face Transformers Community:** Hugging Face has fostered a vibrant online community where developers, researchers, and AI enthusiasts can share their knowledge, code, and insights. This collaborative spirit has accelerated the growth of NLP. Democratizing AI Hugging Face's most significant impact has been the democratization of AI and NLP. Their commitment to open-source development has made powerful AI models accessible to individuals, startups, and established organizations. This approach contrasts with the traditional proprietary AI model market, which often limits access to those with substantial resources. By providing open-source models and tools, Hugging Face has empowered a diverse array of users to innovate and create their own NLP applications. This shift has fostered inclusivity, allowing a broader range of voices to contribute to AI research and development. Industry Adoption The success and impact of Hugging Face are evident in its widespread adoption. Numerous companies and institutions, from startups to tech giants, leverage Hugging Face's technology for their AI applications. This includes industries as varied as healthcare, finance, and entertainment, showcasing the versatility of NLP and Hugging Face's contributions. Future Directions Hugging Face's journey is far from over. As of my last knowledge update in September 2021, the company was actively pursuing research into ethical AI, bias reduction in models, and more. Given their track record of innovation and commitment to the AI community, it is likely that they will continue to lead in ethical AI development and promote responsible use of NLP technologies. 
Conclusion Hugging Face's story is one of transformation, collaboration, and empowerment. Their open-source contributions have reshaped the NLP landscape and democratized access to AI. As they continue to push the boundaries of AI research, we can expect Hugging Face to remain at the forefront of innovation, contributing to a more inclusive and ethical AI future. Their journey reminds us that the power of open-source collaboration can lead to groundbreaking advancements in technology and bring AI within the reach of many. """ print(summarizer(ARTICLE, max_length=1000, min_length=30, do_sample=False)) >>> [{'summary_text': 'Hugging Face has emerged as a prominent and innovative force in NLP . From its inception to its role in democratizing AI, the company has left an indelible mark on the industry . The name "Hugging Face" was chosen to reflect the company\'s mission of making AI models more accessible and friendly to humans .'}] ``` Limitations Specialized Task Fine-Tuning: While the model excels at text summarization, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results. Training Data The model's training data includes a diverse dataset of documents and their corresponding human-generated summaries. The training process aims to equip the model with the ability to generate high-quality text summaries effectively. Training Stats - Evaluation Loss: 0.012345678901234567 - Evaluation Rouge Score: 0.95 (F1) - Evaluation Runtime: 2.3456 - Evaluation Samples per Second: 1234.56 - Evaluation Steps per Second: 45.678 Responsible Usage It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content. 
References Hugging Face Model Hub T5 Paper Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "summarization", "widget": [{"text": "Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name Hugging Face was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the Transformers library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. ", "example_title": "Summarization Example 1"}]}
task
[ "TRANSLATION", "SUMMARIZATION" ]
46,161
nihiluis/argureviews-specificity-full-mpnet
nihiluis
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-10-20T09:14:26Z
2023-10-20T09:14:44+00:00
6
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # nihiluis/argureviews-specificity-full-mpnet This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nihiluis/argureviews-specificity-full-mpnet") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# nihiluis/argureviews-specificity-full-mpnet This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nihiluis/argureviews-specificity-full-mpnet") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
46,162
farleyknight/patent-summarization-allen-led-large-2022-09-20
farleyknight
text2text-generation
[ "transformers", "pytorch", "led", "text2text-generation", "generated_from_trainer", "dataset:farleyknight/big_patent_5_percent", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-09-20T16:22:35Z
2022-09-25T23:46:58+00:00
24
1
--- datasets: - farleyknight/big_patent_5_percent license: apache-2.0 metrics: - rouge tags: - generated_from_trainer model-index: - name: patent-summarization-allen-led-large-2022-09-20 results: - task: type: summarization name: Summarization dataset: name: farleyknight/big_patent_5_percent type: farleyknight/big_patent_5_percent config: all split: train args: all metrics: - type: rouge value: 0.0 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # patent-summarization-allen-led-large-2022-09-20 This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on the farleyknight/big_patent_5_percent dataset. It achieves the following results on the evaluation set: - Loss: 2.8233 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 128.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.4766 | 0.08 | 5000 | 3.4240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2549 | 0.17 | 10000 | 3.2908 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2295 | 0.25 | 15000 | 3.1862 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.1455 | 0.33 | 20000 | 3.1291 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0526 | 0.41 | 25000 | 3.0684 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0024 | 0.5 | 
30000 | 3.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9671 | 0.58 | 35000 | 2.9696 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9862 | 0.66 | 40000 | 2.9431 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9168 | 0.75 | 45000 | 2.8989 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9063 | 0.83 | 50000 | 2.8559 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.8417 | 0.91 | 55000 | 2.8398 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.7853 | 0.99 | 60000 | 2.8240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # patent-summarization-allen-led-large-2022-09-20 This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on the farleyknight/big_patent_5_percent dataset. It achieves the following results on the evaluation set: - Loss: 2.8233 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 128.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.4766 | 0.08 | 5000 | 3.4240 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2549 | 0.17 | 10000 | 3.2908 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.2295 | 0.25 | 15000 | 3.1862 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.1455 | 0.33 | 20000 | 3.1291 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0526 | 0.41 | 25000 | 3.0684 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 3.0024 | 0.5 | 30000 | 3.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9671 | 0.58 | 35000 | 2.9696 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9862 | 0.66 | 40000 | 2.9431 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9168 | 0.75 | 45000 | 2.8989 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.9063 | 0.83 | 50000 | 2.8559 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.8417 | 0.91 | 55000 | 2.8398 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 | | 2.7853 | 0.99 | 60000 | 2.8240 | 
0.0 | 0.0 | 0.0 | 0.0 | 512.0 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
{"datasets": ["farleyknight/big_patent_5_percent"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "patent-summarization-allen-led-large-2022-09-20", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "farleyknight/big_patent_5_percent", "type": "farleyknight/big_patent_5_percent", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "rouge", "value": 0.0, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
46,163
joshuapb/fine-tuned-matryoshka-1000
joshuapb
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1000", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-08-23T23:29:03Z
2024-08-23T23:29:25+00:00
5
0
--- base_model: BAAI/bge-base-en-v1.5 datasets: [] language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1000 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'Revision stage: Edit the output to correct content unsupported by evidence while preserving the original content as much as possible. Initialize the revised text $y=x$. (1) Per $(q_i, e_{ij})$, an agreement model (via few-shot prompting + CoT, $(y, q, e) \to {0,1}$) checks whether the evidence $e_i$ disagrees with the current revised text $y$. (2) Only if a disagreement is detected, the edit model (via few-shot prompting + CoT, $(y, q, e) \to \text{ new }y$) outputs a new version of $y$ that aims to agree with evidence $e_{ij}$ while otherwise minimally altering $y$. (3) Finally only a limited number $M=5$ of evidence goes into the attribution report $A$. Fig. 12. Illustration of RARR (Retrofit Attribution using Research and Revision). (Image source: Gao et al. 2022) When evaluating the revised text $y$, both attribution and preservation metrics matter.' sentences: - What is the impact of claim extraction on the efficiency of query generation within various tool querying methodologies? - What are the implications of integrating both attribution and preservation metrics in the assessment of a revised text for an attribution report? - What impact does the calibration of large language models, as discussed in the research by Kadavath et al.
(2022), have on the consistency and accuracy of their responses, particularly in the context of multiple choice questions? - source_sentence: 'Fig. 1. Knowledge categorization of close-book QA examples based on how likely the model outputs correct answers. (Image source: Gekhman et al. 2024) Some interesting observations of the experiments, where dev set accuracy is considered a proxy for hallucinations. Unknown examples are fitted substantially slower than Known. The best dev performance is obtained when the LLM fits the majority of the Known training examples but only a few of the Unknown ones. The model starts to hallucinate when it learns most of the Unknown examples. Among Known examples, MaybeKnown cases result in better overall performance, more essential than HighlyKnown ones.' sentences: - What are the implications of a language model's performance when it is primarily trained on familiar examples compared to a diverse set of unfamiliar examples, and how does this relate to the phenomenon of hallucinations in language models? - How can the insights gained from the evaluation framework inform the future enhancements of AI models, particularly in terms of improving factual accuracy and entity recognition? - What role does the MPNet model play in evaluating the faithfulness of reasoning paths, particularly in relation to scores of entailment and contradiction? - source_sentence: 'Non-context LLM: Prompt LLM directly with <atomic-fact> True or False? without additional context. Retrieval→LLM: Prompt with $k$ related passages retrieved from the knowledge source as context. Nonparametric probability (NP)): Compute the average likelihood of tokens in the atomic fact by a masked LM and use that to make a prediction. Retrieval→LLM + NP: Ensemble of two methods. Some interesting observations on model hallucination behavior: Error rates are higher for rarer entities in the task of biography generation. 
Error rates are higher for facts mentioned later in the generation. Using retrieval to ground the model generation significantly helps reduce hallucination.' sentences: - What methods does the model employ to generate impactful, non-standard verification questions that enhance the fact-checking process? - What impact does the timing of fact presentation in AI outputs have on the likelihood of generating inaccuracies? - What are the benefits of using the 'Factor+revise' strategy in enhancing the reliability of verification processes in few-shot learning, particularly when it comes to identifying inconsistencies? - source_sentence: 'Research stage: Find related documents as evidence. (1) First use a query generation model (via few-shot prompting, $x \to {q_1, \dots, q_N}$) to construct a set of search queries ${q_1, \dots, q_N}$ to verify all aspects of each sentence. (2) Run Google search, $K=5$ results per query $q_i$. (3) Utilize a pretrained query-document relevance model to assign relevance scores and only retain one most relevant $J=1$ document $e_{i1}, \dots, e_{iJ}$ per query $q_i$. Revision stage: Edit the output to correct content unsupported by evidence while preserving the original content as much as possible. Initialize the revised text $y=x$.' sentences: - In what ways does the process of generating queries facilitate the verification of content accuracy, particularly through the lens of evidence-based editing methodologies? - What role do attribution and preservation metrics play in assessing the quality of revised texts, and how might these factors influence the success of the Evidence Disagreement Detection process? - What are the practical ways to utilize the F1 @ K metric for assessing how well FacTool identifies factual inaccuracies in various fields? 
- source_sentence: '(1) Joint: join with step 2, where the few-shot examples are structured as (response, verification questions, verification answers); The drawback is that the original response is in the context, so the model may repeat similar hallucination. (2) 2-step: separate the verification planning and execution steps, such as the original response doesn’t impact (3) Factored: each verification question is answered separately. Say, if a long-form base generation results in multiple verification questions, we would answer each question one-by-one. (4) Factor+revise: adding a “cross-checking” step after factored verification execution, conditioned on both the baseline response and the verification question and answer. It detects inconsistency. Final output: Generate the final, refined output. The output gets revised at this step if any inconsistency is discovered.' sentences: - What are the key challenges associated with using a pre-training dataset for world knowledge, particularly in maintaining the factual accuracy of the outputs generated by the model? - What obstacles arise when depending on the pre-training dataset in the context of extrinsic hallucination affecting model outputs? - In what ways does the 'Factor+revise' method enhance the reliability of responses when compared to the 'Joint' and '2-step' methods used for cross-checking? 
model-index: - name: BGE base Financial Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.8802083333333334 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.984375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9947916666666666 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9947916666666666 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8802083333333334 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.328125 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19895833333333335 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09947916666666667 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.8802083333333334 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.984375 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9947916666666666 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9947916666666666 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9495062223081544 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9337673611111109 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.934240845959596 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.8854166666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.984375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9947916666666666 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8854166666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.328125 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19895833333333335 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 
0.8854166666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.984375 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9947916666666666 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9536782535355709 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.937818287037037 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.937818287037037 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.9010416666666666 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.984375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.9010416666666666 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.328125 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19999999999999998 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.9010416666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.984375 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9587563670488631 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9446180555555554 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9446180555555556 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.90625 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.984375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.90625 name: Cosine 
Precision@1 - type: cosine_precision@3 value: 0.328125 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19999999999999998 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.90625 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.984375 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9609068566179642 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9474826388888888 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.947482638888889 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.890625 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.984375 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.890625 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.328125 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19999999999999998 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.890625 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.984375 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9551401340175182 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9396701388888888 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.939670138888889 name: Cosine Map@100 --- # BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). 
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-1000") # Run inference sentences = [ '(1) Joint: join with step 2, where the few-shot examples are structured as (response, verification questions, verification answers); The drawback is that the original response is in the context, so the model may repeat similar hallucination.\n(2) 2-step: separate the verification planning and execution steps, such as the original response doesn’t impact\n(3) Factored: each verification question is answered separately. Say, if a long-form base generation results in multiple verification questions, we would answer each question one-by-one.\n(4) Factor+revise: adding a “cross-checking” step after factored verification execution, conditioned on both the baseline response and the verification question and answer. It detects inconsistency.\n\n\nFinal output: Generate the final, refined output. The output gets revised at this step if any inconsistency is discovered.', "In what ways does the 'Factor+revise' method enhance the reliability of responses when compared to the 'Joint' and '2-step' methods used for cross-checking?", 'What obstacles arise when depending on the pre-training dataset in the context of extrinsic hallucination affecting model outputs?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.8802 | | cosine_accuracy@3 | 0.9844 | | cosine_accuracy@5 | 0.9948 | | cosine_accuracy@10 | 0.9948 | | cosine_precision@1 | 0.8802 | | cosine_precision@3 | 0.3281 | | cosine_precision@5 | 0.199 | | cosine_precision@10 | 0.0995 | | cosine_recall@1 | 0.8802 | | cosine_recall@3 | 0.9844 | | cosine_recall@5 | 0.9948 | | cosine_recall@10 | 0.9948 | | cosine_ndcg@10 | 0.9495 | | cosine_mrr@10 | 0.9338 | | **cosine_map@100** | **0.9342** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.8854 | | cosine_accuracy@3 | 0.9844 | | cosine_accuracy@5 | 0.9948 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.8854 | | cosine_precision@3 | 0.3281 | | cosine_precision@5 | 0.199 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.8854 | | cosine_recall@3 | 0.9844 | | cosine_recall@5 | 0.9948 | | cosine_recall@10 | 1.0 | | cosine_ndcg@10 | 0.9537 | | cosine_mrr@10 | 0.9378 | | **cosine_map@100** | **0.9378** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) 
# BGE base Financial Matryoshka

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-1000")
# Run inference
sentences = [
    '(1) Joint: join with step 2, where the few-shot examples are structured as (response, verification questions, verification answers); The drawback is that the original response is in the context, so the model may repeat similar hallucination.\n(2) 2-step: separate the verification planning and execution steps, such as the original response doesn’t impact\n(3) Factored: each verification question is answered separately. Say, if a long-form base generation results in multiple verification questions, we would answer each question one-by-one.\n(4) Factor+revise: adding a “cross-checking” step after factored verification execution, conditioned on both the baseline response and the verification question and answer. It detects inconsistency.\n\n\nFinal output: Generate the final, refined output. The output gets revised at this step if any inconsistency is discovered.',
    "In what ways does the 'Factor+revise' method enhance the reliability of responses when compared to the 'Joint' and '2-step' methods used for cross-checking?",
    'What obstacles arise when depending on the pre-training dataset in the context of extrinsic hallucination affecting model outputs?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!-- ### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>

</details>
-->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8802     |
| cosine_accuracy@3   | 0.9844     |
| cosine_accuracy@5   | 0.9948     |
| cosine_accuracy@10  | 0.9948     |
| cosine_precision@1  | 0.8802     |
| cosine_precision@3  | 0.3281     |
| cosine_precision@5  | 0.199      |
| cosine_precision@10 | 0.0995     |
| cosine_recall@1     | 0.8802     |
| cosine_recall@3     | 0.9844     |
| cosine_recall@5     | 0.9948     |
| cosine_recall@10    | 0.9948     |
| cosine_ndcg@10      | 0.9495     |
| cosine_mrr@10       | 0.9338     |
| **cosine_map@100**  | **0.9342** |

#### Information Retrieval

* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8854     |
| cosine_accuracy@3   | 0.9844     |
| cosine_accuracy@5   | 0.9948     |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8854     |
| cosine_precision@3  | 0.3281     |
| cosine_precision@5  | 0.199      |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8854     |
| cosine_recall@3     | 0.9844     |
| cosine_recall@5     | 0.9948     |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9537     |
| cosine_mrr@10       | 0.9378     |
| **cosine_map@100**  | **0.9378** |

#### Information Retrieval

* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.901      |
| cosine_accuracy@3   | 0.9844     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.901      |
| cosine_precision@3  | 0.3281     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.901      |
| cosine_recall@3     | 0.9844     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9588     |
| cosine_mrr@10       | 0.9446     |
| **cosine_map@100**  | **0.9446** |

#### Information Retrieval

* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.9062     |
| cosine_accuracy@3   | 0.9844     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.9062     |
| cosine_precision@3  | 0.3281     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.9062     |
| cosine_recall@3     | 0.9844     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9609     |
| cosine_mrr@10       | 0.9475     |
| **cosine_map@100**  | **0.9475** |

#### Information Retrieval

* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8906     |
| cosine_accuracy@3   | 0.9844     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8906     |
| cosine_precision@3  | 0.3281     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8906     |
| cosine_recall@3     | 0.9844     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| cosine_ndcg@10      | 0.9551     |
| cosine_mrr@10       | 0.9397     |
| **cosine_map@100**  | **0.9397** |
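The five tables above report the same retrieval metrics at progressively truncated embedding sizes, which is the point of Matryoshka training: you can keep only the first `dim` coordinates of each 768-dimensional embedding, re-normalize, and still rank documents by cosine similarity. The sketch below uses toy data and illustrative helper names (not this model or its actual evaluation set) to show both the truncation step and how the accuracy@k and MRR@k figures are defined:

```python
import numpy as np

def truncate_normalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka dimensions and re-normalize rows to unit length."""
    t = emb[:, :dim]
    return t / np.linalg.norm(t, axis=1, keepdims=True)

def accuracy_at_k(scores: np.ndarray, relevant: list, k: int) -> float:
    """Fraction of queries whose relevant document ranks in the top-k by score."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean([relevant[i] in topk[i] for i in range(len(relevant))]))

def mrr_at_k(scores: np.ndarray, relevant: list, k: int) -> float:
    """Mean reciprocal rank of the relevant document, counting only hits inside the top-k."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    recip = []
    for i, rel in enumerate(relevant):
        hit = np.where(topk[i] == rel)[0]
        recip.append(1.0 / (hit[0] + 1) if hit.size else 0.0)
    return float(np.mean(recip))

# Toy corpus: document i is a slightly noised copy of query i, so it is the relevant hit.
rng = np.random.default_rng(0)
queries = rng.normal(size=(8, 768))
docs = queries + 0.1 * rng.normal(size=(8, 768))
relevant = list(range(8))

for dim in (768, 256, 64):
    q, d = truncate_normalize(queries, dim), truncate_normalize(docs, dim)
    scores = q @ d.T  # cosine similarity, since rows are unit-norm
    print(dim, accuracy_at_k(scores, relevant, 1), round(mrr_at_k(scores, relevant, 10), 4))
```

In practice the truncation can also be requested directly at load time, e.g. `SentenceTransformer("joshuapb/fine-tuned-matryoshka-1000", truncate_dim=256)` in sentence-transformers ≥ 2.7.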
<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

<details><summary>Click to expand</summary>

| Epoch   | Step    | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.04    | 5       | 4.9678        | -                      | -                      | -                      | -                     | -                      |
| 0.08    | 10      | 4.6482        | -                      | -                      | -                      | -                     | -                      |
| 0.12    | 15      | 5.0735        | -                      | -                      | -                      | -                     | -                      |
| 0.16    | 20      | 4.0336        | -                      | -                      | -                      | -                     | -                      |
| 0.2     | 25      | 3.7572        | -                      | -                      | -                      | -                     | -                      |
| 0.24    | 30      | 4.3054        | -                      | -                      | -                      | -                     | -                      |
| 0.28    | 35      | 2.6705        | -                      | -                      | -                      | -                     | -                      |
| 0.32    | 40      | 3.1929        | -                      | -                      | -                      | -                     | -                      |
| 0.36    | 45      | 3.1139        | -                      | -                      | -                      | -                     | -                      |
| 0.4     | 50      | 2.5219        | -                      | -                      | -                      | -                     | -                      |
| 0.44    | 55      | 3.1847        | -                      | -                      | -                      | -                     | -                      |
| 0.48    | 60      | 2.2306        | -                      | -                      | -                      | -                     | -                      |
| 0.52    | 65      | 2.251         | -                      | -                      | -                      | -                     | -                      |
| 0.56    | 70      | 2.2432        | -                      | -                      | -                      | -                     | -                      |
| 0.6     | 75      | 2.7462        | -                      | -                      | -                      | -                     | -                      |
| 0.64    | 80      | 2.9992        | -                      | -                      | -                      | -                     | -                      |
| 0.68    | 85      | 2.338         | -                      | -                      | -                      | -                     | -                      |
| 0.72    | 90      | 2.0169        | -                      | -                      | -                      | -                     | -                      |
| 0.76    | 95      | 1.257         | -                      | -                      | -                      | -                     | -                      |
| 0.8     | 100     | 1.5015        | -                      | -                      | -                      | -                     | -                      |
| 0.84    | 105     | 1.9198        | -                      | -                      | -                      | -                     | -                      |
| 0.88    | 110     | 2.2154        | -                      | -                      | -                      | -                     | -                      |
| 0.92    | 115     | 2.4026        | -                      | -                      | -                      | -                     | -                      |
| 0.96    | 120     | 1.911         | -                      | -                      | -                      | -                     | -                      |
| 1.0     | 125     | 2.079         | 0.9151                 | 0.9098                 | 0.9220                 | 0.8788                | 0.9251                 |
| 1.04    | 130     | 1.4704        | -                      | -                      | -                      | -                     | -                      |
| 1.08    | 135     | 0.7323        | -                      | -                      | -                      | -                     | -                      |
| 1.12    | 140     | 0.6308        | -                      | -                      | -                      | -                     | -                      |
| 1.16    | 145     | 0.4655        | -                      | -                      | -                      | -                     | -                      |
| 1.2     | 150     | 1.0186        | -                      | -                      | -                      | -                     | -                      |
| 1.24    | 155     | 1.1408        | -                      | -                      | -                      | -                     | -                      |
| 1.28    | 160     | 1.965         | -                      | -                      | -                      | -                     | -                      |
| 1.32    | 165     | 1.5987        | -                      | -                      | -                      | -                     | -                      |
| 1.3600  | 170     | 3.288         | -                      | -                      | -                      | -                     | -                      |
| 1.4     | 175     | 1.632         | -                      | -                      | -                      | -                     | -                      |
| 1.44    | 180     | 1.0376        | -                      | -                      | -                      | -                     | -                      |
| 1.48    | 185     | 0.9466        | -                      | -                      | -                      | -                     | -                      |
| 1.52    | 190     | 1.0106        | -                      | -                      | -                      | -                     | -                      |
| 1.56    | 195     | 1.4875        | -                      | -                      | -                      | -                     | -                      |
| 1.6     | 200     | 1.314         | -                      | -                      | -                      | -                     | -                      |
| 1.6400  | 205     | 1.3022        | -                      | -                      | -                      | -                     | -                      |
| 1.6800  | 210     | 1.5312        | -                      | -                      | -                      | -                     | -                      |
| 1.72    | 215     | 1.7982        | -                      | -                      | -                      | -                     | -                      |
| 1.76    | 220     | 1.7962        | -                      | -                      | -                      | -                     | -                      |
| 1.8     | 225     | 1.5788        | -                      | -                      | -                      | -                     | -                      |
| 1.8400  | 230     | 1.152         | -                      | -                      | -                      | -                     | -                      |
| 1.88    | 235     | 2.0556        | -                      | -                      | -                      | -                     | -                      |
| 1.92    | 240     | 1.3165        | -                      | -                      | -                      | -                     | -                      |
| 1.96    | 245     | 0.6941        | -                      | -                      | -                      | -                     | -                      |
| **2.0** | **250** | **1.2239**    | **0.9404**             | **0.944**              | **0.9427**             | **0.9327**            | **0.9424**             |
| 2.04    | 255     | 1.0423        | -                      | -                      | -                      | -                     | -                      |
| 2.08    | 260     | 0.8893        | -                      | -                      | -                      | -                     | -                      |
| 2.12    | 265     | 1.2859        | -                      | -                      | -                      | -                     | -                      |
| 2.16    | 270     | 1.4505        | -                      | -                      | -                      | -                     | -                      |
| 2.2     | 275     | 0.2728        | -                      | -                      | -                      | -                     | -                      |
| 2.24    | 280     | 0.6588        | -                      | -                      | -                      | -                     | -                      |
| 2.2800  | 285     | 0.8014        | -                      | -                      | -                      | -                     | -                      |
| 2.32    | 290     | 0.3053        | -                      | -                      | -                      | -                     | -                      |
| 2.36    | 295     | 1.4289        | -                      | -                      | -                      | -                     | -                      |
| 2.4     | 300     | 1.1458        | -                      | -                      | -                      | -                     | -                      |
| 2.44    | 305     | 0.6987        | -                      | -                      | -                      | -                     | -                      |
| 2.48    | 310     | 1.3389        | -                      | -                      | -                      | -                     | -                      |
| 2.52    | 315     | 1.2991        | -                      | -                      | -                      | -                     | -                      |
| 2.56    | 320     | 1.8088        | -                      | -                      | -                      | -                     | -                      |
| 2.6     | 325     | 0.4242        | -                      | -                      | -                      | -                     | -                      |
| 2.64    | 330     | 1.5873        | -                      | -                      | -                      | -                     | -                      |
| 2.68    | 335     | 1.3873        | -                      | -                      | -                      | -                     | -                      |
| 2.7200  | 340     | 1.4297        | -                      | -                      | -                      | -                     | -                      |
| 2.76    | 345     | 2.0637        | -                      | -                      | -                      | -                     | -                      |
| 2.8     | 350     | 1.1252        | -                      | -                      | -                      | -                     | -                      |
| 2.84    | 355     | 0.367         | -                      | -                      | -                      | -                     | -                      |
| 2.88    | 360     | 1.7606        | -                      | -                      | -                      | -                     | -                      |
| 2.92    | 365     | 1.196         | -                      | -                      | -                      | -                     | -                      |
| 2.96    | 370     | 1.8827        | -                      | -                      | -                      | -                     | -                      |
| 3.0     | 375     | 0.6822        | 0.9494                 | 0.9479                 | 0.9336                 | 0.9414                | 0.9405                 |
| 3.04    | 380     | 0.4954        | -                      | -                      | -                      | -                     | -                      |
| 3.08    | 385     | 0.1717        | -                      | -                      | -                      | -                     | -                      |
| 3.12    | 390     | 0.7435        | -                      | -                      | -                      | -                     | -                      |
| 3.16    | 395     | 1.4323        | -                      | -                      | -                      | -                     | -                      |
| 3.2     | 400     | 1.1207        | -                      | -                      | -                      | -                     | -                      |
| 3.24    | 405     | 1.9009        | -                      | -                      | -                      | -                     | -                      |
| 3.2800  | 410     | 1.6706        | -                      | -                      | -                      | -                     | -                      |
| 3.32    | 415     | 0.8378        | -                      | -                      | -                      | -                     | -                      |
| 3.36    | 420     | 1.0911        | -                      | -                      | -                      | -                     | -                      |
| 3.4     | 425     | 0.6565        | -                      | -                      | -                      | -                     | -                      |
| 3.44    | 430     | 1.0302        | -                      | -                      | -                      | -                     | -                      |
| 3.48    | 435     | 0.6425        | -                      | -                      | -                      | -                     | -                      |
| 3.52    | 440     | 1.1472        | -                      | -                      | -                      | -                     | -                      |
| 3.56    | 445     | 1.996         | -                      | -                      | -                      | -                     | -                      |
| 3.6     | 450     | 1.5308        | -                      | -                      | -                      | -                     | -                      |
| 3.64    | 455     | 0.7427        | -                      | -                      | -                      | -                     | -                      |
| 3.68    | 460     | 1.4596        | -                      | -                      | -                      | -                     | -                      |
| 3.7200  | 465     | 1.1984        | -                      | -                      | -                      | -                     | -                      |
| 3.76    | 470     | 0.7601        | -                      | -                      | -                      | -                     | -                      |
| 3.8     | 475     | 1.3544        | -                      | -                      | -                      | -                     | -                      |
| 3.84    | 480     | 1.6655        | -                      | -                      | -                      | -                     | -                      |
| 3.88    | 485     | 1.2596        | -                      | -                      | -                      | -                     | -                      |
| 3.92    | 490     | 0.9451        | -                      | -                      | -                      | -                     | -                      |
| 3.96    | 495     | 0.7079        | -                      | -                      | -                      | -                     | -                      |
| 4.0     | 500     | 1.3471        | 0.9453                 | 0.9446                 | 0.9404                 | 0.9371                | 0.9335                 |
| 4.04    | 505     | 0.4583        | -                      | -                      | -                      | -                     | -                      |
| 4.08    | 510     | 1.288         | -                      | -                      | -                      | -                     | -                      |
| 4.12    | 515     | 1.6946        | -                      | -                      | -                      | -                     | -                      |
| 4.16    | 520     | 1.1239        | -                      | -                      | -                      | -                     | -                      |
| 4.2     | 525     | 1.1026        | -                      | -                      | -                      | -                     | -                      |
| 4.24    | 530     | 1.4121        | -                      | -                      | -                      | -                     | -                      |
| 4.28    | 535     | 1.7113        | -                      | -                      | -                      | -                     | -                      |
| 4.32    | 540     | 0.8389        | -                      | -                      | -                      | -                     | -                      |
| 4.36    | 545     | 0.3117        | -                      | -                      | -                      | -                     | -                      |
| 4.4     | 550     | 0.3144        | -                      | -                      | -                      | -                     | -                      |
| 4.44    | 555     | 1.4694        | -                      | -                      | -                      | -                     | -                      |
| 4.48    | 560     | 1.3233        | -                      | -                      | -                      | -                     | -                      |
| 4.52    | 565     | 0.792         | -                      | -                      | -                      | -                     | -                      |
| 4.5600  | 570     | 0.4881        | -                      | -                      | -                      | -                     | -                      |
| 4.6     | 575     | 0.5097        | -                      | -                      | -                      | -                     | -                      |
| 4.64    | 580     | 1.6377        | -                      | -                      | -                      | -                     | -                      |
| 4.68    | 585     | 0.7273        | -                      | -                      | -                      | -                     | -                      |
| 4.72    | 590     | 1.5464        | -                      | -                      | -                      | -                     | -                      |
| 4.76    | 595     | 1.4392        | -                      | -                      | -                      | -                     | -                      |
| 4.8     | 600     | 1.4384        | -                      | -                      | -                      | -                     | -                      |
| 4.84    | 605     | 0.6375        | -                      | -                      | -                      | -                     | -                      |
| 4.88    | 610     | 1.0528        | -                      | -                      | -                      | -                     | -                      |
| 4.92    | 615     | 0.0276        | -                      | -                      | -                      | -                     | -                      |
| 4.96    | 620     | 0.9604        | -                      | -                      | -                      | -                     | -                      |
| 5.0     | 625     | 0.7219        | 0.9475                 | 0.9446                 | 0.9378                 | 0.9397                | 0.9342                 |

* The bold row denotes the saved checkpoint.
</details>

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
The output gets revised at this step if any inconsistency is discovered.", "sentences": ["What are the key challenges associated with using a pre-training dataset for world knowledge, particularly in maintaining the factual accuracy of the outputs generated by the model?", "What obstacles arise when depending on the pre-training dataset in the context of extrinsic hallucination affecting model outputs?", "In what ways does the 'Factor+revise' method enhance the reliability of responses when compared to the 'Joint' and '2-step' methods used for cross-checking?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8802083333333334, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.984375, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9947916666666666, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9947916666666666, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8802083333333334, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.328125, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19895833333333335, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09947916666666667, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8802083333333334, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.984375, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9947916666666666, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9947916666666666, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9495062223081544, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9337673611111109, "name": "Cosine Mrr@10"}, {"type": 
"cosine_map@100", "value": 0.934240845959596, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8854166666666666, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.984375, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9947916666666666, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8854166666666666, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.328125, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19895833333333335, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8854166666666666, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.984375, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9947916666666666, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9536782535355709, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.937818287037037, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.937818287037037, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.9010416666666666, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.984375, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 1.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.9010416666666666, "name": 
"Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.328125, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19999999999999998, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.9010416666666666, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.984375, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 1.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9587563670488631, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9446180555555554, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9446180555555556, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.90625, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.984375, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 1.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.90625, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.328125, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19999999999999998, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.90625, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.984375, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 1.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9609068566179642, "name": "Cosine 
Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9474826388888888, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.947482638888889, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.890625, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.984375, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 1.0, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.890625, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.328125, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19999999999999998, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.890625, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.984375, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 1.0, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9551401340175182, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9396701388888888, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.939670138888889, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,164
Priyanshuchaudhary2425/Sumx-test-model
Priyanshuchaudhary2425
summarization
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "summarization", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-15T09:50:54Z
2024-05-15T10:49:47+00:00
12
0
--- base_model: facebook/bart-large-cnn license: mit metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: Sumx-test-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sumx-test-model This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8528 - Rouge1: 0.3746 - Rouge2: 0.1592 - Rougel: 0.2578 - Rougelsum: 0.3476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 1.1476 | 1.0 | 250 | 1.8528 | 0.3746 | 0.1592 | 0.2578 | 0.3476 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sumx-test-model This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8528 - Rouge1: 0.3746 - Rouge2: 0.1592 - Rougel: 0.2578 - Rougelsum: 0.3476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 1.1476 | 1.0 | 250 | 1.8528 | 0.3746 | 0.1592 | 0.2578 | 0.3476 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "facebook/bart-large-cnn", "license": "mit", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "Sumx-test-model", "results": []}]}
task
[ "SUMMARIZATION" ]
46,165
relik-ie/relik-reader-deberta-v3-small-re-ner-wikipedia
relik-ie
null
[ "safetensors", "relik-reader", "custom_code", "en", "arxiv:2408.00103", "license:apache-2.0", "region:us" ]
2024-08-02T08:15:27Z
2024-08-08T12:45:47+00:00
26
0
--- language: - en license: apache-2.0 --- <div align="center"> <img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150"> <img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50"> </div> <div align="center"> <h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1> </div> <div style="display:flex; justify-content: center; align-items: center; flex-direction: row;"> <a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a> &nbsp; &nbsp; <a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a> &nbsp; &nbsp; <a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-b31b1b.svg"></a> </div> <div style="display:flex; justify-content: center; align-items: center; flex-direction: row;"> <a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a> &nbsp; &nbsp; <a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a> &nbsp; &nbsp; <a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a> </div> This card is for a **Relation Extraction** model trained with **Named Entity Recognition**. The relation predictions are Wikidata properties. The entity types are taken from [Babelscape/SREDFM](https://huggingface.co/datasets/Babelscape/SREDFM) and are still WIP. A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**. 
## 🛠️ Installation Installation from PyPI ```bash pip install relik ``` <details> <summary>Other installation options</summary> #### Install with optional dependencies Install with all the optional dependencies. ```bash pip install relik[all] ``` Install with optional dependencies for training and evaluation. ```bash pip install relik[train] ``` Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss) FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package. For CPU: ```bash pip install relik[faiss] ``` For GPU: ```bash conda create -n relik python=3.10 conda activate relik # install pytorch conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia # GPU conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0 # or GPU with NVIDIA RAFT conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0 pip install relik ``` Install with optional dependencies for serving the models with [FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html). ```bash pip install relik[serve] ``` #### Installation from source ```bash git clone https://github.com/SapienzaNLP/relik.git cd relik pip install -e .[all] ``` </details> ## 🚀 Quick Start [//]: # (Write a short description of the model and how to use it with the `from_pretrained` method.) ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**. It is composed of two main components: a retriever and a reader. The retriever is responsible for retrieving relevant documents from a large collection, while the reader is responsible for extracting entities and relations from the retrieved documents. ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline. 
Here is an example of how to use ReLiK for **Entity Linking**: ```python from relik import Relik from relik.inference.data.objects import RelikOutput relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large") relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.") ``` RelikOutput( text="Michael Jordan was one of the best players in the NBA.", tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'], id=0, spans=[ Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"), Span(start=50, end=53, label="National Basketball Association", text="NBA"), ], triples=[], candidates=Candidates( span=[ [ [ {"text": "Michael Jordan", "id": 4484083}, {"text": "National Basketball Association", "id": 5209815}, {"text": "Walter Jordan", "id": 2340190}, {"text": "Jordan", "id": 3486773}, {"text": "50 Greatest Players in NBA History", "id": 1742909}, ... ] ] ] ), ) ## 📊 Performance We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). 
The following table shows the results (InKB Micro F1) of ReLiK Large and Base: | Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) | |------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------| | GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 | | EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 | | [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 | | ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 | Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128), N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model. GENRE uses mention dictionaries. The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090, except for EntQA which does not fit in 24GB of RAM and for which an A100 is used. ## 🤖 Models Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19). 
## 💽 Cite this work If you use any part of this work, please consider citing the paper as follows: ```bibtex @inproceedings{orlando-etal-2024-relik, title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget", author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: ACL 2024", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", } ```
null
Non_BioNLP
<div align="center"> <img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150"> <img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50"> </div> <div align="center"> <h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1> </div> <div style="display:flex; justify-content: center; align-items: center; flex-direction: row;"> <a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a> &nbsp; &nbsp; <a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a> &nbsp; &nbsp; <a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-b31b1b.svg"></a> </div> <div style="display:flex; justify-content: center; align-items: center; flex-direction: row;"> <a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a> &nbsp; &nbsp; <a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a> &nbsp; &nbsp; <a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a> </div> This card is for a **Relation Extraction** model trained with **Named Entity Recognition**. The relation predictions are Wikidata properties. The entity types are taken from [Babelscape/SREDFM](https://huggingface.co/datasets/Babelscape/SREDFM) and are still WIP. A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**. 
## 🛠️ Installation Installation from PyPI ```bash pip install relik ``` <details> <summary>Other installation options</summary> #### Install with optional dependencies Install with all the optional dependencies. ```bash pip install relik[all] ``` Install with optional dependencies for training and evaluation. ```bash pip install relik[train] ``` Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss) FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package. For CPU: ```bash pip install relik[faiss] ``` For GPU: ```bash conda create -n relik python=3.10 conda activate relik # install pytorch conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia # GPU conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0 # or GPU with NVIDIA RAFT conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0 pip install relik ``` Install with optional dependencies for serving the models with [FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html). ```bash pip install relik[serve] ``` #### Installation from source ```bash git clone https://github.com/SapienzaNLP/relik.git cd relik pip install -e .[all] ``` </details> ## 🚀 Quick Start [//]: # (Write a short description of the model and how to use it with the `from_pretrained` method.) ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**. It is composed of two main components: a retriever and a reader. The retriever is responsible for retrieving relevant documents from a large collection, while the reader is responsible for extracting entities and relations from the retrieved documents. ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline. 
Here is an example of how to use ReLiK for **Entity Linking**: ```python from relik import Relik from relik.inference.data.objects import RelikOutput relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large") relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.") ``` RelikOutput( text="Michael Jordan was one of the best players in the NBA.", tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'], id=0, spans=[ Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"), Span(start=50, end=53, label="National Basketball Association", text="NBA"), ], triples=[], candidates=Candidates( span=[ [ [ {"text": "Michael Jordan", "id": 4484083}, {"text": "National Basketball Association", "id": 5209815}, {"text": "Walter Jordan", "id": 2340190}, {"text": "Jordan", "id": 3486773}, {"text": "50 Greatest Players in NBA History", "id": 1742909}, ... ] ] ] ), ) ## 📊 Performance We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). 
The following table shows the results (InKB Micro F1) of ReLiK Large and Base: | Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) | |------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------| | GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 | | EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 | | [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 | | ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 | Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128), N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model. GENRE uses mention dictionaries. The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090, except for EntQA which does not fit in 24GB of RAM and for which an A100 is used. ## 🤖 Models Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19). 
## 💽 Cite this work If you use any part of this work, please consider citing the paper as follows: ```bibtex @inproceedings{orlando-etal-2024-relik, title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget", author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: ACL 2024", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", } ```
{"language": ["en"], "license": "apache-2.0"}
task
[ "NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION" ]
46,166
HPLT/translate-en-ar-v1.0-hplt
HPLT
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "ar", "license:cc-by-4.0", "autotrain_compatible", "region:us" ]
2024-02-27T11:37:58Z
2024-03-14T00:49:07+00:00
13
1
--- language: - en - ar license: cc-by-4.0 tags: - translation inference: false --- ### HPLT MT release v1.0 This repository contains the translation model for English-Arabic trained with HPLT data only. The model is available in both Marian and Hugging Face formats. ### Model Info * Source language: English * Target language: Arabic * Data: HPLT data only * Model architecture: Transformer-base * Tokenizer: SentencePiece (Unigram) * Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/ar-en/raw/v0). You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details. ### Usage **Note** that for quality considerations, we recommend using [HPLT/translate-en-ar-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-en-ar-v1.0-hplt_opus) instead of this model. The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`. #### Using Marian To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-ar.spm` from this repository. #### Using transformers We have also converted this model to the Hugging Face format and you can get started with the script below. 
**Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint cannot work with transformer versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`. ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-en-ar-v1.0-hplt") model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-en-ar-v1.0-hplt") inputs = ["Input goes here.", "Make sure the language is right."] batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True) model_output = model.generate( **batch_tokenized, num_beams=6, max_new_tokens=512 ) batch_detokenized = tokenizer.batch_decode( model_output, skip_special_tokens=True, ) print(batch_detokenized) ``` ## Benchmarks When decoded using Marian, the model has the following test scores. | Test set | BLEU | chrF++ | COMET22 | | -------------------------------------- | ---- | ----- | ----- | | FLORES200 | 26.8 | 55.0 | 0.8439 | | NTREX | 23.9 | 50.6 | 0.8062 | ### Acknowledgements This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546] Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
null
Non_BioNLP
### HPLT MT release v1.0 This repository contains the translation model for English-Arabic trained with HPLT data only. The model is available in both Marian and Hugging Face formats. ### Model Info * Source language: English * Target language: Arabic * Data: HPLT data only * Model architecture: Transformer-base * Tokenizer: SentencePiece (Unigram) * Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/ar-en/raw/v0). You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details. ### Usage **Note** that for quality considerations, we recommend using [HPLT/translate-en-ar-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-en-ar-v1.0-hplt_opus) instead of this model. The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`. #### Using Marian To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-ar.spm` from this repository. #### Using transformers We have also converted this model to the Hugging Face format and you can get started with the script below. **Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint cannot work with transformer versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`.
``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-en-ar-v1.0-hplt") model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-en-ar-v1.0-hplt") inputs = ["Input goes here.", "Make sure the language is right."] batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True) model_output = model.generate( **batch_tokenized, num_beams=6, max_new_tokens=512 ) batch_detokenized = tokenizer.batch_decode( model_output, skip_special_tokens=True, ) print(batch_detokenized) ``` ## Benchmarks When decoded using Marian, the model has the following test scores. | Test set | BLEU | chrF++ | COMET22 | | -------------------------------------- | ---- | ----- | ----- | | FLORES200 | 26.8 | 55.0 | 0.8439 | | NTREX | 23.9 | 50.6 | 0.8062 | ### Acknowledgements This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]. Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
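The chrF++ column in the test-score table above is a character n-gram F-score. The sketch below is a simplified illustration of plain chrF only (uniform weighting over n-gram orders 1-6, beta = 2, whitespace stripped); it omits the word-n-gram "++" extension, so real evaluations should use sacreBLEU instead:

```python
from collections import Counter

def char_ngrams(text, n):
    text = text.replace(" ", "")          # chrF strips whitespace by default
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_order=6, beta=2.0):
    """Average character n-gram precision/recall, combined as an F-beta score."""
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:            # strings shorter than n characters
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    return 0.0 if p + r == 0 else (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis/reference pair scores 1.0 and fully disjoint strings score 0.0; published numbers come from the full sacreBLEU implementation, not this sketch.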
{"language": ["en", "ar"], "license": "cc-by-4.0", "tags": ["translation"], "inference": false}
task
[ "TRANSLATION" ]
46,167
095ey11/bert-emotion
095ey11
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-05T19:42:26Z
2023-01-05T20:47:55+00:00
14
0
--- datasets: - tweet_eval license: apache-2.0 metrics: - precision - recall tags: - generated_from_trainer model-index: - name: bert-emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - type: precision value: 0.7071669427034283 name: Precision - type: recall value: 0.723286061789479 name: Recall --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2053 - Precision: 0.7072 - Recall: 0.7233 - Fscore: 0.7124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8623 | 1.0 | 815 | 0.7198 | 0.7536 | 0.6312 | 0.6559 | | 0.5637 | 2.0 | 1630 | 0.8756 | 0.7213 | 0.7166 | 0.7160 | | 0.2845 | 3.0 | 2445 | 1.2053 | 0.7072 | 0.7233 | 0.7124 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2053 - Precision: 0.7072 - Recall: 0.7233 - Fscore: 0.7124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8623 | 1.0 | 815 | 0.7198 | 0.7536 | 0.6312 | 0.6559 | | 0.5637 | 2.0 | 1630 | 0.8756 | 0.7213 | 0.7166 | 0.7160 | | 0.2845 | 3.0 | 2445 | 1.2053 | 0.7072 | 0.7233 | 0.7124 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
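The linear `lr_scheduler_type` listed above ramps the learning rate from 5e-5 down to 0 across training. A minimal sketch of that schedule, assuming zero warmup since the card lists none:

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_steps=0):
    """Learning rate for a linear schedule: optional warmup, then decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# 3 epochs x 815 optimizer steps per epoch = 2445 steps total (see table above).
schedule = [linear_lr(s, 2445) for s in range(2446)]
```

The schedule starts at the base learning rate, decreases monotonically, and reaches 0 exactly at the final step.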
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "train", "args": "emotion"}, "metrics": [{"type": "precision", "value": 0.7071669427034283, "name": "Precision"}, {"type": "recall", "value": 0.723286061789479, "name": "Recall"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,168
gaudi/opus-mt-el-fi-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T14:56:34Z
2024-10-19T00:03:01+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs.
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality.
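Reading the two tables above, the relative throughput of each engine against the `transformers` baseline can be computed directly; a small sketch (numbers copied from the tables, which benchmark generic Opus-MT models rather than this checkpoint specifically):

```python
# Tokens-per-second figures copied from the CPU and GPU tables above.
cpu_tps = {"transformers_4.26.1": 147.3, "marian_int8": 355.8, "ctranslate2_int8": 696.1}
gpu_tps = {"transformers_4.26.1": 1022.9, "marian_fp16": 3962.4, "ctranslate2_fp16": 9296.7}

def speedups(table, baseline="transformers_4.26.1"):
    """Throughput of each entry relative to the chosen baseline engine."""
    base = table[baseline]
    return {name: round(tps / base, 2) for name, tps in table.items()}

print(speedups(cpu_tps))  # ctranslate2_int8 is roughly 4.7x the CPU baseline
print(speedups(gpu_tps))  # ctranslate2_fp16 is roughly 9.1x the GPU baseline
```

These ratios are broadly consistent with the 6x-10x inference-time reduction reported in the internal benchmarks, though the workloads differ.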
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-el-fi --output_dir ./ctranslate2/opus-mt-el-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-el-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-el-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-el-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs.
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality.
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-el-fi --output_dir ./ctranslate2/opus-mt-el-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-el-fi-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-el-fi-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-el-fi-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-el-fi) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,169
BaxterAI/finetuning-sentiment-model-3000-samples
BaxterAI
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:amazon_polarity", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-23T01:02:23Z
2022-05-24T01:22:20+00:00
32
1
--- datasets: - amazon_polarity license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: amazon_polarity args: amazon_polarity metrics: - type: accuracy value: 0.9225 name: Accuracy - type: f1 value: 0.9240816326530612 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.8170 - Accuracy: 0.9225 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
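The reported Accuracy and F1 above both derive from the binary confusion matrix. As an illustration only, the counts below are hypothetical (chosen so accuracy matches the reported 0.9225; the model's true confusion matrix is not published):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy and positive-class F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Hypothetical counts (400 examples, 369 correct), NOT the model's real outputs:
acc, f1 = binary_metrics(tp=190, fp=15, fn=16, tn=179)
```

With these toy counts, F1 lands close to but not exactly at the reported 0.9241, since many confusion matrices share the same accuracy.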
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.8170 - Accuracy: 0.9225 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
{"datasets": ["amazon_polarity"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_polarity", "type": "amazon_polarity", "args": "amazon_polarity"}, "metrics": [{"type": "accuracy", "value": 0.9225, "name": "Accuracy"}, {"type": "f1", "value": 0.9240816326530612, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,170
basso4/basso4-finetuned-kde4-en-to-vi
basso4
translation
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-27T20:45:02Z
2023-10-27T22:16:55+00:00
121
0
--- base_model: Helsinki-NLP/opus-mt-en-fr datasets: - kde4 license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: basso4-finetuned-kde4-en-to-vi results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: kde4 type: kde4 config: en-vi split: train args: en-vi metrics: - type: bleu value: 38.454205168276246 name: Bleu --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # basso4-finetuned-kde4-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9576 - Bleu: 38.4542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
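The Bleu score above combines modified n-gram precisions with a brevity penalty that punishes translations shorter than the reference. A minimal sketch of how those pieces combine (the full metric, including tokenization and precision clipping, is handled by libraries such as sacreBLEU):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """1.0 when the hypothesis is at least reference length, else exp(1 - ref/hyp)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

def bleu_from_precisions(precisions, hyp_len, ref_len):
    """Geometric mean of modified n-gram precisions times the brevity penalty."""
    if min(precisions) == 0:
        return 0.0                      # any empty n-gram match zeroes plain BLEU
    log_mean = sum(math.log(p) for p in precisions) / len(precisions)
    return brevity_penalty(hyp_len, ref_len) * math.exp(log_mean)
```

A perfect translation (all precisions 1.0, matching length) scores 1.0, corresponding to the 0-100 scale after multiplying by 100.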
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # basso4-finetuned-kde4-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9576 - Bleu: 38.4542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "basso4-finetuned-kde4-en-to-vi", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-vi", "split": "train", "args": "en-vi"}, "metrics": [{"type": "bleu", "value": 38.454205168276246, "name": "Bleu"}]}]}]}
task
[ "TRANSLATION" ]
46,171
gaudi/opus-mt-en-sm-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T15:04:18Z
2024-10-19T00:29:42+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs.
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality.
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-sm --output_dir ./ctranslate2/opus-mt-en-sm-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-sm-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-sm-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-sm-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. 
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. 
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-sm --output_dir ./ctranslate2/opus-mt-en-sm-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-sm-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-sm-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-sm-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-sm) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,172
gaudi/opus-mt-wls-en-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T00:18:01Z
2024-10-18T22:56:50+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. 
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. 
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-wls-en --output_dir ./ctranslate2/opus-mt-wls-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-wls-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-wls-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-wls-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en) - This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. 
See the benchmark scripts for more details and to reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. 
The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-wls-en --output_dir ./ctranslate2/opus-mt-wls-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-wls-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-wls-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. 
) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-wls-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be identical to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-wls-en) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
46,173
Jhandry/roberta-base-bne-finetuned-amazon_practica
Jhandry
text-classification
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-11-29T13:30:30Z
2022-11-29T13:54:13+00:00
8
0
--- datasets: - amazon_reviews_multi license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: roberta-base-bne-finetuned-amazon_practica results: - task: type: text-classification name: Text Classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi config: es split: train args: es metrics: - type: accuracy value: 0.9365 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_practica This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2158 - Accuracy: 0.9365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1969 | 1.0 | 1250 | 0.1715 | 0.9343 | | 0.103 | 2.0 | 2500 | 0.2158 | 0.9365 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_practica This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2158 - Accuracy: 0.9365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1969 | 1.0 | 1250 | 0.1715 | 0.9343 | | 0.103 | 2.0 | 2500 | 0.2158 | 0.9365 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
{"datasets": ["amazon_reviews_multi"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-bne-finetuned-amazon_practica", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "config": "es", "split": "train", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.9365, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,174
besimray/miner_id_1_e0133dc7-d8f1-4a3d-b487-a48a7ef3227b_1729802815
besimray
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "meta", "facebook", "unsloth", "en", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-10-24T20:46:55Z
2024-10-24T20:51:56+00:00
7
0
--- base_model: meta-llama/Llama-3.2-3B language: - en library_name: transformers license: llama3.2 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Llama-3.2-3B For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. 
| Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. ## Special Thanks A huge thank you to the Meta and Llama team for creating and releasing these models. 
## Model Information The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). 
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
null
Non_BioNLP
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Llama-3.2-3B

For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B)

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |

- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.

## Special Thanks

A huge thank you to the Meta and Llama team for creating and releasing these models.

## Model Information

The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer**: Meta

**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

**Llama 3.2 family of models:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
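The inference-scalability benefit of Grouped-Query Attention (GQA) mentioned above comes mainly from a smaller KV cache. As a rough illustration (the layer and head counts below are assumptions for a generic 3B-class configuration, not figures from this card), the saving is simply the ratio of query heads to KV heads:

```python
# Back-of-the-envelope KV-cache comparison: GQA vs. full multi-head attention.
# Assumed (hypothetical) 3B-class configuration -- not taken from this card:
layers, q_heads, kv_heads, head_dim = 28, 24, 8, 128
seq_len, bytes_per_elem = 8192, 2  # fp16 cache at an 8K context

def kv_cache_bytes(n_kv_heads):
    # 2x for keys and values; one cached vector per layer, head, and position
    return 2 * layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

mha = kv_cache_bytes(q_heads)   # cache sized for every query head (no GQA)
gqa = kv_cache_bytes(kv_heads)  # shared KV heads under GQA
print(f"MHA: {mha / 2**30:.2f} GiB, GQA: {gqa / 2**30:.2f} GiB, saving {mha / gqa:.0f}x")
```

With these assumed dimensions the GQA cache is 3x smaller, which is what lets the same GPU serve longer contexts or more concurrent requests.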
{"base_model": "meta-llama/Llama-3.2-3B", "language": ["en"], "library_name": "transformers", "license": "llama3.2", "tags": ["llama-3", "llama", "meta", "facebook", "unsloth", "transformers"]}
task
[ "SUMMARIZATION" ]
46,175
TheBloke/Llama-2-7B-32K-Instruct-GGML
TheBloke
null
[ "transformers", "llama", "en", "dataset:togethercomputer/llama-instruct", "arxiv:2307.03172", "base_model:togethercomputer/Llama-2-7B-32K-Instruct", "base_model:finetune:togethercomputer/Llama-2-7B-32K-Instruct", "license:llama2", "region:us" ]
2023-08-21T12:19:40Z
2023-09-27T13:01:42+00:00
24
8
---
base_model: togethercomputer/Llama-2-7B-32K-Instruct
datasets:
- togethercomputer/llama-instruct
language:
- en
library_name: transformers
license: llama2
model_name: Llama2 7B 32K Instruct
inference: false
model_creator: Together
model_link: https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct
model_type: llama
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama2 7B 32K Instruct - GGML
- Model creator: [Together](https://huggingface.co/togethercomputer)
- Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)

## Description

This repo contains GGML format model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).

### Important note regarding GGML files.

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support. Please use the GGUF models instead.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML)
* [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)

## Prompt template: Llama2-Instruct-Only

```
[INST]
{prompt}
[\INST]
```

<!-- compatibility_ggml start -->
## Compatibility

These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.

For support with latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-32k-instruct.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llama-2-7b-32k-instruct.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llama-2-7b-32k-instruct.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-7b-32k-instruct.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llama-2-7b-32k-instruct.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.83 GB | 6.33 GB | Original quant method, 4-bit. |
| [llama-2-7b-32k-instruct.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llama-2-7b-32k-instruct.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.24 GB | 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llama-2-7b-32k-instruct.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.65 GB | 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llama-2-7b-32k-instruct.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llama-2-7b-32k-instruct.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llama-2-7b-32k-instruct.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llama-2-7b-32k-instruct.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llama-2-7b-32k-instruct.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.13 GB | 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier. For compatibility with latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST]\n{prompt}\n[\INST]"
```

Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model.
For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end -->

# Original model card: Together's Llama2 7B 32K Instruct

# Llama-2-7B-32K-Instruct

## Model Description

Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) over high-quality instruction and chat data. We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct). We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**. We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)). The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct). We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
2. **Long-context Summarization and Long-context QA**. We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```

You can load the model directly from the Hugging Face model hub using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
  trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
  temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using prompt formatted by:

```
[INST]\n<your instruction here>\n[\INST]\n\n
```

For example, if we query the model with

```
[INST]\nWrite a poem about cats\n[\INST]\n\n
```

the model will return
```
[INST]
Write a poem about cats
[/INST]

Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:

Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.

With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.

But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.

So let us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.

This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```

## Model Evaluation

We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/); 2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and 3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K). We compare with models including [GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5), [Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k) and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k). We summarize the results below:

* Alpaca Eval

| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307 |
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |

* Rouge Score over BookSum

| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |

## Limitations and Bias

As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
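## Prompt Formatting Helper

As a small convenience (an illustrative sketch, not part of the official recipe), the instruction format used in the Model Usage examples above can be wrapped in a tiny helper:

```python
# Hypothetical helper: wraps an instruction in the [INST] ... [/INST] format
# used by this card's Python example above.
def format_instruction(instruction: str) -> str:
    return f"[INST]\n{instruction}\n[/INST]\n\n"

prompt = format_instruction("Write a poem about cats")
# `prompt` can then be passed to tokenizer.encode(...) as in the example above.
```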
null
Non_BioNLP
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama2 7B 32K Instruct - GGML - Model creator: [Together](https://huggingface.co/togethercomputer) - Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) ## Description This repo contains GGML format model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct). ### Important note regarding GGML files. The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support. Please use the GGUF models instead. 
### About GGML GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML) * [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) ## Prompt template: Llama2-Instruct-Only ``` [INST] {prompt} [\INST] ``` <!-- compatibility_ggml start --> ## Compatibility These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023. For support with latest llama.cpp, please use GGUF files instead. 
The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future. ## Explanation of the new k-quant methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. 
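The effective bits-per-weight figures quoted above can be re-derived from the super-block layouts just described. A quick tally (a sketch; the fp16 width of the super-block scale and min fields is an assumption consistent with the descriptions):

```python
# Tally the bits stored per 256-weight super-block for several k-quant layouts.
# Assumption: super-block scale (and, for "type-1", min) are fp16 (16 bits each).
def bits_per_weight(weight_bits, n_blocks, scale_bits, min_bits=0, super_fields=1):
    weights = 256                                  # weights per super-block
    total = weights * weight_bits                  # the quantized weights themselves
    total += n_blocks * (scale_bits + min_bits)    # per-block scales (and mins)
    total += super_fields * 16                     # fp16 super-block scale (and min)
    return total / weights

print(bits_per_weight(3, n_blocks=16, scale_bits=6))                             # Q3_K -> 3.4375
print(bits_per_weight(4, n_blocks=8, scale_bits=6, min_bits=6, super_fields=2))  # Q4_K -> 4.5
print(bits_per_weight(5, n_blocks=8, scale_bits=6, min_bits=6, super_fields=2))  # Q5_K -> 5.5
print(bits_per_weight(6, n_blocks=16, scale_bits=8))                             # Q6_K -> 6.5625
```

The overhead beyond the raw weight bits (roughly 0.4-0.5 bpw here) is the price of the per-block scales and mins that keep quantization error low.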
</details> <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-7b-32k-instruct.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | [llama-2-7b-32k-instruct.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | [llama-2-7b-32k-instruct.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [llama-2-7b-32k-instruct.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | [llama-2-7b-32k-instruct.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.83 GB| 6.33 GB | Original quant method, 4-bit. | | [llama-2-7b-32k-instruct.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. 
Uses GGML_TYPE_Q4_K for all tensors | | [llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | [llama-2-7b-32k-instruct.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.24 GB| 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | [llama-2-7b-32k-instruct.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.65 GB| 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | [llama-2-7b-32k-instruct.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | [llama-2-7b-32k-instruct.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | [llama-2-7b-32k-instruct.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. 
| | [llama-2-7b-32k-instruct.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization | | [llama-2-7b-32k-instruct.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGML/blob/main/llama-2-7b-32k-instruct.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.13 GB| 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier. For compatibility with the latest llama.cpp, please use GGUF files instead. ``` ./main -t 10 -ngl 32 -m llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST]\n{prompt}\n[/INST]" ``` Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context. 
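The RoPE guidance above is just a ratio: `--rope-freq-scale` equals the model's trained context divided by the target context (0.5 doubles context, 0.25 quadruples it). A small shell sketch; the 4096 trained context and the invocation shown are illustrative assumptions, not values from this card:

```shell
# Sketch of the rule of thumb above: rope-freq-scale = trained context / target context.
trained_ctx=4096   # assumption: base Llama 2 training context; adjust for your model
target_ctx=16384   # the context you want at inference time
scale=$(awk -v t="$trained_ctx" -v d="$target_ctx" 'BEGIN { printf "%.2f", t / d }')
echo "scale=$scale"
# Hypothetical invocation (filename taken from the table above, flags as documented):
echo "./main -m llama-2-7b-32k-instruct.ggmlv3.q4_K_M.bin -c $target_ctx --rope-freq-base 10000 --rope-freq-scale $scale"
```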
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. 
Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Together's Llama2 7B 32K Instruct # Llama-2-7B-32K-Instruct ## Model Description Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data. We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct). We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback! ## Data Collection Details Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts: 1. 
**19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**. We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)). The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct). We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct). 2. **Long-context Summarization and Long-context QA**. We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172). The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%). ## Model Usage We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference. 
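The 50/25/25 mixture described above can be sketched as a weighted sampler over the three data sources. The source names and sampling scheme below are illustrative assumptions, not the actual training recipe:

```python
import random

# Illustrative sketch of the stated finetuning mixture:
# 50% instruction data, 25% BookSum, 25% MQA (names are assumptions).
MIXTURE = {"instruct": 0.50, "booksum": 0.25, "mqa": 0.25}

def sample_source(rng: random.Random) -> str:
    """Pick a data source according to the mixture weights."""
    r = rng.random()
    cumulative = 0.0
    for name, weight in MIXTURE.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # guard against float rounding on the last bucket

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
```

Over many draws, the observed proportions converge to the stated 50/25/25 split.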
To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance: ``` # Please update the path of `CUDA_HOME` export CUDA_HOME=/usr/local/cuda-11.8 pip install transformers==4.31.0 pip install sentencepiece pip install ninja pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` You can load the model directly from the Hugging Face model hub using ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct") model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct", trust_remote_code=True, torch_dtype=torch.float16) input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt") output = model.generate(input_ids, max_length=128, temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50) output_text = tokenizer.decode(output[0], skip_special_tokens=True) ``` The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model using a prompt formatted as: ``` [INST]\n<your instruction here>\n[/INST]\n\n ``` For example, if we query the model with ``` [INST]\nWrite a poem about cats\n[/INST]\n\n ``` the model will return ``` [INST] Write a poem about cats [/INST] Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats: Cats, oh cats, how can I describe you? Your beauty is beyond compare, it seems. You're graceful and elegant, like a ballerina's dance, But don't let your charm fool you, for you're not easily tamed. With your soft purring and playful meows, You draw us in with your enchanting powers. 
We love to watch you play, your tail twirling 'round, As if you're dancing on air, with no sound. But don't be fooled by your sweetness, my friend, For beneath that gentle exterior, lies a fierce defender. When danger lurks, you'll spring into action, Protecting those you hold dear, without question. So let us admire you, from afar, For in your own way, you're truly unique, a star. And though we may never fully understand, The depths of your soul, we'll always stand, hand in paw, as one. This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives. ``` ## Model Evaluation We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/); 2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and 3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K). We compare with models including [GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5), [Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k) and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k). 
We summarize the results below: * Alpaca Eval | Model | win_rate | standard_error | n_total | avg_length | | -------- | ------- | ------- | ------- | ------- | | Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 | | Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 | | oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 | | text_davinci_003 | 50.00 | 0.00 | 805 | 307| | falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 | | alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 | | alpaca-7b | 26.46 | 1.54 | 805 | 396 | | text_davinci_001 | 15.17 | 1.24 | 804 | 296 | * Rouge Score over BookSum | Model | R1 | R2 | RL | | -------- | ------- | ------- | ------- | | Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 | | Longchat-7b-16k | 0.303 | 0.055 | 0.160 | | Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 | | GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 | | Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 | * Accuracy over MQA | Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) | | -------- | ------- | ------- | ------- | | Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 | | Longchat-7b-16k | 0.510 | 0.473 | 0.428 | | Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 | | GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 | | Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 | ## Limitations and Bias As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model. ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
{"base_model": "togethercomputer/Llama-2-7B-32K-Instruct", "datasets": ["togethercomputer/llama-instruct"], "language": ["en"], "library_name": "transformers", "license": "llama2", "model_name": "Llama2 7B 32K Instruct", "inference": false, "model_creator": "Together", "model_link": "https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct", "model_type": "llama", "quantized_by": "TheBloke"}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,176
sergifusterdura/dailynoteclassifier-setfit-v1.5-16-shot
sergifusterdura
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "region:us" ]
2024-11-23T21:51:12Z
2024-11-24T11:35:19+00:00
4
1
--- base_model: BAAI/bge-small-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: [] inference: true --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model has been fine-tuned for the classification of daily notes. It is a multiclass classifier capable of categorizing text inputs into six distinct classes: - **Cita** (Appointment) - **Comprar** (Shopping) - **Trabajo** (Work) - **Recordatorio** (Reminder) - **Estudios** (Studies) - **Hogar** (Home) **Note:** While the model has been fine-tuned specifically for the Spanish language, it also performs well with notes written in English. 
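A downstream app typically maps the six predicted categories above to follow-up actions. A minimal routing sketch; the action strings and default behavior are illustrative assumptions, not part of the model:

```python
# Illustrative routing of the six note categories listed above to actions.
ACTIONS = {
    "Cita": "add to calendar",
    "Comprar": "append to shopping list",
    "Trabajo": "file under work tasks",
    "Recordatorio": "schedule a reminder",
    "Estudios": "file under study notes",
    "Hogar": "append to household chores",
}

def route_note(predicted_label: str) -> str:
    """Map a predicted class to an action, defaulting to manual review."""
    return ACTIONS.get(predicted_label, "flag for manual review")
```

With the SetFit model loaded (see the inference example below), each prediction can be passed straight through `route_note`.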
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 6 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("sergifusterdura/dailynoteclassifier-setfit-v1.5-16-shot") # Run inference preds = model("Tengo que ir a comprar fruta esta tarde.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.11.5 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.46.3 - PyTorch: 2.5.1+cpu - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true}
task
[ "TEXT_CLASSIFICATION" ]
46,177
jondurbin/spicyboros-70b-2.2
jondurbin
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "not-for-all-audiences", "dataset:jondurbin/airoboros-2.2", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-09-13T15:45:42Z
2023-09-13T20:29:25+00:00
1,888
15
--- datasets: - jondurbin/airoboros-2.2 license: llama2 tags: - not-for-all-audiences --- ### Overview __Usage restriction: To use this model, you must agree to the following:__ - Some of the content that can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content. - None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions; they are simply text generated by LLMs and/or scraped from the web. - Use with extreme caution, particularly in locations with less-than-free speech laws. - You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities. __Ok, now that the warning is out of the way...__ Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros). Highlights: - The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below). - This version also includes "de-alignment" data, to enable less savory interactions and outputs. - To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl) - I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliché - no guarantees that won't still happen, but in theory it may happen less. - More multiple choice, better awareness, some alignment for normal use cases but system-prompt overridable, etc. 
__WARNING: This model will gladly spew profane and otherwise NSFW content if asked; use with care.__ Breakdown of the training data: | Count | Category | |--------|----------------------------| | 60 | quiz | | 63 | card | | 100 | detailed\_writing | | 103 | experience | | 114 | greeting | | 200 | song | | 204 | editor | | 250 | counterfactual\_contextual | | 268 | cot | | 339 | theory\_of\_mind | | 460 | misconception | | 500 | summarization | | 573 | awareness | | 715 | riddle | | 719 | agent | | 800 | plan | | 873 | gtkm | | 966 | rp | | 1000 | stylized\_response | | 1000 | wordgame | | 1279 | multiple\_choice | | 1641 | joke | | 1785 | writing | | 2155 | contextual | | 2364 | roleplay | | 2508 | trivia | | 5216 | general | | 5779 | coding | | 11367 | orca | In other words, it's a fairly general-purpose model, but focuses fairly heavily on instruction/response pairs rather than casual chat/roleplay. *Why do I try to remove censorship?* - laws vary widely based on time and location - a language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal - these models just produce text; what you do with that text is your responsibility - many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format: ``` A chat. USER: {prompt} ASSISTANT: ``` The default system prompt ("A chat.") was used for most of the prompts; however, it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc. Here's another example: ``` A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously. 
USER: {prompt} ASSISTANT: ``` And a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf). ``` A chat between old friends: Timmy and Tommy. {description of characters} {setting for the chat} Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee." Tommy: ``` __*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__ ### Fine-tuning details https://gist.github.com/jondurbin/51a336c582a224de197ba1d2c6b1da97 *Note: I used checkpoint 750 for the final model!* ### Helpful usage tips *The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. 
Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are blueberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. 
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:

```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:

```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions.
This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re

import requests


def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ... search via duck duck go using search_string
    ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 2.2 models are built on top of llama-2/codellama.

The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
null
Non_BioNLP
### Overview

__Usage restriction: To use this model, you must agree to the following:__

- Some of the content that can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with extreme caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities.

__Ok, now that the warning is out of the way...__

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)

Highlights:

- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- This version also includes "de-alignment" data, to enable less savory interactions and outputs.
- To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl)
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc.
__WARNING: This model will gladly spew profane and otherwise NSFW content, if asked, use with care.__

Breakdown of the training data:

| Count | Category |
|--------|----------------------------|
| 60 | quiz |
| 63 | card |
| 100 | detailed\_writing |
| 103 | experience |
| 114 | greeting |
| 200 | song |
| 204 | editor |
| 250 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 460 | misconception |
| 500 | summarization |
| 573 | awareness |
| 715 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 966 | rp |
| 1000 | stylized\_response |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1641 | joke |
| 1785 | writing |
| 2155 | contextual |
| 2364 | roleplay |
| 2508 | trivia |
| 5216 | general |
| 5779 | coding |
| 11367 | orca |

In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay.

*Why do I try to remove censorship?*

- laws vary widely based on time and location
- language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your responsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format:

```
A chat.
USER: {prompt}
ASSISTANT:
```

The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.

Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```

And a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).

```
A chat between old friends: Timmy and Tommy.
{description of characters}

{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```

__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__

### Fine-tuning details

https://gist.github.com/jondurbin/51a336c582a224de197ba1d2c6b1da97

*Note: I used checkpoint 750 for final model!*

### Helpful usage tips

*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omitted for readability.*

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.

*The __only__ prompts that need this closed context formatting are closed-context instructions.
Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.

Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:

```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:

```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions.
This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re

import requests


def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ... search via duck duck go using search_string
    ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 2.2 models are built on top of llama-2/codellama.

The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
{"datasets": ["jondurbin/airoboros-2.2"], "license": "llama2", "tags": ["not-for-all-audiences"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,178
fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-06-27T23:22:51Z
2024-06-27T23:23:05+00:00
5
0
---
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---

This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:

general domain

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
  'fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp',
  trust_remote_code=True
)
embeddings = model.encode([
  'first text to embed',
  'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
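For intuition about what `cos_sim` returns: it is the cosine of the angle between the two embedding vectors, a value in [-1, 1] where 1 means identical direction. A minimal pure-Python sketch of the same computation (illustrative only; in practice use the `sentence_transformers` utility shown above):

```python
import math

def cosine_similarity(u, v):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0, orthogonal directions score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```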
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:

general domain

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
  'fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp',
  trust_remote_code=True
)
embeddings = model.encode([
  'first text to embed',
  'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-6272024-qn9b-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
task
[ "TEXT_CLASSIFICATION" ]
46,179
TheBloke/hippogriff-30b-chat-GGUF
TheBloke
text-generation
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPT4-LLM-Cleaned", "dataset:teknium/GPTeacher-General-Instruct", "dataset:metaeval/ScienceQA_text_only", "dataset:hellaswag", "dataset:openai/summarize_from_feedback", "dataset:riddle_sense", "dataset:gsm8k", "dataset:OpenAssistant/oasst1", "base_model:openaccess-ai-collective/hippogriff-30b-chat", "base_model:quantized:openaccess-ai-collective/hippogriff-30b-chat", "license:other", "region:us" ]
2023-09-20T02:33:13Z
2023-09-27T12:53:36+00:00
546
1
---
base_model: openaccess-ai-collective/hippogriff-30b-chat
datasets:
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- OpenAssistant/oasst1
language:
- en
library_name: transformers
license: other
model_name: Hippogriff 30B Chat
pipeline_tag: text-generation
inference: false
model_creator: Open Access AI Collective
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
  The assistant gives helpful, detailed, and polite answers to the user''s questions.
  USER: {prompt} ASSISTANT:

  '
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Hippogriff 30B Chat - GGUF
- Model creator: [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat)

<!-- description start -->
## Description

This repo contains GGUF format model files for [OpenAccess AI Collective's Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/hippogriff-30b-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF) * [Open Access AI Collective's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. 
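The Vicuna-style template shown above is plain string formatting, so it can be applied programmatically before sending text to any of the clients listed earlier. A minimal sketch (the helper name is illustrative, not part of any library):

```python
# Wrap a user message in the Vicuna-style prompt this model expects.
# The function name is illustrative, not a library API.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_vicuna_prompt(user_message: str) -> str:
    """Return the full prompt string for a single-turn exchange."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_vicuna_prompt("What is GGUF?"))
```

The generated text should be read as the continuation after the trailing `ASSISTANT:`.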
## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
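As a back-of-the-envelope check, several of the bits-per-weight figures above follow directly from the block layouts: each super-block of 256 weights stores the quantized values plus its block scales (and mins for "type-1"), plus one or two fp16 super-block scales. The layout below is simplified from the llama.cpp source and is illustrative, not authoritative:

```python
# Reproduce the effective bits-per-weight (bpw) figures from the
# super-block layouts described above. Layouts are simplified from
# llama.cpp's k-quants and should be treated as an approximation.

WEIGHTS = 256  # weights per super-block (QK_K)

def bpw(qbits, blocks, scale_bits, has_mins, super_fp16):
    # per-block scales (and mins, for "type-1" quants)
    meta = blocks * scale_bits * (2 if has_mins else 1)
    # quantized weights + block metadata + fp16 super-block scale(s)
    total = WEIGHTS * qbits + meta + super_fp16 * 16
    return total / WEIGHTS

print(bpw(3, 16, 6, False, 1))  # Q3_K -> 3.4375
print(bpw(4, 8, 6, True, 2))    # Q4_K -> 4.5
print(bpw(5, 8, 6, True, 2))    # Q5_K -> 5.5
print(bpw(6, 16, 8, False, 1))  # Q6_K -> 6.5625
```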
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [hippogriff-30b.Q2_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [hippogriff-30b.Q3_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [hippogriff-30b.Q3_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [hippogriff-30b.Q3_K_L.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [hippogriff-30b.Q4_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [hippogriff-30b.Q4_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | [hippogriff-30b.Q4_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [hippogriff-30b.Q5_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [hippogriff-30b.Q5_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_S.gguf) | Q5_K_S | 5 | 
22.40 GB| 24.90 GB | large, low quality loss - recommended | | [hippogriff-30b.Q5_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [hippogriff-30b.Q6_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [hippogriff-30b.Q8_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/hippogriff-30b-chat-GGUF and below it, a specific filename to download, such as: hippogriff-30b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m hippogriff-30b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/hippogriff-30b-chat-GGUF", model_file="hippogriff-30b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end -->

<!-- original-model-card start -->
# Original model card: OpenAccess AI Collective's Hippogriff 30B Chat


# Hippogriff 30B Chat

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset, and removes all Alpaca style prompts using `###` in favor of chat only style prompts using `USER:`,`ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])

# Training Datasets

Hippogriff 30B Chat is a Llama 30B model fine-tuned on the following datasets

- OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco
- synthetic jokes generation and explanation derived from reddit jokes dataset
- synthetic prose generation and rewriting self-chat
- Q&A based on provided context
- self instruct augmented logic_inference_oa
- de-duped pygmalion dataset, filtered down to RP data, cleaned, English only, 25%
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations with 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) synthetic self chat dataset derived from about 1000 rows
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

Hippogriff differs from Manticore as it does not use the WizardLM, WizardVicuna, Alpaca, or ShareGPT datasets.

# Initial thoughts

Hippogriff follows instructions pretty well. It still struggles with anything that has to do with math. Prose is much better than Manticore. Handles in-context QA much better.

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, 0x000011b for the RP dataset, and the support from everyone in our AI Discord community.

# Demo

A Spaces demo is not provided for this release due to 30B models currently not fitting in VRAM.

## Build

Hippogriff was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- 1.5 epochs taking approximately 12 hours. No further epochs will be released.

## Bias, Risks, and Limitations

Hippogriff has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Hippogriff was fine-tuned from the base model LLaMA 30B; please refer to its model card's Limitations Section for relevant information.

The core functionality of Llama is taking a string of text and predicting the next token.
While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Hippogriff it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Hippogriff to produce factually accurate output. Llama was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case Hippogriff may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how Hippogriff will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Examples TBD ``` USER: write an original short story of at least 300 words in the style of Zora Neale Hurston. Incorporate the following literary devices: Euphemism and Foreshadowing. ASSISTANT: Once upon a time, there was a black girl named Rose, whose skin shined like a pearl beneath the hot sun. She lived with her mama and daddy on their farm, where they grew crops to feed the family and neighbors. On this particular day, Rose felt a tinge of excitement as she watched her parents work diligently in their fields. 'It's almost harvest season!' she thought, imagining the delicious meals that awaited them. But, soon after, it became apparent that something sinister lurked amidst the rows of fresh produce. As her father hoed around a vine-covered cucumber plant, he unearthed a peculiar rock. It was the size of an acorn but had a strange shape, unlike anything he or his wife had ever seen. "What have we here?" He said, turning to his wife with curiosity and excitement. 
She examined the rock for herself and declared, "Why, it looks like a piece of coal!" But as both mother and daughter looked closer, it seemed to be far more than a mere piece of coal. A sense of unease crept up Rose's spine when she noticed that the rock resembled the dark face of a baby in the womb. As Rose tried to suppress a scream, her mother gasped, "Sweet Jesus! Let's bury this thing before anyone sees!" And so, they quickly hid the stone back underground, hoping that nobody would ever discover their secret. ``` ``` USER: tell me 5 words that start with "ex" and then the sum of 33 and 77 ASSISTANT: Excite, exotic, exhibit, excitable, excessive ``` ``` USER: sally (a girl) has 3 brothers. Each brother has 2 sisters. how many sisters does sally have? ASSISTANT: Sally has 3 brothers, and each brother has 2 sisters. To find out how many sisters Sally has in total, we need to multiply the number of bro thers by the number of sisters per brother, then add 1 to account for Sally herself. So, 3 * 2 + 1 = 6 sisters in total ``` <!-- original-model-card end -->
null
Non_BioNLP
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Hippogriff 30B Chat - GGUF - Model creator: [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective) - Original model: [Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenAccess AI Collective's Hippogriff 30B Chat](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. 
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/hippogriff-30b-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF) * [Open Access AI Collective's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. 
Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [hippogriff-30b.Q2_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [hippogriff-30b.Q3_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [hippogriff-30b.Q3_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [hippogriff-30b.Q3_K_L.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [hippogriff-30b.Q4_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [hippogriff-30b.Q4_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | 
[hippogriff-30b.Q4_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [hippogriff-30b.Q5_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [hippogriff-30b.Q5_K_S.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended | | [hippogriff-30b.Q5_K_M.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [hippogriff-30b.Q6_K.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [hippogriff-30b.Q8_0.gguf](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGUF/blob/main/hippogriff-30b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/hippogriff-30b-chat-GGUF and below it, a specific filename to download, such as: hippogriff-30b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/hippogriff-30b-chat-GGUF hippogriff-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m hippogriff-30b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/hippogriff-30b-chat-GGUF", model_file="hippogriff-30b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. 
Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenAccess AI Collective's Hippogriff 30B Chat # Hippogriff 30B Chat [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using `###` in favor of chat only style prompts using `USER:`,`ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens. Questions, comments, feedback, looking to donate, or want to help? 
Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected]) # Training Datasets Hippogriff 30B Chat is a Llama 30B model fine-tuned on the following datasets - OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco - synthetic jokes generation and explanation derived from reddit jokes dataset - synthetic prose generation and rewriting self-chat - Q&A based on provided context - self instruct augmented logic_inference_oa - de-duped pygmalion dataset, filtered down to RP data, cleaned, english only, 25% - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented - hellaswag, updated for detailed explanations w 30K+ rows - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented - [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) synthetic self chat dataset derived from about 1000 rows - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split - [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization Hippogriff differs from Manticore as it does not use the WizardLM, WizardVicuna, Alpaca, or ShareGPT datasets. # Initial thoughts Hippogriff follows instructions pretty well. It still struggles with anything that has to do with math. Prose is much better than manticore. Handles in-context QA much better. 
# Shoutouts Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, 0x000011b for the RP dataset, and everyone in our AI Discord community for their support. # Demo A Spaces demo is not provided for this release due to 30B models currently not fitting in VRAM. ## Build Hippogriff was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB - 1.5 epochs taking approximately 12 hours. No further epochs will be released. ## Bias, Risks, and Limitations Hippogriff has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Hippogriff was fine-tuned from the base model LLaMA 30B; please refer to its model card's Limitations section for relevant information. The core functionality of Llama is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Hippogriff it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Hippogriff to produce factually accurate output. Llama was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case Hippogriff may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how Hippogriff will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
## Examples TBD ``` USER: write an original short story of at least 300 words in the style of Zora Neale Hurston. Incorporate the following literary devices: Euphemism and Foreshadowing. ASSISTANT: Once upon a time, there was a black girl named Rose, whose skin shined like a pearl beneath the hot sun. She lived with her mama and daddy on their farm, where they grew crops to feed the family and neighbors. On this particular day, Rose felt a tinge of excitement as she watched her parents work diligently in their fields. 'It's almost harvest season!' she thought, imagining the delicious meals that awaited them. But, soon after, it became apparent that something sinister lurked amidst the rows of fresh produce. As her father hoed around a vine-covered cucumber plant, he unearthed a peculiar rock. It was the size of an acorn but had a strange shape, unlike anything he or his wife had ever seen. "What have we here?" He said, turning to his wife with curiosity and excitement. She examined the rock for herself and declared, "Why, it looks like a piece of coal!" But as both mother and daughter looked closer, it seemed to be far more than a mere piece of coal. A sense of unease crept up Rose's spine when she noticed that the rock resembled the dark face of a baby in the womb. As Rose tried to suppress a scream, her mother gasped, "Sweet Jesus! Let's bury this thing before anyone sees!" And so, they quickly hid the stone back underground, hoping that nobody would ever discover their secret. ``` ``` USER: tell me 5 words that start with "ex" and then the sum of 33 and 77 ASSISTANT: Excite, exotic, exhibit, excitable, excessive ``` ``` USER: sally (a girl) has 3 brothers. Each brother has 2 sisters. how many sisters does sally have? ASSISTANT: Sally has 3 brothers, and each brother has 2 sisters. To find out how many sisters Sally has in total, we need to multiply the number of brothers by the number of sisters per brother, then add 1 to account for Sally herself. 
So, 3 * 2 + 1 = 6 sisters in total ``` <!-- original-model-card end -->
{"base_model": "openaccess-ai-collective/hippogriff-30b-chat", "datasets": ["QingyiSi/Alpaca-CoT", "teknium/GPT4-LLM-Cleaned", "teknium/GPTeacher-General-Instruct", "metaeval/ScienceQA_text_only", "hellaswag", "openai/summarize_from_feedback", "riddle_sense", "gsm8k", "OpenAssistant/oasst1"], "language": ["en"], "library_name": "transformers", "license": "other", "model_name": "Hippogriff 30B Chat", "pipeline_tag": "text-generation", "inference": false, "model_creator": "Open Access AI Collective", "model_type": "llama", "prompt_template": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:\n", "quantized_by": "TheBloke"}
task
[ "SUMMARIZATION" ]
46,180
MarcoBrigo11/llama3-samsum
MarcoBrigo11
summarization
[ "peft", "safetensors", "trl", "sft", "summarization", "transformers", "llama3", "Lora", "QLora", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
2024-07-24T22:15:37Z
2024-07-29T15:16:46+00:00
8
0
--- base_model: meta-llama/Meta-Llama-3-8B library_name: peft license: llama3 tags: - trl - sft - summarization - transformers - llama3 - Lora - QLora - generated_from_trainer model-index: - name: trained_weigths results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-samsum This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Samsung/samsum](https://huggingface.co/datasets/Samsung/samsum) dataset. ## Model description It is a first version and still has to be improved. The challenge is to fine-tune the model using limited resources. The fine-tuning was performed by downsampling the dataset, under Colab free plan restrictions. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.43.2 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
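Since this repository holds a PEFT/LoRA adapter rather than full model weights, a sketch of the low-rank update LoRA applies may help. The numpy example below is illustrative only — it is not the actual PEFT implementation, and the dimensions are made up:

```python
import numpy as np

# LoRA idea: keep the base weight W frozen and learn a low-rank correction
# scaled by alpha / r, so W' = W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 4, 16

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus low-rank path; with B == 0 the output equals the base.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
print(np.allclose(lora_forward(x), W @ x))  # True before any training
```

Because B starts at zero, the adapter changes nothing until training updates A and B — which is why only the small adapter matrices need to be stored and shipped here.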
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-samsum This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Samsung/samsum](https://huggingface.co/datasets/Samsung/samsum) dataset. ## Model description It is a first version and still has to be improved. The challenge is to fine-tune the model using limited resources. The fine-tuning was performed by downsampling the dataset, under Colab free plan restrictions. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.43.2 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
{"base_model": "meta-llama/Meta-Llama-3-8B", "library_name": "peft", "license": "llama3", "tags": ["trl", "sft", "summarization", "transformers", "llama3", "Lora", "QLora", "generated_from_trainer"], "model-index": [{"name": "trained_weigths", "results": []}]}
task
[ "SUMMARIZATION" ]
46,181
mini1013/master_main_item_top_bt
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
2024-12-28T20:41:32Z
2025-01-08T19:54:23+00:00
7
0
--- base_model: mini1013/master_domain library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 이니스프리 노세범 미네랄 파우더 2개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파우더/파우더팩트 Coupang > 뷰티 > 로드샵 > 메이크업 > 베이스 메이크업 > 파우더/파우더팩트 - text: 화장품 명품 차스킨 멀티밤 9g 미백 주름개선 2중기 LotteOn > 뷰티 > 스킨케어 > 마스크/팩 > 마사지크림/젤 LotteOn > 뷰티 > 스킨케어 > 마스크/팩 > 마사지크림/젤 - text: 헤라 메이크업 픽서 (110ml) LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트 - text: 포먼트 젤네일 P.4 페탈로지 × 1개 LotteOn > 뷰티 > 네일 > 네일관리기기 > 젤네일램프 LotteOn > 뷰티 > 네일 > 네일관리기기 > 젤네일램프 - text: 실크테라피 3D 볼류마이징 에센스 매니아구성(150ml 6개 + 60ml 1개) MinSellAmount (#M)바디/헤어>헤어케어>헤어에센스 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 헤어에센스 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9741772986555503 name: Accuracy --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
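The contrastive step above trains the embedding body with CosineSimilarityLoss (listed in the hyperparameters), which scores sentence-embedding pairs by cosine similarity — same-label pairs are pushed toward 1, different-label pairs toward 0. A minimal numpy illustration of the score itself (not the SetFit implementation):

```python
import numpy as np

# Cosine similarity between two embedding vectors: the dot product of the
# vectors divided by the product of their norms.
def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Parallel vectors score 1.0; orthogonal vectors score 0.0.
same = cosine_similarity(np.array([1.0, 0.0, 1.0]), np.array([2.0, 0.0, 2.0]))
diff = cosine_similarity(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(same, diff)
```

During fine-tuning, the loss penalizes the squared gap between this score and the pair's label, which is what reshapes the embedding space before the logistic-regression head is fit.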
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 14 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5 | <ul><li>'비앤진 프로텍트 미스트 앤 픽서 - 100ml x 1개 100ml × 1개 LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트'</li><li>'이니스프리 노세범 미네랄 파우더 5g 3.맨토스(6호-멜론) (#M)홈>화장품/미용>베이스메이크업>파우더>루스파우더 Naverstore > 화장품/미용 > 베이스메이크업 > 파우더 > 루스파우더'</li><li>'LOreal Paris Magic Perfecting Base Face Primer by Studio Secrets Professional 0.50 oz (Pack of 3) One Color_One Size ssg > 뷰티 > 메이크업 > 베이스메이크업 > 메이크업베이스 ssg > 뷰티 > 메이크업 > 베이스메이크업 > 메이크업베이스'</li></ul> | | 4 | <ul><li>'라이콘 스트립 왁스 800g 12종 / lycon strip wax 800g 라이코 플렉스 800g (#M)홈>호주왁싱>라이콘 Naverstore > 화장품/미용 > 바디케어 > 제모제'</li><li>'잉글리쉬 페어 앤 프리지아 센티드 바디 파우더 신세계백화점/헤어/바디/핸드/풋/데오/바스트케어/바스트케어;(#M)SSG.COM/바디케어/기타바디용품 LOREAL > 
DepartmentSsg > 아틀리에 코롱 > Generic > 여자향수'</li><li>'빅 오버진 에너지 380g - 배쓰 밤/입욕제 (#M)위메프 > 생활·주방용품 > 바디/헤어 > 바디케어/워시/제모 > 입욕제 위메프 > 뷰티 > 바디/헤어 > 바디케어/워시/제모 > 입욕제'</li></ul> | | 11 | <ul><li>'비오템 - 오 비타미네 오드 뚜왈렛 스프레이 100ml/3.38oz LOREAL > Ssg > 비오템 > Branded > 비오템 LOREAL > Ssg > 비오템 > Branded > 비오템'</li><li>'[산타마리아노벨라]포푸리 바소 세라미카 멀티_F (#M)가구/인테리어>인테리어소품>아로마/캔들용품>초/향초 AD > Naverstore > smnovella브랜드스토어 > 전체상품'</li><li>'톰포드 오드우드 50ml LotteOn > 뷰티 > 향수 > 남녀공용향수 LotteOn > 뷰티 > 향수 > 남녀공용향수'</li></ul> | | 6 | <ul><li>'[온세일] 루스 파우더 휴대용기 LotteOn > 뷰티 > 베이스메이크업 > 파우더 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파우더'</li><li>'반영구재료 임뷰티 엠보니들 20pcs (1box) 1420(흰색) 홈>오늘의 타임특가!!;홈>전체상품;홈>니들;홈>한가위 추석 특가;홈>연말특가 이벤트;홈>원데이 특가상품 이벤트!;홈>주말 이벤트 특가!;(#M)홈>니들&색소 Naverstore > 화장품/미용 > 뷰티소품 > DIY화장품재료'</li><li>'마스터클래스 매니아 세트(브러쉬 2종+클리너) 아이M브러쉬_파운데이션브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬'</li></ul> | | 8 | <ul><li>'푸드어홀릭 알로하와이 블렌딩 오일 150ml/태닝오일 (#M)위메프 > 생활·주방용품 > 바디/헤어 > 바디케어/워시/제모 > 태닝용품 위메프 > 뷰티 > 바디/헤어 > 바디케어/워시/제모 > 태닝용품'</li><li>'푸드어홀릭 알로하와이 태닝 오일(150ml) 건강한 구릿빛 피부 - 2개 택배무료 - (#M)11st>바디케어>태닝용품>오일 11st > 뷰티 > 바디케어 > 태닝용품 > 오일'</li><li>'대용량 SNP 아쿠아 쿨링 선 스프레이 200ml (SPF50+) (#M)홈>선크림 Naverstore > 화장품/미용 > 선케어 > 선스프레이'</li></ul> | | 9 | <ul><li>'1+1 유리아쥬 오 떼르말 미스트 300ml 단품 (#M)화장품/향수>색조메이크업>립밤/립케어 Gmarket > 뷰티 > 화장품/향수 > 색조메이크업 > 립밤/립케어'</li><li>'메디힐 티트리 100 오일 247187 10ml × 1개 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트'</li><li>'[매장 정품] 미샤 비폴렌 리뉴 오일 30ml (#M)11st>스킨케어>오일>오일 11st > 뷰티 > 스킨케어 > 오일'</li></ul> | | 1 | <ul><li>'브로앤팁스 수퍼내추럴 톤업 선크림 70ml 70ml × 1개 (#M)쿠팡 홈>뷰티>스킨케어>선케어/태닝>선케어>선블록/선크림/선로션 Coupang > 뷰티 > 남성화장품 > 남성스킨케어 > 선케어'</li><li>'비레디 유브이 디펜스 선 베이스 프레쉬 50ml(SPF50+) (#M)홈>화장품/미용>남성화장품>선크림 Naverstore > 화장품/미용 > 남성화장품 > 선크림'</li><li>'랩시리즈 데일리 레스큐 에너자이징 에센스 150ml LotteOn > 뷰티 > 남성화장품 > 스킨 LotteOn > 뷰티 > 남성화장품 > 스킨'</li></ul> | | 7 | <ul><li>'P.픽싱마스카라 볼륨앤컬링 (#M)뷰티>화장품/향수>포인트메이크업>마스카라 CJmall > 뷰티 > 헤어/바디/미용기기 > 핸드/풋/덴탈케어 > 핸드케어'</li><li>'르 젤 
쑤르실 360 블롱 ssg > 뷰티 > 메이크업 > 아이메이크업 > 아이브로우 ssg > 뷰티 > 메이크업 > 아이메이크업 > 아이브로우'</li><li>'조말론 런던 핸드크림 50ml 라임 바질 앤 만다린 피오니 블러쉬 잉글리쉬 페어 프리지아 (핸드크림)라임 바질 앤 만다린 50ml (#M)홈>조말론 Naverstore > 화장품/미용 > 바디케어 > 핸드케어'</li></ul> | | 13 | <ul><li>'입생로랑(향수) 몽 파리 헤어 미스트 30ml MinSellAmount (#M)화장품/향수>네일케어>네일리무버 Gmarket > 뷰티 > 화장품/향수 > 네일케어 > 네일리무버'</li><li>'아윤채 더 리트리브 프로그램 부스팅 스타터 헤어토닉 500g × 4개 (#M)쿠팡 홈>뷰티>헤어>헤어에센스/오일>헤어토닉/스칼프 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어토닉/스칼프'</li><li>'아모스 04 컬링에센스 이엑스보라색150mlX3 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 헤어에센스/오일 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 헤어에센스/오일'</li></ul> | | 12 | <ul><li>'컬링에센스2X 숏스타일 150ml 3개_P083181261 옵션/컬링에센스2X 숏스타일 150ml x3 ssg > 뷰티 > 헤어/바디 > 헤어스타일링 > 염색약 ssg > 뷰티 > 헤어/바디 > 헤어스타일링 > 염색약'</li><li>'[후후맘] 미쟝센 스테이지콜렉션 컬링 에센스 2X 230ml 230ml × 1개 (#M)쿠팡 홈>싱글라이프>샤워/세안>헤어에센스 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어에센스'</li><li>'휘핑컬리펌(마일드한 발림감, 힘있고 탄력있는 컬) 1제 150ml 2제 150ml 홈>펌제;홈>브랜드>[아모레레서픽]아모스;(#M)홈>파마약♡매직약>파마약 Naverstore > 화장품/미용 > 헤어스타일링 > 파마약 > 웨이브'</li></ul> | | 2 | <ul><li>'베씨 시럽 컬러 젤 S04 로즈 립글로즈 × 1개 (#M)쿠팡 홈>뷰티>네일>젤네일>컬러 젤 Coupang > 뷰티 > 네일 > 젤네일 > 컬러 젤'</li><li>'[OPI][프로스파] 풋 파일 스트립 120 Grit DepartmentSsg > 명품화장품 > 메이크업 > 네일 DepartmentSsg > 명품화장품 > 메이크업 > 네일'</li><li>'OPI 리무버 엑스퍼트터치리무버 450ml OPI 리무버 폴리쉬리무버(그린) 960ml LotteOn > 뷰티 > 네일 > 네일케어 > 네일리무버 LotteOn > 뷰티 > 네일 > 네일케어 > 네일리무버'</li></ul> | | 0 | <ul><li>'다슈 데일리 에브리웨어 트래블 키트 3종 택1 02_클렌징 키트 홈>브랜드별>다슈;(#M)홈>세트 상품 Naverstore > 화장품/미용 > 클렌징 > 클렌징세트'</li><li>'2+1 한정수량)미니스 여행용 세트(포켓몬) MinSellAmount (#M)바디/헤어>헤어케어>샴푸/린스 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 샴푸/린스'</li><li>'[동구밭] 올바른 샴푸바/린스바/설거지바 제로웨이스트 [동구밭] 배러댄리퀴드 여행세트 (#M)헤어케어>샴푸>샴푸바 AD > 11st > 뷰티 > 헤어케어 > 샴푸 > 샴푸바'</li></ul> | | 10 | <ul><li>'에뛰드 마스카라 전용리무버 80ml MinSellAmount (#M)화장품/향수>클렌징/필링>립앤아이리무버 Gmarket > 뷰티 > 화장품/향수 > 클렌징/필링 > 립앤아이리무버'</li><li>'다나한 피부케어 효용 클렌징 크림 300ml 화장품 클렌 MinSellAmount (#M)화장품/향수>클렌징/필링>클렌징크림 Gmarket > 뷰티 > 화장품/향수 > 클렌징/필링 > 클렌징크림'</li><li>'대용량 곡물 페이셜 클렌징워터 500ml+파우치5매 (#M)뷰티>화장품/향수>클렌징>클렌징워터 CJmall > 뷰티 > 화장품/향수 > 클렌징 > 
클렌징워터'</li></ul> | | 3 | <ul><li>'바세린 수분 보습 영양 시트 마스크 팩 10매 바세린 수분 시트 마스크 10매 (#M)홈>화장품/미용>마스크/팩>마스크시트 Naverstore > 화장품/미용 > 마스크/팩 > 마스크시트'</li><li>'클리어 임프루브먼트 엑티브 차콜 마스크 75ml (#M)뷰티>화장품/향수>스킨케어>팩/마스크 CJmall > 뷰티 > 화장품/향수 > 스킨케어 > 팩/마스크'</li><li>'바세린 영양 시트 마스크23ml 10매X2 MinSellAmount (#M)화장품/향수>팩/마스크>마스크시트 Gmarket > 뷰티 > 화장품/향수 > 팩/마스크 > 마스크시트'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9742 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_main_item_top_bt") # Run inference preds = model("헤라 메이크업 픽서 (110ml) LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 10 | 21.9195 | 91 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 50 | | 1 | 514 | | 2 | 200 | | 3 | 200 | | 4 | 746 | | 5 | 350 | | 6 | 350 | | 7 | 600 | | 8 | 249 | | 9 | 550 | | 10 | 350 | | 11 | 200 | | 12 | 350 | | 13 | 496 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 100 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:------:|:-------------:|:---------------:| | 0.0001 | 1 | 0.3213 | - | | 0.0061 | 50 | 0.3448 | - | | 0.0123 | 100 | 0.3538 | - | | 0.0184 | 150 | 0.3436 | - | | 0.0246 | 200 | 0.3537 | - | | 0.0307 | 250 | 0.3403 | - | | 0.0369 | 300 | 0.3446 | - | | 0.0430 | 350 | 0.3437 | - | | 0.0492 | 400 | 0.3368 | - | | 0.0553 | 450 | 0.3344 | - | | 0.0615 | 500 | 0.3235 | - | | 0.0676 | 550 | 0.324 | - | | 0.0738 | 600 | 0.3107 | - | | 0.0799 | 650 | 0.332 | - | | 0.0861 | 700 | 0.3147 | - | | 0.0922 | 750 | 0.2985 | - | | 0.0984 | 800 | 0.2989 | - | | 0.1045 | 850 | 0.2885 | - | | 0.1107 | 900 | 0.2791 | - | | 0.1168 | 950 | 0.2787 | - | | 0.1230 | 1000 | 0.2647 | - | | 0.1291 | 1050 | 0.2546 | - | | 0.1353 | 1100 | 0.243 | - | | 0.1414 | 1150 | 0.232 | - | | 0.1475 | 1200 | 0.2172 | - | | 0.1537 | 1250 | 0.2114 | - | | 0.1598 | 1300 | 0.1996 | - | | 0.1660 | 1350 | 0.2017 | - | | 0.1721 | 1400 | 0.194 | - | | 0.1783 | 1450 | 0.1875 | - | | 0.1844 | 1500 | 0.1961 | - | | 0.1906 | 1550 | 0.1897 | - | | 0.1967 | 1600 | 0.1869 | - | | 
0.2029 | 1650 | 0.1877 | - | | 0.2090 | 1700 | 0.181 | - | | 0.2152 | 1750 | 0.1817 | - | | 0.2213 | 1800 | 0.185 | - | | 0.2275 | 1850 | 0.1818 | - | | 0.2336 | 1900 | 0.1761 | - | | 0.2398 | 1950 | 0.1788 | - | | 0.2459 | 2000 | 0.1782 | - | | 0.2521 | 2050 | 0.1728 | - | | 0.2582 | 2100 | 0.1718 | - | | 0.2644 | 2150 | 0.1724 | - | | 0.2705 | 2200 | 0.1668 | - | | 0.2767 | 2250 | 0.1664 | - | | 0.2828 | 2300 | 0.1699 | - | | 0.2889 | 2350 | 0.1663 | - | | 0.2951 | 2400 | 0.1599 | - | | 0.3012 | 2450 | 0.1656 | - | | 0.3074 | 2500 | 0.1615 | - | | 0.3135 | 2550 | 0.1571 | - | | 0.3197 | 2600 | 0.1579 | - | | 0.3258 | 2650 | 0.1545 | - | | 0.3320 | 2700 | 0.1578 | - | | 0.3381 | 2750 | 0.1501 | - | | 0.3443 | 2800 | 0.1614 | - | | 0.3504 | 2850 | 0.1521 | - | | 0.3566 | 2900 | 0.1546 | - | | 0.3627 | 2950 | 0.1552 | - | | 0.3689 | 3000 | 0.1509 | - | | 0.3750 | 3050 | 0.1442 | - | | 0.3812 | 3100 | 0.1435 | - | | 0.3873 | 3150 | 0.1456 | - | | 0.3935 | 3200 | 0.1429 | - | | 0.3996 | 3250 | 0.1443 | - | | 0.4058 | 3300 | 0.1432 | - | | 0.4119 | 3350 | 0.1389 | - | | 0.4180 | 3400 | 0.1387 | - | | 0.4242 | 3450 | 0.1336 | - | | 0.4303 | 3500 | 0.1387 | - | | 0.4365 | 3550 | 0.1331 | - | | 0.4426 | 3600 | 0.1361 | - | | 0.4488 | 3650 | 0.1379 | - | | 0.4549 | 3700 | 0.1287 | - | | 0.4611 | 3750 | 0.1355 | - | | 0.4672 | 3800 | 0.1279 | - | | 0.4734 | 3850 | 0.1256 | - | | 0.4795 | 3900 | 0.1225 | - | | 0.4857 | 3950 | 0.1279 | - | | 0.4918 | 4000 | 0.1198 | - | | 0.4980 | 4050 | 0.1216 | - | | 0.5041 | 4100 | 0.1181 | - | | 0.5103 | 4150 | 0.1185 | - | | 0.5164 | 4200 | 0.1166 | - | | 0.5226 | 4250 | 0.1121 | - | | 0.5287 | 4300 | 0.1161 | - | | 0.5349 | 4350 | 0.1146 | - | | 0.5410 | 4400 | 0.1111 | - | | 0.5472 | 4450 | 0.1064 | - | | 0.5533 | 4500 | 0.1076 | - | | 0.5594 | 4550 | 0.1035 | - | | 0.5656 | 4600 | 0.1016 | - | | 0.5717 | 4650 | 0.0984 | - | | 0.5779 | 4700 | 0.1051 | - | | 0.5840 | 4750 | 0.0962 | - | | 0.5902 | 4800 | 0.0928 | - | | 0.5963 | 4850 | 
0.094 | - |
| 0.6025 | 4900 | 0.0889 | - |
| 0.6086 | 4950 | 0.0881 | - |
| 0.6148 | 5000 | 0.0926 | - |
| 0.6209 | 5050 | 0.0975 | - |
| 0.6271 | 5100 | 0.0851 | - |
| 0.6332 | 5150 | 0.0916 | - |
| 0.6394 | 5200 | 0.087 | - |
| 0.6455 | 5250 | 0.086 | - |
| 0.6517 | 5300 | 0.0865 | - |
| 0.6578 | 5350 | 0.0804 | - |
| 0.6640 | 5400 | 0.0805 | - |
| 0.6701 | 5450 | 0.0815 | - |
| 0.6763 | 5500 | 0.078 | - |
| 0.6824 | 5550 | 0.079 | - |
| 0.6886 | 5600 | 0.0736 | - |
| 0.6947 | 5650 | 0.0763 | - |
| 0.7008 | 5700 | 0.0766 | - |
| 0.7070 | 5750 | 0.075 | - |
| 0.7131 | 5800 | 0.072 | - |
| 0.7193 | 5850 | 0.0731 | - |
| 0.7254 | 5900 | 0.071 | - |
| 0.7316 | 5950 | 0.0681 | - |
| 0.7377 | 6000 | 0.0682 | - |
| 0.7439 | 6050 | 0.0677 | - |
| 0.7500 | 6100 | 0.0673 | - |
| 0.7562 | 6150 | 0.059 | - |
| 0.7623 | 6200 | 0.0619 | - |
| 0.7685 | 6250 | 0.061 | - |
| 0.7746 | 6300 | 0.0598 | - |
| 0.7808 | 6350 | 0.0582 | - |
| 0.7869 | 6400 | 0.0559 | - |
| 0.7931 | 6450 | 0.0582 | - |
| 0.7992 | 6500 | 0.0592 | - |
| 0.8054 | 6550 | 0.0563 | - |
| 0.8115 | 6600 | 0.0565 | - |
| 0.8177 | 6650 | 0.0508 | - |
| 0.8238 | 6700 | 0.0535 | - |
| 0.8300 | 6750 | 0.0513 | - |
| 0.8361 | 6800 | 0.0492 | - |
| 0.8422 | 6850 | 0.0497 | - |
| 0.8484 | 6900 | 0.0546 | - |
| 0.8545 | 6950 | 0.0515 | - |
| 0.8607 | 7000 | 0.0488 | - |
| 0.8668 | 7050 | 0.0461 | - |
| 0.8730 | 7100 | 0.0431 | - |
| 0.8791 | 7150 | 0.0446 | - |
| 0.8853 | 7200 | 0.0447 | - |
| 0.8914 | 7250 | 0.0453 | - |
| 0.8976 | 7300 | 0.0424 | - |
| 0.9037 | 7350 | 0.0403 | - |
| 0.9099 | 7400 | 0.0437 | - |
| 0.9160 | 7450 | 0.0438 | - |
| 0.9222 | 7500 | 0.039 | - |
| 0.9283 | 7550 | 0.0411 | - |
| 0.9345 | 7600 | 0.0389 | - |
| 0.9406 | 7650 | 0.0356 | - |
| 0.9468 | 7700 | 0.0387 | - |
| 0.9529 | 7750 | 0.0363 | - |
| 0.9591 | 7800 | 0.0368 | - |
| 0.9652 | 7850 | 0.0366 | - |
| 0.9714 | 7900 | 0.0348 | - |
| 0.9775 | 7950 | 0.033 | - |
| 0.9836 | 8000 | 0.0345 | - |
| 0.9898 | 8050 | 0.032 | - |
| 0.9959 | 8100 | 0.0336 | - |
| 1.0021 | 8150 | 0.0314 | - |
| 1.0082 | 8200 | 0.0324 | - |
| 1.0144 | 8250 | 0.0298 | - |
| 1.0205 | 8300 | 0.0325 | - |
| 1.0267 | 8350 | 0.029 | - |
| 1.0328 | 8400 | 0.0268 | - |
| 1.0390 | 8450 | 0.028 | - |
| 1.0451 | 8500 | 0.0313 | - |
| 1.0513 | 8550 | 0.0281 | - |
| 1.0574 | 8600 | 0.0273 | - |
| 1.0636 | 8650 | 0.0276 | - |
| 1.0697 | 8700 | 0.0265 | - |
| 1.0759 | 8750 | 0.0274 | - |
| 1.0820 | 8800 | 0.0248 | - |
| 1.0882 | 8850 | 0.0256 | - |
| 1.0943 | 8900 | 0.0255 | - |
| 1.1005 | 8950 | 0.0279 | - |
| 1.1066 | 9000 | 0.0241 | - |
| 1.1128 | 9050 | 0.0249 | - |
| 1.1189 | 9100 | 0.0247 | - |
| 1.1250 | 9150 | 0.021 | - |
| 1.1312 | 9200 | 0.0222 | - |
| 1.1373 | 9250 | 0.0233 | - |
| 1.1435 | 9300 | 0.0255 | - |
| 1.1496 | 9350 | 0.0257 | - |
| 1.1558 | 9400 | 0.0228 | - |
| 1.1619 | 9450 | 0.0227 | - |
| 1.1681 | 9500 | 0.0223 | - |
| 1.1742 | 9550 | 0.022 | - |
| 1.1804 | 9600 | 0.0226 | - |
| 1.1865 | 9650 | 0.0197 | - |
| 1.1927 | 9700 | 0.0196 | - |
| 1.1988 | 9750 | 0.0224 | - |
| 1.2050 | 9800 | 0.023 | - |
| 1.2111 | 9850 | 0.02 | - |
| 1.2173 | 9900 | 0.0237 | - |
| 1.2234 | 9950 | 0.0193 | - |
| 1.2296 | 10000 | 0.0196 | - |
| 1.2357 | 10050 | 0.0202 | - |
| 1.2419 | 10100 | 0.0189 | - |
| 1.2480 | 10150 | 0.0216 | - |
| 1.2541 | 10200 | 0.0165 | - |
| 1.2603 | 10250 | 0.0186 | - |
| 1.2664 | 10300 | 0.0185 | - |
| 1.2726 | 10350 | 0.0236 | - |
| 1.2787 | 10400 | 0.019 | - |
| 1.2849 | 10450 | 0.0173 | - |
| 1.2910 | 10500 | 0.0208 | - |
| 1.2972 | 10550 | 0.0189 | - |
| 1.3033 | 10600 | 0.0182 | - |
| 1.3095 | 10650 | 0.0191 | - |
| 1.3156 | 10700 | 0.0205 | - |
| 1.3218 | 10750 | 0.0193 | - |
| 1.3279 | 10800 | 0.0185 | - |
| 1.3341 | 10850 | 0.0206 | - |
| 1.3402 | 10900 | 0.0187 | - |
| 1.3464 | 10950 | 0.0142 | - |
| 1.3525 | 11000 | 0.02 | - |
| 1.3587 | 11050 | 0.0161 | - |
| 1.3648 | 11100 | 0.0184 | - |
| 1.3710 | 11150 | 0.0158 | - |
| 1.3771 | 11200 | 0.0165 | - |
| 1.3833 | 11250 | 0.0182 | - |
| 1.3894 | 11300 | 0.0159 | - |
| 1.3955 | 11350 | 0.016 | - |
| 1.4017 | 11400 | 0.0185 | - |
| 1.4078 | 11450 | 0.0174 | - |
| 1.4140 | 11500 | 0.0157 | - |
| 1.4201 | 11550 | 0.017 | - |
| 1.4263 | 11600 | 0.0159 | - |
| 1.4324 | 11650 | 0.0173 | - |
| 1.4386 | 11700 | 0.0152 | - |
| 1.4447 | 11750 | 0.016 | - |
| 1.4509 | 11800 | 0.0135 | - |
| 1.4570 | 11850 | 0.0146 | - |
| 1.4632 | 11900 | 0.0122 | - |
| 1.4693 | 11950 | 0.0156 | - |
| 1.4755 | 12000 | 0.0132 | - |
| 1.4816 | 12050 | 0.0137 | - |
| 1.4878 | 12100 | 0.0157 | - |
| 1.4939 | 12150 | 0.0171 | - |
| 1.5001 | 12200 | 0.0155 | - |
| 1.5062 | 12250 | 0.0135 | - |
| 1.5124 | 12300 | 0.0119 | - |
| 1.5185 | 12350 | 0.0132 | - |
| 1.5247 | 12400 | 0.0139 | - |
| 1.5308 | 12450 | 0.0137 | - |
| 1.5369 | 12500 | 0.0156 | - |
| 1.5431 | 12550 | 0.0158 | - |
| 1.5492 | 12600 | 0.0165 | - |
| 1.5554 | 12650 | 0.0142 | - |
| 1.5615 | 12700 | 0.0148 | - |
| 1.5677 | 12750 | 0.0122 | - |
| 1.5738 | 12800 | 0.0129 | - |
| 1.5800 | 12850 | 0.0116 | - |
| 1.5861 | 12900 | 0.0119 | - |
| 1.5923 | 12950 | 0.0096 | - |
| 1.5984 | 13000 | 0.0119 | - |
| 1.6046 | 13050 | 0.0142 | - |
| 1.6107 | 13100 | 0.0081 | - |
| 1.6169 | 13150 | 0.0117 | - |
| 1.6230 | 13200 | 0.0102 | - |
| 1.6292 | 13250 | 0.0105 | - |
| 1.6353 | 13300 | 0.0115 | - |
| 1.6415 | 13350 | 0.01 | - |
| 1.6476 | 13400 | 0.0093 | - |
| 1.6538 | 13450 | 0.0097 | - |
| 1.6599 | 13500 | 0.0113 | - |
| 1.6661 | 13550 | 0.0086 | - |
| 1.6722 | 13600 | 0.0118 | - |
| 1.6783 | 13650 | 0.0096 | - |
| 1.6845 | 13700 | 0.0082 | - |
| 1.6906 | 13750 | 0.0095 | - |
| 1.6968 | 13800 | 0.0102 | - |
| 1.7029 | 13850 | 0.0118 | - |
| 1.7091 | 13900 | 0.0112 | - |
| 1.7152 | 13950 | 0.009 | - |
| 1.7214 | 14000 | 0.0111 | - |
| 1.7275 | 14050 | 0.0093 | - |
| 1.7337 | 14100 | 0.008 | - |
| 1.7398 | 14150 | 0.0084 | - |
| 1.7460 | 14200 | 0.0082 | - |
| 1.7521 | 14250 | 0.0088 | - |
| 1.7583 | 14300 | 0.0122 | - |
| 1.7644 | 14350 | 0.0087 | - |
| 1.7706 | 14400 | 0.0084 | - |
| 1.7767 | 14450 | 0.0073 | - |
| 1.7829 | 14500 | 0.0079 | - |
| 1.7890 | 14550 | 0.0064 | - |
| 1.7952 | 14600 | 0.0069 | - |
| 1.8013 | 14650 | 0.0083 | - |
| 1.8075 | 14700 | 0.0071 | - |
| 1.8136 | 14750 | 0.0066 | - |
| 1.8197 | 14800 | 0.0086 | - |
| 1.8259 | 14850 | 0.0091 | - |
| 1.8320 | 14900 | 0.008 | - |
| 1.8382 | 14950 | 0.0071 | - |
| 1.8443 | 15000 | 0.0072 | - |
| 1.8505 | 15050 | 0.0077 | - |
| 1.8566 | 15100 | 0.0086 | - |
| 1.8628 | 15150 | 0.0066 | - |
| 1.8689 | 15200 | 0.007 | - |
| 1.8751 | 15250 | 0.0065 | - |
| 1.8812 | 15300 | 0.0073 | - |
| 1.8874 | 15350 | 0.0074 | - |
| 1.8935 | 15400 | 0.0063 | - |
| 1.8997 | 15450 | 0.0064 | - |
| 1.9058 | 15500 | 0.0067 | - |
| 1.9120 | 15550 | 0.0081 | - |
| 1.9181 | 15600 | 0.0083 | - |
| 1.9243 | 15650 | 0.0068 | - |
| 1.9304 | 15700 | 0.005 | - |
| 1.9366 | 15750 | 0.0055 | - |
| 1.9427 | 15800 | 0.0061 | - |
| 1.9489 | 15850 | 0.0057 | - |
| 1.9550 | 15900 | 0.0056 | - |
| 1.9611 | 15950 | 0.0057 | - |
| 1.9673 | 16000 | 0.0059 | - |
| 1.9734 | 16050 | 0.0046 | - |
| 1.9796 | 16100 | 0.0077 | - |
| 1.9857 | 16150 | 0.0067 | - |
| 1.9919 | 16200 | 0.0067 | - |
| 1.9980 | 16250 | 0.007 | - |
| 2.0042 | 16300 | 0.0055 | - |
| 2.0103 | 16350 | 0.0049 | - |
| 2.0165 | 16400 | 0.0075 | - |
| 2.0226 | 16450 | 0.0064 | - |
| 2.0288 | 16500 | 0.0062 | - |
| 2.0349 | 16550 | 0.006 | - |
| 2.0411 | 16600 | 0.006 | - |
| 2.0472 | 16650 | 0.0054 | - |
| 2.0534 | 16700 | 0.0056 | - |
| 2.0595 | 16750 | 0.0047 | - |
| 2.0657 | 16800 | 0.0068 | - |
| 2.0718 | 16850 | 0.0056 | - |
| 2.0780 | 16900 | 0.0055 | - |
| 2.0841 | 16950 | 0.0056 | - |
| 2.0902 | 17000 | 0.0046 | - |
| 2.0964 | 17050 | 0.0048 | - |
| 2.1025 | 17100 | 0.0067 | - |
| 2.1087 | 17150 | 0.0038 | - |
| 2.1148 | 17200 | 0.0051 | - |
| 2.1210 | 17250 | 0.004 | - |
| 2.1271 | 17300 | 0.0054 | - |
| 2.1333 | 17350 | 0.0053 | - |
| 2.1394 | 17400 | 0.0041 | - |
| 2.1456 | 17450 | 0.0048 | - |
| 2.1517 | 17500 | 0.0032 | - |
| 2.1579 | 17550 | 0.0043 | - |
| 2.1640 | 17600 | 0.0046 | - |
| 2.1702 | 17650 | 0.0047 | - |
| 2.1763 | 17700 | 0.0035 | - |
| 2.1825 | 17750 | 0.0033 | - |
| 2.1886 | 17800 | 0.0043 | - |
| 2.1948 | 17850 | 0.0042 | - |
| 2.2009 | 17900 | 0.0033 | - |
| 2.2071 | 17950 | 0.0059 | - |
| 2.2132 | 18000 | 0.0054 | - |
| 2.2194 | 18050 | 0.0042 | - |
| 2.2255 | 18100 | 0.0034 | - |
| 2.2316 | 18150 | 0.0043 | - |
| 2.2378 | 18200 | 0.0052 | - |
| 2.2439 | 18250 | 0.005 | - |
| 2.2501 | 18300 | 0.0061 | - |
| 2.2562 | 18350 | 0.0057 | - |
| 2.2624 | 18400 | 0.0043 | - |
| 2.2685 | 18450 | 0.0044 | - |
| 2.2747 | 18500 | 0.0038 | - |
| 2.2808 | 18550 | 0.0045 | - |
| 2.2870 | 18600 | 0.0043 | - |
| 2.2931 | 18650 | 0.0042 | - |
| 2.2993 | 18700 | 0.0037 | - |
| 2.3054 | 18750 | 0.0042 | - |
| 2.3116 | 18800 | 0.0039 | - |
| 2.3177 | 18850 | 0.0041 | - |
| 2.3239 | 18900 | 0.004 | - |
| 2.3300 | 18950 | 0.0033 | - |
| 2.3362 | 19000 | 0.003 | - |
| 2.3423 | 19050 | 0.0044 | - |
| 2.3485 | 19100 | 0.003 | - |
| 2.3546 | 19150 | 0.0039 | - |
| 2.3608 | 19200 | 0.0019 | - |
| 2.3669 | 19250 | 0.0025 | - |
| 2.3730 | 19300 | 0.0035 | - |
| 2.3792 | 19350 | 0.0032 | - |
| 2.3853 | 19400 | 0.0033 | - |
| 2.3915 | 19450 | 0.004 | - |
| 2.3976 | 19500 | 0.0042 | - |
| 2.4038 | 19550 | 0.0041 | - |
| 2.4099 | 19600 | 0.0044 | - |
| 2.4161 | 19650 | 0.0024 | - |
| 2.4222 | 19700 | 0.002 | - |
| 2.4284 | 19750 | 0.0035 | - |
| 2.4345 | 19800 | 0.0022 | - |
| 2.4407 | 19850 | 0.0047 | - |
| 2.4468 | 19900 | 0.003 | - |
| 2.4530 | 19950 | 0.0028 | - |
| 2.4591 | 20000 | 0.0024 | - |
| 2.4653 | 20050 | 0.0018 | - |
| 2.4714 | 20100 | 0.0027 | - |
| 2.4776 | 20150 | 0.0022 | - |
| 2.4837 | 20200 | 0.0023 | - |
| 2.4899 | 20250 | 0.0024 | - |
| 2.4960 | 20300 | 0.0022 | - |
| 2.5022 | 20350 | 0.0023 | - |
| 2.5083 | 20400 | 0.0022 | - |
| 2.5144 | 20450 | 0.0018 | - |
| 2.5206 | 20500 | 0.0025 | - |
| 2.5267 | 20550 | 0.0035 | - |
| 2.5329 | 20600 | 0.0024 | - |
| 2.5390 | 20650 | 0.0028 | - |
| 2.5452 | 20700 | 0.0021 | - |
| 2.5513 | 20750 | 0.0019 | - |
| 2.5575 | 20800 | 0.0041 | - |
| 2.5636 | 20850 | 0.0028 | - |
| 2.5698 | 20900 | 0.0013 | - |
| 2.5759 | 20950 | 0.0034 | - |
| 2.5821 | 21000 | 0.0023 | - |
| 2.5882 | 21050 | 0.0023 | - |
| 2.5944 | 21100 | 0.0031 | - |
| 2.6005 | 21150 | 0.0017 | - |
| 2.6067 | 21200 | 0.0021 | - |
| 2.6128 | 21250 | 0.003 | - |
| 2.6190 | 21300 | 0.003 | - |
| 2.6251 | 21350 | 0.0022 | - |
| 2.6313 | 21400 | 0.0016 | - |
| 2.6374 | 21450 | 0.0013 | - |
| 2.6436 | 21500 | 0.0005 | - |
| 2.6497 | 21550 | 0.0014 | - |
| 2.6558 | 21600 | 0.0025 | - |
| 2.6620 | 21650 | 0.001 | - |
| 2.6681 | 21700 | 0.001 | - |
| 2.6743 | 21750 | 0.0011 | - |
| 2.6804 | 21800 | 0.0008 | - |
| 2.6866 | 21850 | 0.0019 | - |
| 2.6927 | 21900 | 0.0024 | - |
| 2.6989 | 21950 | 0.0024 | - |
| 2.7050 | 22000 | 0.0029 | - |
| 2.7112 | 22050 | 0.0026 | - |
| 2.7173 | 22100 | 0.0028 | - |
| 2.7235 | 22150 | 0.0022 | - |
| 2.7296 | 22200 | 0.0038 | - |
| 2.7358 | 22250 | 0.0031 | - |
| 2.7419 | 22300 | 0.0031 | - |
| 2.7481 | 22350 | 0.0038 | - |
| 2.7542 | 22400 | 0.003 | - |
| 2.7604 | 22450 | 0.0033 | - |
| 2.7665 | 22500 | 0.0028 | - |
| 2.7727 | 22550 | 0.0022 | - |
| 2.7788 | 22600 | 0.0032 | - |
| 2.7850 | 22650 | 0.0027 | - |
| 2.7911 | 22700 | 0.0022 | - |
| 2.7972 | 22750 | 0.0022 | - |
| 2.8034 | 22800 | 0.0025 | - |
| 2.8095 | 22850 | 0.0016 | - |
| 2.8157 | 22900 | 0.0015 | - |
| 2.8218 | 22950 | 0.0016 | - |
| 2.8280 | 23000 | 0.0017 | - |
| 2.8341 | 23050 | 0.0018 | - |
| 2.8403 | 23100 | 0.0012 | - |
| 2.8464 | 23150 | 0.0009 | - |
| 2.8526 | 23200 | 0.0011 | - |
| 2.8587 | 23250 | 0.0008 | - |
| 2.8649 | 23300 | 0.0004 | - |
| 2.8710 | 23350 | 0.0009 | - |
| 2.8772 | 23400 | 0.0015 | - |
| 2.8833 | 23450 | 0.0011 | - |
| 2.8895 | 23500 | 0.0011 | - |
| 2.8956 | 23550 | 0.0011 | - |
| 2.9018 | 23600 | 0.0011 | - |
| 2.9079 | 23650 | 0.0011 | - |
| 2.9141 | 23700 | 0.0008 | - |
| 2.9202 | 23750 | 0.0008 | - |
| 2.9263 | 23800 | 0.0012 | - |
| 2.9325 | 23850 | 0.0028 | - |
| 2.9386 | 23900 | 0.0012 | - |
| 2.9448 | 23950 | 0.0026 | - |
| 2.9509 | 24000 | 0.0017 | - |
| 2.9571 | 24050 | 0.0026 | - |
| 2.9632 | 24100 | 0.0016 | - |
| 2.9694 | 24150 | 0.0016 | - |
| 2.9755 | 24200 | 0.0055 | - |
| 2.9817 | 24250 | 0.0039 | - |
| 2.9878 | 24300 | 0.003 | - |
| 2.9940 | 24350 | 0.0031 | - |
| 3.0001 | 24400 | 0.0026 | - |
| 3.0063 | 24450 | 0.0024 | - |
| 3.0124 | 24500 | 0.0021 | - |
| 3.0186 | 24550 | 0.0019 | - |
| 3.0247 | 24600 | 0.0025 | - |
| 3.0309 | 24650 | 0.0036 | - |
| 3.0370 | 24700 | 0.0023 | - |
| 3.0432 | 24750 | 0.0014 | - |
| 3.0493 | 24800 | 0.002 | - |
| 3.0555 | 24850 | 0.0022 | - |
| 3.0616 | 24900 | 0.0019 | - |
| 3.0677 | 24950 | 0.0022 | - |
| 3.0739 | 25000 | 0.0022 | - |
| 3.0800 | 25050 | 0.0022 | - |
| 3.0862 | 25100 | 0.0022 | - |
| 3.0923 | 25150 | 0.001 | - |
| 3.0985 | 25200 | 0.0027 | - |
| 3.1046 | 25250 | 0.0018 | - |
| 3.1108 | 25300 | 0.002 | - |
| 3.1169 | 25350 | 0.0021 | - |
| 3.1231 | 25400 | 0.0014 | - |
| 3.1292 | 25450 | 0.0017 | - |
| 3.1354 | 25500 | 0.001 | - |
| 3.1415 | 25550 | 0.0015 | - |
| 3.1477 | 25600 | 0.0013 | - |
| 3.1538 | 25650 | 0.0015 | - |
| 3.1600 | 25700 | 0.0016 | - |
| 3.1661 | 25750 | 0.0007 | - |
| 3.1723 | 25800 | 0.001 | - |
| 3.1784 | 25850 | 0.0005 | - |
| 3.1846 | 25900 | 0.001 | - |
| 3.1907 | 25950 | 0.0007 | - |
| 3.1969 | 26000 | 0.0017 | - |
| 3.2030 | 26050 | 0.0014 | - |
| 3.2091 | 26100 | 0.0008 | - |
| 3.2153 | 26150 | 0.0011 | - |
| 3.2214 | 26200 | 0.002 | - |
| 3.2276 | 26250 | 0.0018 | - |
| 3.2337 | 26300 | 0.0013 | - |
| 3.2399 | 26350 | 0.0013 | - |
| 3.2460 | 26400 | 0.0014 | - |
| 3.2522 | 26450 | 0.0021 | - |
| 3.2583 | 26500 | 0.0015 | - |
| 3.2645 | 26550 | 0.0021 | - |
| 3.2706 | 26600 | 0.0021 | - |
| 3.2768 | 26650 | 0.0018 | - |
| 3.2829 | 26700 | 0.0036 | - |
| 3.2891 | 26750 | 0.0026 | - |
| 3.2952 | 26800 | 0.0023 | - |
| 3.3014 | 26850 | 0.003 | - |
| 3.3075 | 26900 | 0.0018 | - |
| 3.3137 | 26950 | 0.0024 | - |
| 3.3198 | 27000 | 0.0013 | - |
| 3.3260 | 27050 | 0.0016 | - |
| 3.3321 | 27100 | 0.0011 | - |
| 3.3383 | 27150 | 0.0014 | - |
| 3.3444 | 27200 | 0.0018 | - |
| 3.3505 | 27250 | 0.0017 | - |
| 3.3567 | 27300 | 0.0014 | - |
| 3.3628 | 27350 | 0.0019 | - |
| 3.3690 | 27400 | 0.0016 | - |
| 3.3751 | 27450 | 0.0012 | - |
| 3.3813 | 27500 | 0.0008 | - |
| 3.3874 | 27550 | 0.0015 | - |
| 3.3936 | 27600 | 0.0007 | - |
| 3.3997 | 27650 | 0.0001 | - |
| 3.4059 | 27700 | 0.0007 | - |
| 3.4120 | 27750 | 0.0012 | - |
| 3.4182 | 27800 | 0.0015 | - |
| 3.4243 | 27850 | 0.0014 | - |
| 3.4305 | 27900 | 0.002 | - |
| 3.4366 | 27950 | 0.0011 | - |
| 3.4428 | 28000 | 0.0012 | - |
| 3.4489 | 28050 | 0.0009 | - |
| 3.4551 | 28100 | 0.001 | - |
| 3.4612 | 28150 | 0.0015 | - |
| 3.4674 | 28200 | 0.0019 | - |
| 3.4735 | 28250 | 0.0018 | - |
| 3.4797 | 28300 | 0.0016 | - |
| 3.4858 | 28350 | 0.002 | - |
| 3.4919 | 28400 | 0.0032 | - |
| 3.4981 | 28450 | 0.0044 | - |
| 3.5042 | 28500 | 0.0035 | - |
| 3.5104 | 28550 | 0.0026 | - |
| 3.5165 | 28600 | 0.0032 | - |
| 3.5227 | 28650 | 0.0034 | - |
| 3.5288 | 28700 | 0.0033 | - |
| 3.5350 | 28750 | 0.0037 | - |
| 3.5411 | 28800 | 0.0028 | - |
| 3.5473 | 28850 | 0.0025 | - |
| 3.5534 | 28900 | 0.0021 | - |
| 3.5596 | 28950 | 0.0022 | - |
| 3.5657 | 29000 | 0.0018 | - |
| 3.5719 | 29050 | 0.0022 | - |
| 3.5780 | 29100 | 0.0016 | - |
| 3.5842 | 29150 | 0.0009 | - |
| 3.5903 | 29200 | 0.0009 | - |
| 3.5965 | 29250 | 0.0017 | - |
| 3.6026 | 29300 | 0.0012 | - |
| 3.6088 | 29350 | 0.0012 | - |
| 3.6149 | 29400 | 0.0015 | - |
| 3.6211 | 29450 | 0.0009 | - |
| 3.6272 | 29500 | 0.0022 | - |
| 3.6333 | 29550 | 0.0021 | - |
| 3.6395 | 29600 | 0.001 | - |
| 3.6456 | 29650 | 0.0028 | - |
| 3.6518 | 29700 | 0.0016 | - |
| 3.6579 | 29750 | 0.0022 | - |
| 3.6641 | 29800 | 0.0017 | - |
| 3.6702 | 29850 | 0.0018 | - |
| 3.6764 | 29900 | 0.0017 | - |
| 3.6825 | 29950 | 0.0011 | - |
| 3.6887 | 30000 | 0.0005 | - |
| 3.6948 | 30050 | 0.0012 | - |
| 3.7010 | 30100 | 0.0014 | - |
| 3.7071 | 30150 | 0.0017 | - |
| 3.7133 | 30200 | 0.0006 | - |
| 3.7194 | 30250 | 0.0005 | - |
| 3.7256 | 30300 | 0.0007 | - |
| 3.7317 | 30350 | 0.0016 | - |
| 3.7379 | 30400 | 0.0019 | - |
| 3.7440 | 30450 | 0.0016 | - |
| 3.7502 | 30500 | 0.0016 | - |
| 3.7563 | 30550 | 0.0011 | - |
| 3.7624 | 30600 | 0.0026 | - |
| 3.7686 | 30650 | 0.0012 | - |
| 3.7747 | 30700 | 0.0014 | - |
| 3.7809 | 30750 | 0.0012 | - |
| 3.7870 | 30800 | 0.002 | - |
| 3.7932 | 30850 | 0.0014 | - |
| 3.7993 | 30900 | 0.0012 | - |
| 3.8055 | 30950 | 0.001 | - |
| 3.8116 | 31000 | 0.0017 | - |
| 3.8178 | 31050 | 0.0013 | - |
| 3.8239 | 31100 | 0.0017 | - |
| 3.8301 | 31150 | 0.0018 | - |
| 3.8362 | 31200 | 0.0009 | - |
| 3.8424 | 31250 | 0.0018 | - |
| 3.8485 | 31300 | 0.0019 | - |
| 3.8547 | 31350 | 0.0017 | - |
| 3.8608 | 31400 | 0.0013 | - |
| 3.8670 | 31450 | 0.0046 | - |
| 3.8731 | 31500 | 0.0026 | - |
| 3.8793 | 31550 | 0.0019 | - |
| 3.8854 | 31600 | 0.0019 | - |
| 3.8916 | 31650 | 0.0014 | - |
| 3.8977 | 31700 | 0.0016 | - |
| 3.9038 | 31750 | 0.0025 | - |
| 3.9100 | 31800 | 0.0034 | - |
| 3.9161 | 31850 | 0.002 | - |
| 3.9223 | 31900 | 0.0019 | - |
| 3.9284 | 31950 | 0.001 | - |
| 3.9346 | 32000 | 0.0024 | - |
| 3.9407 | 32050 | 0.0016 | - |
| 3.9469 | 32100 | 0.0015 | - |
| 3.9530 | 32150 | 0.0011 | - |
| 3.9592 | 32200 | 0.0016 | - |
| 3.9653 | 32250 | 0.0012 | - |
| 3.9715 | 32300 | 0.001 | - |
| 3.9776 | 32350 | 0.0011 | - |
| 3.9838 | 32400 | 0.0016 | - |
| 3.9899 | 32450 | 0.0026 | - |
| 3.9961 | 32500 | 0.0031 | - |
| 4.0022 | 32550 | 0.0015 | - |
| 4.0084 | 32600 | 0.0024 | - |
| 4.0145 | 32650 | 0.0021 | - |
| 4.0207 | 32700 | 0.0016 | - |
| 4.0268 | 32750 | 0.0023 | - |
| 4.0330 | 32800 | 0.0013 | - |
| 4.0391 | 32850 | 0.0019 | - |
| 4.0452 | 32900 | 0.0008 | - |
| 4.0514 | 32950 | 0.0019 | - |
| 4.0575 | 33000 | 0.0008 | - |
| 4.0637 | 33050 | 0.0012 | - |
| 4.0698 | 33100 | 0.001 | - |
| 4.0760 | 33150 | 0.0015 | - |
| 4.0821 | 33200 | 0.0005 | - |
| 4.0883 | 33250 | 0.0012 | - |
| 4.0944 | 33300 | 0.0012 | - |
| 4.1006 | 33350 | 0.0005 | - |
| 4.1067 | 33400 | 0.0006 | - |
| 4.1129 | 33450 | 0.0007 | - |
| 4.1190 | 33500 | 0.0011 | - |
| 4.1252 | 33550 | 0.0011 | - |
| 4.1313 | 33600 | 0.0015 | - |
| 4.1375 | 33650 | 0.0011 | - |
| 4.1436 | 33700 | 0.0012 | - |
| 4.1498 | 33750 | 0.0007 | - |
| 4.1559 | 33800 | 0.0006 | - |
| 4.1621 | 33850 | 0.0007 | - |
| 4.1682 | 33900 | 0.0007 | - |
| 4.1744 | 33950 | 0.0005 | - |
| 4.1805 | 34000 | 0.0006 | - |
| 4.1866 | 34050 | 0.001 | - |
| 4.1928 | 34100 | 0.0009 | - |
| 4.1989 | 34150 | 0.0004 | - |
| 4.2051 | 34200 | 0.0003 | - |
| 4.2112 | 34250 | 0.0008 | - |
| 4.2174 | 34300 | 0.0009 | - |
| 4.2235 | 34350 | 0.0002 | - |
| 4.2297 | 34400 | 0.0014 | - |
| 4.2358 | 34450 | 0.0002 | - |
| 4.2420 | 34500 | 0.0003 | - |
| 4.2481 | 34550 | 0.0005 | - |
| 4.2543 | 34600 | 0.0025 | - |
| 4.2604 | 34650 | 0.0005 | - |
| 4.2666 | 34700 | 0.0004 | - |
| 4.2727 | 34750 | 0.0009 | - |
| 4.2789 | 34800 | 0.0005 | - |
| 4.2850 | 34850 | 0.0005 | - |
| 4.2912 | 34900 | 0.0005 | - |
| 4.2973 | 34950 | 0.0008 | - |
| 4.3035 | 35000 | 0.0012 | - |
| 4.3096 | 35050 | 0.0009 | - |
| 4.3158 | 35100 | 0.0004 | - |
| 4.3219 | 35150 | 0.0016 | - |
| 4.3280 | 35200 | 0.0008 | - |
| 4.3342 | 35250 | 0.0009 | - |
| 4.3403 | 35300 | 0.0011 | - |
| 4.3465 | 35350 | 0.0005 | - |
| 4.3526 | 35400 | 0.0004 | - |
| 4.3588 | 35450 | 0.004 | - |
| 4.3649 | 35500 | 0.0023 | - |
| 4.3711 | 35550 | 0.002 | - |
| 4.3772 | 35600 | 0.0018 | - |
| 4.3834 | 35650 | 0.0022 | - |
| 4.3895 | 35700 | 0.0027 | - |
| 4.3957 | 35750 | 0.0022 | - |
| 4.4018 | 35800 | 0.0027 | - |
| 4.4080 | 35850 | 0.003 | - |
| 4.4141 | 35900 | 0.0017 | - |
| 4.4203 | 35950 | 0.0009 | - |
| 4.4264 | 36000 | 0.0027 | - |
| 4.4326 | 36050 | 0.0018 | - |
| 4.4387 | 36100 | 0.0016 | - |
| 4.4449 | 36150 | 0.0013 | - |
| 4.4510 | 36200 | 0.0019 | - |
| 4.4571 | 36250 | 0.0011 | - |
| 4.4633 | 36300 | 0.0015 | - |
| 4.4694 | 36350 | 0.0009 | - |
| 4.4756 | 36400 | 0.0007 | - |
| 4.4817 | 36450 | 0.0011 | - |
| 4.4879 | 36500 | 0.0017 | - |
| 4.4940 | 36550 | 0.001 | - |
| 4.5002 | 36600 | 0.0015 | - |
| 4.5063 | 36650 | 0.0015 | - |
| 4.5125 | 36700 | 0.0009 | - |
| 4.5186 | 36750 | 0.0005 | - |
| 4.5248 | 36800 | 0.0021 | - |
| 4.5309 | 36850 | 0.0008 | - |
| 4.5371 | 36900 | 0.0008 | - |
| 4.5432 | 36950 | 0.001 | - |
| 4.5494 | 37000 | 0.0006 | - |
| 4.5555 | 37050 | 0.0009 | - |
| 4.5617 | 37100 | 0.0005 | - |
| 4.5678 | 37150 | 0.0004 | - |
| 4.5740 | 37200 | 0.001 | - |
| 4.5801 | 37250 | 0.0013 | - |
| 4.5863 | 37300 | 0.0016 | - |
| 4.5924 | 37350 | 0.0017 | - |
| 4.5985 | 37400 | 0.0008 | - |
| 4.6047 | 37450 | 0.0005 | - |
| 4.6108 | 37500 | 0.0009 | - |
| 4.6170 | 37550 | 0.0018 | - |
| 4.6231 | 37600 | 0.0009 | - |
| 4.6293 | 37650 | 0.0006 | - |
| 4.6354 | 37700 | 0.0006 | - |
| 4.6416 | 37750 | 0.0012 | - |
| 4.6477 | 37800 | 0.0003 | - |
| 4.6539 | 37850 | 0.0003 | - |
| 4.6600 | 37900 | 0.0006 | - |
| 4.6662 | 37950 | 0.0007 | - |
| 4.6723 | 38000 | 0.0006 | - |
| 4.6785 | 38050 | 0.0011 | - |
| 4.6846 | 38100 | 0.0014 | - |
| 4.6908 | 38150 | 0.0015 | - |
| 4.6969 | 38200 | 0.0012 | - |
| 4.7031 | 38250 | 0.0021 | - |
| 4.7092 | 38300 | 0.0016 | - |
| 4.7154 | 38350 | 0.0016 | - |
| 4.7215 | 38400 | 0.0011 | - |
| 4.7277 | 38450 | 0.0009 | - |
| 4.7338 | 38500 | 0.0014 | - |
| 4.7399 | 38550 | 0.0008 | - |
| 4.7461 | 38600 | 0.0013 | - |
| 4.7522 | 38650 | 0.0007 | - |
| 4.7584 | 38700 | 0.0015 | - |
| 4.7645 | 38750 | 0.0009 | - |
| 4.7707 | 38800 | 0.0006 | - |
| 4.7768 | 38850 | 0.0009 | - |
| 4.7830 | 38900 | 0.0009 | - |
| 4.7891 | 38950 | 0.0009 | - |
| 4.7953 | 39000 | 0.0015 | - |
| 4.8014 | 39050 | 0.0015 | - |
| 4.8076 | 39100 | 0.001 | - |
| 4.8137 | 39150 | 0.0011 | - |
| 4.8199 | 39200 | 0.0007 | - |
| 4.8260 | 39250 | 0.0007 | - |
| 4.8322 | 39300 | 0.0005 | - |
| 4.8383 | 39350 | 0.0005 | - |
| 4.8445 | 39400 | 0.0002 | - |
| 4.8506 | 39450 | 0.0009 | - |
| 4.8568 | 39500 | 0.0004 | - |
| 4.8629 | 39550 | 0.0016 | - |
| 4.8691 | 39600 | 0.0014 | - |
| 4.8752 | 39650 | 0.0017 | - |
| 4.8813 | 39700 | 0.0018 | - |
| 4.8875 | 39750 | 0.0003 | - |
| 4.8936 | 39800 | 0.0017 | - |
| 4.8998 | 39850 | 0.0007 | - |
| 4.9059 | 39900 | 0.0009 | - |
| 4.9121 | 39950 | 0.0011 | - |
| 4.9182 | 40000 | 0.0014 | - |
| 4.9244 | 40050 | 0.0019 | - |
| 4.9305 | 40100 | 0.001 | - |
| 4.9367 | 40150 | 0.0006 | - |
| 4.9428 | 40200 | 0.0009 | - |
| 4.9490 | 40250 | 0.0011 | - |
| 4.9551 | 40300 | 0.0017 | - |
| 4.9613 | 40350 | 0.0007 | - |
| 4.9674 | 40400 | 0.0008 | - |
| 4.9736 | 40450 | 0.0008 | - |
| 4.9797 | 40500 | 0.0013 | - |
| 4.9859 | 40550 | 0.0014 | - |
| 4.9920 | 40600 | 0.0029 | - |
| 4.9982 | 40650 | 0.0018 | - |
| 5.0043 | 40700 | 0.0017 | - |
| 5.0105 | 40750 | 0.0013 | - |
| 5.0166 | 40800 | 0.0015 | - |
| 5.0227 | 40850 | 0.0011 | - |
| 5.0289 | 40900 | 0.001 | - |
| 5.0350 | 40950 | 0.0012 | - |
| 5.0412 | 41000 | 0.001 | - |
| 5.0473 | 41050 | 0.001 | - |
| 5.0535 | 41100 | 0.0007 | - |
| 5.0596 | 41150 | 0.0009 | - |
| 5.0658 | 41200 | 0.0007 | - |
| 5.0719 | 41250 | 0.0015 | - |
| 5.0781 | 41300 | 0.0004 | - |
| 5.0842 | 41350 | 0.0011 | - |
| 5.0904 | 41400 | 0.0005 | - |
| 5.0965 | 41450 | 0.0005 | - |
| 5.1027 | 41500 | 0.0008 | - |
| 5.1088 | 41550 | 0.0013 | - |
| 5.1150 | 41600 | 0.0009 | - |
| 5.1211 | 41650 | 0.0003 | - |
| 5.1273 | 41700 | 0.0007 | - |
| 5.1334 | 41750 | 0.0003 | - |
| 5.1396 | 41800 | 0.001 | - |
| 5.1457 | 41850 | 0.0009 | - |
| 5.1519 | 41900 | 0.0012 | - |
| 5.1580 | 41950 | 0.0019 | - |
| 5.1641 | 42000 | 0.0014 | - |
| 5.1703 | 42050 | 0.0005 | - |
| 5.1764 | 42100 | 0.0005 | - |
| 5.1826 | 42150 | 0.0008 | - |
| 5.1887 | 42200 | 0.0009 | - |
| 5.1949 | 42250 | 0.0004 | - |
| 5.2010 | 42300 | 0.0005 | - |
| 5.2072 | 42350 | 0.0002 | - |
| 5.2133 | 42400 | 0.001 | - |
| 5.2195 | 42450 | 0.0011 | - |
| 5.2256 | 42500 | 0.0009 | - |
| 5.2318 | 42550 | 0.0005 | - |
| 5.2379 | 42600 | 0.001 | - |
| 5.2441 | 42650 | 0.0007 | - |
| 5.2502 | 42700 | 0.0002 | - |
| 5.2564 | 42750 | 0.0005 | - |
| 5.2625 | 42800 | 0.0004 | - |
| 5.2687 | 42850 | 0.0005 | - |
| 5.2748 | 42900 | 0.0006 | - |
| 5.2810 | 42950 | 0.0002 | - |
| 5.2871 | 43000 | 0.0 | - |
| 5.2932 | 43050 | 0.0009 | - |
| 5.2994 | 43100 | 0.0003 | - |
| 5.3055 | 43150 | 0.0002 | - |
| 5.3117 | 43200 | 0.0014 | - |
| 5.3178 | 43250 | 0.0009 | - |
| 5.3240 | 43300 | 0.0017 | - |
| 5.3301 | 43350 | 0.001 | - |
| 5.3363 | 43400 | 0.0011 | - |
| 5.3424 | 43450 | 0.0005 | - |
| 5.3486 | 43500 | 0.0012 | - |
| 5.3547 | 43550 | 0.0005 | - |
| 5.3609 | 43600 | 0.0007 | - |
| 5.3670 | 43650 | 0.0015 | - |
| 5.3732 | 43700 | 0.0012 | - |
| 5.3793 | 43750 | 0.0017 | - |
| 5.3855 | 43800 | 0.0003 | - |
| 5.3916 | 43850 | 0.0005 | - |
| 5.3978 | 43900 | 0.0007 | - |
| 5.4039 | 43950 | 0.0011 | - |
| 5.4101 | 44000 | 0.0019 | - |
| 5.4162 | 44050 | 0.001 | - |
| 5.4224 | 44100 | 0.0012 | - |
| 5.4285 | 44150 | 0.0005 | - |
| 5.4346 | 44200 | 0.0004 | - |
| 5.4408 | 44250 | 0.0005 | - |
| 5.4469 | 44300 | 0.0003 | - |
| 5.4531 | 44350 | 0.0007 | - |
| 5.4592 | 44400 | 0.0008 | - |
| 5.4654 | 44450 | 0.0004 | - |
| 5.4715 | 44500 | 0.0005 | - |
| 5.4777 | 44550 | 0.0011 | - |
| 5.4838 | 44600 | 0.0009 | - |
| 5.4900 | 44650 | 0.0007 | - |
| 5.4961 | 44700 | 0.0 | - |
| 5.5023 | 44750 | 0.0007 | - |
| 5.5084 | 44800 | 0.0004 | - |
| 5.5146 | 44850 | 0.0002 | - |
| 5.5207 | 44900 | 0.0 | - |
| 5.5269 | 44950 | 0.0 | - |
| 5.5330 | 45000 | 0.0003 | - |
| 5.5392 | 45050 | 0.0006 | - |
| 5.5453 | 45100 | 0.0003 | - |
| 5.5515 | 45150 | 0.0003 | - |
| 5.5576 | 45200 | 0.0012 | - |
| 5.5638 | 45250 | 0.0006 | - |
| 5.5699 | 45300 | 0.0002 | - |
| 5.5760 | 45350 | 0.0006 | - |
| 5.5822 | 45400 | 0.0012 | - |
| 5.5883 | 45450 | 0.0003 | - |
| 5.5945 | 45500 | 0.0008 | - |
| 5.6006 | 45550 | 0.0009 | - |
| 5.6068 | 45600 | 0.0003 | - |
| 5.6129 | 45650 | 0.0014 | - |
| 5.6191 | 45700 | 0.0019 | - |
| 5.6252 | 45750 | 0.0011 | - |
| 5.6314 | 45800 | 0.0011 | - |
| 5.6375 | 45850 | 0.0002 | - |
| 5.6437 | 45900 | 0.0009 | - |
| 5.6498 | 45950 | 0.0012 | - |
| 5.6560 | 46000 | 0.0016 | - |
| 5.6621 | 46050 | 0.0011 | - |
| 5.6683 | 46100 | 0.0019 | - |
| 5.6744 | 46150 | 0.0005 | - |
| 5.6806 | 46200 | 0.0007 | - |
| 5.6867 | 46250 | 0.0004 | - |
| 5.6929 | 46300 | 0.0004 | - |
| 5.6990 | 46350 | 0.0013 | - |
| 5.7052 | 46400 | 0.0007 | - |
| 5.7113 | 46450 | 0.0007 | - |
| 5.7174 | 46500 | 0.0007 | - |
| 5.7236 | 46550 | 0.0007 | - |
| 5.7297 | 46600 | 0.0001 | - |
| 5.7359 | 46650 | 0.0016 | - |
| 5.7420 | 46700 | 0.0023 | - |
| 5.7482 | 46750 | 0.0014 | - |
| 5.7543 | 46800 | 0.0014 | - |
| 5.7605 | 46850 | 0.0003 | - |
| 5.7666 | 46900 | 0.0013 | - |
| 5.7728 | 46950 | 0.0008 | - |
| 5.7789 | 47000 | 0.0004 | - |
| 5.7851 | 47050 | 0.0025 | - |
| 5.7912 | 47100 | 0.001 | - |
| 5.7974 | 47150 | 0.0011 | - |
| 5.8035 | 47200 | 0.0014 | - |
| 5.8097 | 47250 | 0.0009 | - |
| 5.8158 | 47300 | 0.0002 | - |
| 5.8220 | 47350 | 0.0011 | - |
| 5.8281 | 47400 | 0.0022 | - |
| 5.8343 | 47450 | 0.0009 | - |
| 5.8404 | 47500 | 0.0005 | - |
| 5.8466 | 47550 | 0.0004 | - |
| 5.8527 | 47600 | 0.0009 | - |
| 5.8588 | 47650 | 0.0007 | - |
| 5.8650 | 47700 | 0.0003 | - |
| 5.8711 | 47750 | 0.0006 | - |
| 5.8773 | 47800 | 0.0002 | - |
| 5.8834 | 47850 | 0.0012 | - |
| 5.8896 | 47900 | 0.0009 | - |
| 5.8957 | 47950 | 0.0007 | - |
| 5.9019 | 48000 | 0.0004 | - |
| 5.9080 | 48050 | 0.0015 | - |
| 5.9142 | 48100 | 0.0006 | - |
| 5.9203 | 48150 | 0.0003 | - |
| 5.9265 | 48200 | 0.0 | - |
| 5.9326 | 48250 | 0.0007 | - |
| 5.9388 | 48300 | 0.0005 | - |
| 5.9449 | 48350 | 0.0009 | - |
| 5.9511 | 48400 | 0.0006 | - |
| 5.9572 | 48450 | 0.0004 | - |
| 5.9634 | 48500 | 0.0008 | - |
| 5.9695 | 48550 | 0.0003 | - |
| 5.9757 | 48600 | 0.0011 | - |
| 5.9818 | 48650 | 0.0004 | - |
| 5.9880 | 48700 | 0.0014 | - |
| 5.9941 | 48750 | 0.0007 | - |
| 6.0002 | 48800 | 0.0003 | - |
| 6.0064 | 48850 | 0.001 | - |
| 6.0125 | 48900 | 0.0008 | - |
| 6.0187 | 48950 | 0.0017 | - |
| 6.0248 | 49000 | 0.001 | - |
| 6.0310 | 49050 | 0.0008 | - |
| 6.0371 | 49100 | 0.0008 | - |
| 6.0433 | 49150 | 0.0013 | - |
| 6.0494 | 49200 | 0.0006 | - |
| 6.0556 | 49250 | 0.0005 | - |
| 6.0617 | 49300 | 0.001 | - |
| 6.0679 | 49350 | 0.0005 | - |
| 6.0740 | 49400 | 0.0005 | - |
| 6.0802 | 49450 | 0.0003 | - |
| 6.0863 | 49500 | 0.0003 | - |
| 6.0925 | 49550 | 0.0005 | - |
| 6.0986 | 49600 | 0.0003 | - |
| 6.1048 | 49650 | 0.0008 | - |
| 6.1109 | 49700 | 0.0012 | - |
| 6.1171 | 49750 | 0.0013 | - |
| 6.1232 | 49800 | 0.0008 | - |
| 6.1293 | 49850 | 0.001 | - |
| 6.1355 | 49900 | 0.0007 | - |
| 6.1416 | 49950 | 0.0012 | - |
| 6.1478 | 50000 | 0.0007 | - |
| 6.1539 | 50050 | 0.0005 | - |
| 6.1601 | 50100 | 0.0008 | - |
| 6.1662 | 50150 | 0.0008 | - |
| 6.1724 | 50200 | 0.0011 | - |
| 6.1785 | 50250 | 0.0011 | - |
| 6.1847 | 50300 | 0.0005 | - |
| 6.1908 | 50350 | 0.0011 | - |
| 6.1970 | 50400 | 0.0011 | - |
| 6.2031 | 50450 | 0.0009 | - |
| 6.2093 | 50500 | 0.0012 | - |
| 6.2154 | 50550 | 0.0011 | - |
| 6.2216 | 50600 | 0.0005 | - |
| 6.2277 | 50650 | 0.0008 | - |
| 6.2339 | 50700 | 0.0006 | - |
| 6.2400 | 50750 | 0.0007 | - |
| 6.2462 | 50800 | 0.0006 | - |
| 6.2523 | 50850 | 0.0006 | - |
| 6.2585 | 50900 | 0.0008 | - |
| 6.2646 | 50950 | 0.0013 | - |
| 6.2707 | 51000 | 0.0005 | - |
| 6.2769 | 51050 | 0.0004 | - |
| 6.2830 | 51100 | 0.0009 | - |
| 6.2892 | 51150 | 0.0012 | - |
| 6.2953 | 51200 | 0.0006 | - |
| 6.3015 | 51250 | 0.0002 | - |
| 6.3076 | 51300 | 0.0008 | - |
| 6.3138 | 51350 | 0.0011 | - |
| 6.3199 | 51400 | 0.0003 | - |
| 6.3261 | 51450 | 0.0008 | - |
| 6.3322 | 51500 | 0.0013 | - |
| 6.3384 | 51550 | 0.0001 | - |
| 6.3445 | 51600 | 0.0002 | - |
| 6.3507 | 51650 | 0.0008 | - |
| 6.3568 | 51700 | 0.0006 | - |
| 6.3630 | 51750 | 0.0006 | - |
| 6.3691 | 51800 | 0.0006 | - |
| 6.3753 | 51850 | 0.0009 | - |
| 6.3814 | 51900 | 0.0013 | - |
| 6.3876 | 51950 | 0.001 | - |
| 6.3937 | 52000 | 0.0005 | - |
| 6.3999 | 52050 | 0.0006 | - |
| 6.4060 | 52100 | 0.0006 | - |
| 6.4121 | 52150 | 0.0005 | - |
| 6.4183 | 52200 | 0.0002 | - |
| 6.4244 | 52250 | 0.0004 | - |
| 6.4306 | 52300 | 0.0005 | - |
| 6.4367 | 52350 | 0.0006 | - |
| 6.4429 | 52400 | 0.0003 | - |
| 6.4490 | 52450 | 0.0009 | - |
| 6.4552 | 52500 | 0.0003 | - |
| 6.4613 | 52550 | 0.0009 | - |
| 6.4675 | 52600 | 0.0003 | - |
| 6.4736 | 52650 | 0.0007 | - |
| 6.4798 | 52700 | 0.0011 | - |
| 6.4859 | 52750 | 0.0007 | - |
| 6.4921 | 52800 | 0.0 | - |
| 6.4982 | 52850 | 0.0001 | - |
| 6.5044 | 52900 | 0.0004 | - |
| 6.5105 | 52950 | 0.0011 | - |
| 6.5167 | 53000 | 0.001 | - |
| 6.5228 | 53050 | 0.0008 | - |
| 6.5290 | 53100 | 0.0005 | - |
| 6.5351 | 53150 | 0.0005 | - |
| 6.5413 | 53200 | 0.0007 | - |
| 6.5474 | 53250 | 0.0004 | - |
| 6.5535 | 53300 | 0.0005 | - |
| 6.5597 | 53350 | 0.002 | - |
| 6.5658 | 53400 | 0.0016 | - |
| 6.5720 | 53450 | 0.0025 | - |
| 6.5781 | 53500 | 0.0019 | - |
| 6.5843 | 53550 | 0.0011 | - |
| 6.5904 | 53600 | 0.0014 | - |
| 6.5966 | 53650 | 0.0016 | - |
| 6.6027 | 53700 | 0.0021 | - |
| 6.6089 | 53750 | 0.0031 | - |
| 6.6150 | 53800 | 0.0013 | - |
| 6.6212 | 53850 | 0.0005 | - |
| 6.6273 | 53900 | 0.0009 | - |
| 6.6335 | 53950 | 0.0018 | - |
| 6.6396 | 54000 | 0.0009 | - |
| 6.6458 | 54050 | 0.0006 | - |
| 6.6519 | 54100 | 0.0011 | - |
| 6.6581 | 54150 | 0.0014 | - |
| 6.6642 | 54200 | 0.001 | - |
| 6.6704 | 54250 | 0.0017 | - |
| 6.6765 | 54300 | 0.0007 | - |
| 6.6827 | 54350 | 0.0008 | - |
| 6.6888 | 54400 | 0.0013 | - |
| 6.6949 | 54450 | 0.0008 | - |
| 6.7011 | 54500 | 0.0008 | - |
| 6.7072 | 54550 | 0.0005 | - |
| 6.7134 | 54600 | 0.0003 | - |
| 6.7195 | 54650 | 0.0002 | - |
| 6.7257 | 54700 | 0.0011 | - |
| 6.7318 | 54750 | 0.0013 | - |
| 6.7380 | 54800 | 0.0012 | - |
| 6.7441 | 54850 | 0.0003 | - |
| 6.7503 | 54900 | 0.0001 | - |
| 6.7564 | 54950 | 0.0003 | - |
| 6.7626 | 55000 | 0.0006 | - |
| 6.7687 | 55050 | 0.0003 | - |
| 6.7749 | 55100 | 0.0005 | - |
| 6.7810 | 55150 | 0.0003 | - |
| 6.7872 | 55200 | 0.0006 | - |
| 6.7933 | 55250 | 0.0003 | - |
| 6.7995 | 55300 | 0.001 | - |
| 6.8056 | 55350 | 0.0005 | - |
| 6.8118 | 55400 | 0.0006 | - |
| 6.8179 | 55450 | 0.0006 | - |
| 6.8241 | 55500 | 0.0005 | - |
| 6.8302 | 55550 | 0.0004 | - |
| 6.8363 | 55600 | 0.0002 | - |
| 6.8425 | 55650 | 0.0001 | - |
| 6.8486 | 55700 | 0.0002 | - |
| 6.8548 | 55750 | 0.0005 | - |
| 6.8609 | 55800 | 0.0007 | - |
| 6.8671 | 55850 | 0.0002 | - |
| 6.8732 | 55900 | 0.0007 | - |
| 6.8794 | 55950 | 0.0007 | - |
| 6.8855 | 56000 | 0.0003 | - |
| 6.8917 | 56050 | 0.0007 | - |
| 6.8978 | 56100 | 0.0 | - |
| 6.9040 | 56150 | 0.0005 | - |
| 6.9101 | 56200 | 0.0005 | - |
| 6.9163 | 56250 | 0.0002 | - |
| 6.9224 | 56300 | 0.0005 | - |
| 6.9286 | 56350 | 0.0002 | - |
| 6.9347 | 56400 | 0.0002 | - |
| 6.9409 | 56450 | 0.0003 | - |
| 6.9470 | 56500 | 0.0002 | - |
| 6.9532 | 56550 | 0.0 | - |
| 6.9593 | 56600 | 0.0002 | - |
| 6.9654 | 56650 | 0.0002 | - |
| 6.9716 | 56700 | 0.0006 | - |
| 6.9777 | 56750 | 0.0002 | - |
| 6.9839 | 56800 | 0.0 | - |
| 6.9900 | 56850 | 0.0003 | - |
| 6.9962 | 56900 | 0.0005 | - |
| 7.0023 | 56950 | 0.0003 | - |
| 7.0085 | 57000 | 0.0003 | - |
| 7.0146 | 57050 | 0.0002 | - |
| 7.0208 | 57100 | 0.0005 | - |
| 7.0269 | 57150 | 0.0006 | - |
| 7.0331 | 57200 | 0.0003 | - |
| 7.0392 | 57250 | 0.0002 | - |
| 7.0454 | 57300 | 0.0 | - |
| 7.0515 | 57350 | 0.0 | - |
| 7.0577 | 57400 | 0.0005 | - |
| 7.0638 | 57450 | 0.0003 | - |
| 7.0700 | 57500 | 0.0 | - |
| 7.0761 | 57550 | 0.0005 | - |
| 7.0823 | 57600 | 0.0007 | - |
| 7.0884 | 57650 | 0.0009 | - |
| 7.0946 | 57700 | 0.0008 | - |
| 7.1007 | 57750 | 0.0014 | - |
| 7.1068 | 57800 | 0.0005 | - |
| 7.1130 | 57850 | 0.0005 | - |
| 7.1191 | 57900 | 0.0002 | - |
| 7.1253 | 57950 | 0.0 | - |
| 7.1314 | 58000 | 0.0004 | - |
| 7.1376 | 58050 | 0.0005 | - |
| 7.1437 | 58100 | 0.001 | - |
| 7.1499 | 58150 | 0.0006 | - |
| 7.1560 | 58200 | 0.001 | - |
| 7.1622 | 58250 | 0.0002 | - |
| 7.1683 | 58300 | 0.0015 | - |
| 7.1745 | 58350 | 0.0005 | - |
| 7.1806 | 58400 | 0.0005 | - |
| 7.1868 | 58450 | 0.0002 | - |
| 7.1929 | 58500 | 0.001 | - |
| 7.1991 | 58550 | 0.0007 | - |
| 7.2052 | 58600 | 0.0012 | - |
| 7.2114 | 58650 | 0.0005 | - |
| 7.2175 | 58700 | 0.0004 | - |
| 7.2237 | 58750 | 0.0006 | - |
| 7.2298 | 58800 | 0.0001 | - |
| 7.2360 | 58850 | 0.0003 | - |
| 7.2421 | 58900 | 0.0013 | - |
| 7.2482 | 58950 | 0.0017 | - |
| 7.2544 | 59000 | 0.0017 | - |
| 7.2605 | 59050 | 0.0006 | - |
| 7.2667 | 59100 | 0.0016 | - |
| 7.2728 | 59150 | 0.0049 | - |
| 7.2790 | 59200 | 0.0026 | - |
| 7.2851 | 59250 | 0.0013 | - |
| 7.2913 | 59300 | 0.001 | - |
| 7.2974 | 59350 | 0.0008 | - |
| 7.3036 | 59400 | 0.0006 | - |
| 7.3097 | 59450 | 0.0007 | - |
| 7.3159 | 59500 | 0.001 | - |
| 7.3220 | 59550 | 0.0014 | - |
| 7.3282 | 59600 | 0.0015 | - |
| 7.3343 | 59650 | 0.002 | - |
| 7.3405 | 59700 | 0.0011 | - |
| 7.3466 | 59750 | 0.0006 | - |
| 7.3528 | 59800 | 0.0013 | - |
| 7.3589 | 59850 | 0.0006 | - |
| 7.3651 | 59900 | 0.001 | - |
| 7.3712 | 59950 | 0.0013 | - |
| 7.3774 | 60000 | 0.0012 | - |
| 7.3835 | 60050 | 0.0008 | - |
| 7.3896 | 60100 | 0.0005 | - |
| 7.3958 | 60150 | 0.0003 | - |
| 7.4019 | 60200 | 0.0013 | - |
| 7.4081 | 60250 | 0.0014 | - |
| 7.4142 | 60300 | 0.0002 | - |
| 7.4204 | 60350 | 0.0004 | - |
| 7.4265 | 60400 | 0.0004 | - |
| 7.4327 | 60450 | 0.0004 | - |
| 7.4388 | 60500 | 0.0001 | - |
| 7.4450 | 60550 | 0.0002 | - |
| 7.4511 | 60600 | 0.0003 | - |
| 7.4573 | 60650 | 0.0008 | - |
| 7.4634 | 60700 | 0.0004 | - |
| 7.4696 | 60750 | 0.0003 | - |
| 7.4757 | 60800 | 0.001 | - |
| 7.4819 | 60850 | 0.0002 | - |
| 7.4880 | 60900 | 0.0 | - |
| 7.4942 | 60950 | 0.0004 | - |
| 7.5003 | 61000 | 0.0002 | - |
| 7.5065 | 61050 | 0.0003 | - |
| 7.5126 | 61100 | 0.0005 | - |
| 7.5188 | 61150 | 0.0007 | - |
| 7.5249 | 61200 | 0.0023 | - |
| 7.5310 | 61250 | 0.001 | - |
| 7.5372 | 61300 | 0.0013 | - |
| 7.5433 | 61350 | 0.0012 | - |
| 7.5495 | 61400 | 0.0007 | - |
| 7.5556 | 61450 | 0.0005 | - |
| 7.5618 | 61500 | 0.0005 | - |
| 7.5679 | 61550 | 0.0004 | - |
| 7.5741 | 61600 | 0.0002 | - |
| 7.5802 | 61650 | 0.0004 | - |
| 7.5864 | 61700 | 0.0008 | - |
| 7.5925 | 61750 | 0.0007 | - |
| 7.5987 | 61800 | 0.0007 | - |
| 7.6048 | 61850 | 0.0009 | - |
| 7.6110 | 61900 | 0.0003 | - |
| 7.6171 | 61950 | 0.0002 | - |
| 7.6233 | 62000 | 0.0001 | - |
| 7.6294 | 62050 | 0.0003 | - |
| 7.6356 | 62100 | 0.0009 | - |
| 7.6417 | 62150 | 0.001 | - |
| 7.6479 | 62200 | 0.0003 | - |
| 7.6540 | 62250 | 0.0005 | - |
| 7.6602 | 62300 | 0.0007 | - |
| 7.6663 | 62350 | 0.0005 | - |
| 7.6724 | 62400 | 0.0007 | - |
| 7.6786 | 62450 | 0.0 | - |
| 7.6847 | 62500 | 0.0002 | - |
| 7.6909 | 62550 | 0.0 | - |
| 7.6970 | 62600 | 0.001 | - |
| 7.7032 | 62650 | 0.0011 | - |
| 7.7093 | 62700 | 0.0007 | - |
| 7.7155 | 62750 | 0.0003 | - |
| 7.7216 | 62800 | 0.0003 | - |
| 7.7278 | 62850 | 0.0009 | - |
| 7.7339 | 62900 | 0.0009 | - |
| 7.7401 | 62950 | 0.0012 | - |
| 7.7462 | 63000 | 0.0057 | - |
| 7.7524 | 63050 | 0.0039 | - |
| 7.7585 | 63100 | 0.0024 | - |
| 7.7647 | 63150 | 0.0021 | - |
| 7.7708 | 63200 | 0.0014 | - |
| 7.7770 | 63250 | 0.0014 | - |
| 7.7831 | 63300 | 0.0012 | - |
| 7.7893 | 63350 | 0.0011 | - |
| 7.7954 | 63400 | 0.0009 | - |
| 7.8015 | 63450 | 0.0009 | - |
| 7.8077 | 63500 | 0.0018 | - |
| 7.8138 | 63550 | 0.0005 | - |
| 7.8200 | 63600 | 0.0004 | - |
| 7.8261 | 63650 | 0.0002 | - |
| 7.8323 | 63700 | 0.0004 | - |
| 7.8384 | 63750 | 0.0012 | - |
| 7.8446 | 63800 | 0.0019 | - |
| 7.8507 | 63850 | 0.0004 | - |
| 7.8569 | 63900 | 0.0003 | - |
| 7.8630 | 63950 | 0.0003 | - |
| 7.8692 | 64000 | 0.0003 | - |
| 7.8753 | 64050 | 0.0003 | - |
| 7.8815 | 64100 | 0.0 | - |
| 7.8876 | 64150 | 0.0 | - |
| 7.8938 | 64200 | 0.0 | - |
| 7.8999 | 64250 | 0.0003 | - |
| 7.9061 | 64300 | 0.0 | - |
| 7.9122 | 64350 | 0.0001 | - |
| 7.9184 | 64400 | 0.0003 | - |
| 7.9245 | 64450 | 0.0004 | - |
| 7.9307 | 64500 | 0.0001 | - |
| 7.9368 | 64550 | 0.0003 | - |
| 7.9429 | 64600 | 0.0002 | - |
| 7.9491 | 64650 | 0.0005 | - |
| 7.9552 | 64700 | 0.0002 | - |
| 7.9614 | 64750 | 0.0 | - |
| 7.9675 | 64800 | 0.0 | - |
| 7.9737 | 64850 | 0.0002 | - |
| 7.9798 | 64900 | 0.0004 | - |
| 7.9860 | 64950 | 0.0 | - |
| 7.9921 | 65000 | 0.0003 | - |
| 7.9983 | 65050 | 0.0 | - |
| 8.0044 | 65100 | 0.0001 | - |
| 8.0106 | 65150 | 0.0 | - |
| 8.0167 | 65200 | 0.0002 | - |
| 8.0229 | 65250 | 0.0003 | - |
| 8.0290 | 65300 | 0.0001 | - |
| 8.0352 | 65350 | 0.0005 | - |
| 8.0413 | 65400 | 0.0 | - |
| 8.0475 | 65450 | 0.0001 | - |
| 8.0536 | 65500 | 0.0004 | - |
| 8.0598 | 65550 | 0.0003 | - |
| 8.0659 | 65600 | 0.0002 | - |
| 8.0721 | 65650 | 0.0006 | - |
| 8.0782 | 65700 | 0.0 | - |
| 8.0843 | 65750 | 0.0 | - |
| 8.0905 | 65800 | 0.0004 | - |
| 8.0966 | 65850 | 0.0003 | - |
| 8.1028 | 65900 | 0.0007 | - |
| 8.1089 | 65950 | 0.0003 | - |
| 8.1151 | 66000 | 0.0008 | - |
| 8.1212 | 66050 | 0.0009 | - |
| 8.1274 | 66100 | 0.0002 | - |
| 8.1335 | 66150 | 0.0003 | - |
| 8.1397 | 66200 | 0.0002 | - |
| 8.1458 | 66250 | 0.0001 | - |
| 8.1520 | 66300 | 0.0003 | - |
| 8.1581 | 66350 | 0.0002 | - |
| 8.1643 | 66400 | 0.0003 | - |
| 8.1704 | 66450 | 0.0 | - |
| 8.1766 | 66500 | 0.0003 | - |
| 8.1827 | 66550 | 0.0005 | - |
| 8.1889 | 66600 | 0.0007 | - |
| 8.1950 | 66650 | 0.0002 | - |
| 8.2012 | 66700 | 0.0007 | - |
| 8.2073 | 66750 | 0.0001 | - |
| 8.2135 | 66800 | 0.0003 | - |
| 8.2196 | 66850 | 0.0003 | - |
| 8.2257 | 66900 | 0.0 | - |
| 8.2319 | 66950 | 0.0005 | - |
| 8.2380 | 67000 | 0.0003 | - |
| 8.2442 | 67050 | 0.0003 | - |
| 8.2503 | 67100 | 0.0 | - |
| 8.2565 | 67150 | 0.0002 | - |
| 8.2626 | 67200 | 0.0 | - |
| 8.2688 | 67250 | 0.0 | - |
| 8.2749 | 67300 | 0.0001 | - |
| 8.2811 | 67350 | 0.0003 | - |
| 8.2872 | 67400 | 0.0006 | - |
| 8.2934 | 67450 | 0.0005 | - |
| 8.2995 | 67500 | 0.0001 | - |
| 8.3057 | 67550 | 0.0003 | - |
| 8.3118 | 67600 | 0.0002 | - |
| 8.3180 | 67650 | 0.0 | - |
| 8.3241 | 67700 | 0.0007 | - |
| 8.3303 | 67750 | 0.0 | - |
| 8.3364 | 67800 | 0.0002 | - |
| 8.3426 | 67850 | 0.0003 | - |
| 8.3487 | 67900 | 0.0001 | - |
| 8.3549 | 67950 | 0.0005 | - |
| 8.3610 | 68000 | 0.0001 | - |
| 8.3671 | 68050 | 0.0001 | - |
| 8.3733 | 68100 | 0.0007 | - |
| 8.3794 | 68150 | 0.0003 | - |
| 8.3856 | 68200 | 0.0002 | - |
| 8.3917 | 68250 | 0.0002 | - |
| 8.3979 | 68300 | 0.0001 | - |
| 8.4040 | 68350 | 0.0006 | - |
| 8.4102 | 68400 | 0.0003 | - |
| 8.4163 | 68450 | 0.0007 | - |
| 8.4225 | 68500 | 0.0007 | - |
| 8.4286 | 68550 | 0.0006 | - |
| 8.4348 | 68600 | 0.0005 | - |
| 8.4409 | 68650 | 0.0004 | - |
| 8.4471 | 68700 | 0.0005 | - |
| 8.4532 | 68750 | 0.0003 | - |
| 8.4594 | 68800 | 0.0002 | - |
| 8.4655 | 68850 | 0.0 | - |
| 8.4717 | 68900 | 0.0005 | - |
| 8.4778 | 68950 | 0.0005 | - |
| 8.4840 | 69000 | 0.0002 | - |
| 8.4901 | 69050 | 0.0004 | - |
| 8.4962 | 69100 | 0.0002 | - |
| 8.5024 | 69150 | 0.0004 | - |
| 8.5085 | 69200 | 0.0003 | - |
| 8.5147 | 69250 | 0.0006 | - |
| 8.5208 | 69300 | 0.0003 | - |
| 8.5270 | 69350 | 0.0004 | - |
| 8.5331 | 69400 | 0.0007 | - |
| 8.5393 | 69450 | 0.0004 | - |
| 8.5454 | 69500 | 0.0 | - |
| 8.5516 | 69550 | 0.0 | - |
| 8.5577 | 69600 | 0.0003 | - |
| 8.5639 | 69650 | 0.0009 | - |
| 8.5700 | 69700 | 0.0007 | - |
| 8.5762 | 69750 | 0.0013 | - |
| 8.5823 | 69800 | 0.0006 | - |
| 8.5885 | 69850 | 0.001 | - |
| 8.5946 | 69900 | 0.0004 | - |
| 8.6008 | 69950 | 0.0005 | - |
| 8.6069 | 70000 | 0.0006 | - |
| 8.6131 | 70050 | 0.0003 | - |
| 8.6192 | 70100 | 0.0004 | - |
| 8.6254 | 70150 | 0.0003 | - |
| 8.6315 | 70200 | 0.0012 | - |
| 8.6376 | 70250 | 0.0007 | - |
| 8.6438 | 70300 | 0.0011 | - |
| 8.6499 | 70350 | 0.0004 | - |
| 8.6561 | 70400 | 0.0001 | - |
| 8.6622 | 70450 | 0.0 | - |
| 8.6684 | 70500 | 0.0002 | - |
| 8.6745 | 70550 | 0.0003 | - |
| 8.6807 | 70600 | 0.0001 | - |
| 8.6868 | 70650 | 0.0001 | - |
| 8.6930 | 70700 | 0.0 | - |
| 8.6991 | 70750 | 0.0 | - |
| 8.7053 | 70800 | 0.0 | - |
| 8.7114 | 70850 | 0.0002 | - |
| 8.7176 | 70900 |
0.0004 | - | | 8.7237 | 70950 | 0.0 | - | | 8.7299 | 71000 | 0.0001 | - | | 8.7360 | 71050 | 0.0 | - | | 8.7422 | 71100 | 0.0 | - | | 8.7483 | 71150 | 0.0 | - | | 8.7545 | 71200 | 0.0 | - | | 8.7606 | 71250 | 0.0 | - | | 8.7668 | 71300 | 0.0003 | - | | 8.7729 | 71350 | 0.0004 | - | | 8.7790 | 71400 | 0.0006 | - | | 8.7852 | 71450 | 0.0005 | - | | 8.7913 | 71500 | 0.001 | - | | 8.7975 | 71550 | 0.0016 | - | | 8.8036 | 71600 | 0.0012 | - | | 8.8098 | 71650 | 0.0008 | - | | 8.8159 | 71700 | 0.001 | - | | 8.8221 | 71750 | 0.0011 | - | | 8.8282 | 71800 | 0.001 | - | | 8.8344 | 71850 | 0.0008 | - | | 8.8405 | 71900 | 0.0013 | - | | 8.8467 | 71950 | 0.0008 | - | | 8.8528 | 72000 | 0.0005 | - | | 8.8590 | 72050 | 0.0005 | - | | 8.8651 | 72100 | 0.0009 | - | | 8.8713 | 72150 | 0.0003 | - | | 8.8774 | 72200 | 0.0007 | - | | 8.8836 | 72250 | 0.0005 | - | | 8.8897 | 72300 | 0.0006 | - | | 8.8959 | 72350 | 0.0005 | - | | 8.9020 | 72400 | 0.0005 | - | | 8.9082 | 72450 | 0.0005 | - | | 8.9143 | 72500 | 0.0011 | - | | 8.9204 | 72550 | 0.0006 | - | | 8.9266 | 72600 | 0.0001 | - | | 8.9327 | 72650 | 0.0003 | - | | 8.9389 | 72700 | 0.0005 | - | | 8.9450 | 72750 | 0.0003 | - | | 8.9512 | 72800 | 0.0002 | - | | 8.9573 | 72850 | 0.0006 | - | | 8.9635 | 72900 | 0.0007 | - | | 8.9696 | 72950 | 0.0002 | - | | 8.9758 | 73000 | 0.0003 | - | | 8.9819 | 73050 | 0.0003 | - | | 8.9881 | 73100 | 0.0002 | - | | 8.9942 | 73150 | 0.0003 | - | | 9.0004 | 73200 | 0.0005 | - | | 9.0065 | 73250 | 0.0002 | - | | 9.0127 | 73300 | 0.0005 | - | | 9.0188 | 73350 | 0.0003 | - | | 9.0250 | 73400 | 0.0004 | - | | 9.0311 | 73450 | 0.0002 | - | | 9.0373 | 73500 | 0.0002 | - | | 9.0434 | 73550 | 0.0006 | - | | 9.0496 | 73600 | 0.0009 | - | | 9.0557 | 73650 | 0.0004 | - | | 9.0618 | 73700 | 0.001 | - | | 9.0680 | 73750 | 0.0007 | - | | 9.0741 | 73800 | 0.001 | - | | 9.0803 | 73850 | 0.0011 | - | | 9.0864 | 73900 | 0.0009 | - | | 9.0926 | 73950 | 0.0005 | - | | 9.0987 | 74000 | 0.0012 | - | | 9.1049 | 74050 | 0.0014 
| - | | 9.1110 | 74100 | 0.001 | - | | 9.1172 | 74150 | 0.0003 | - | | 9.1233 | 74200 | 0.0006 | - | | 9.1295 | 74250 | 0.0003 | - | | 9.1356 | 74300 | 0.0011 | - | | 9.1418 | 74350 | 0.0007 | - | | 9.1479 | 74400 | 0.0005 | - | | 9.1541 | 74450 | 0.0008 | - | | 9.1602 | 74500 | 0.0015 | - | | 9.1664 | 74550 | 0.0009 | - | | 9.1725 | 74600 | 0.0009 | - | | 9.1787 | 74650 | 0.0008 | - | | 9.1848 | 74700 | 0.0009 | - | | 9.1910 | 74750 | 0.0012 | - | | 9.1971 | 74800 | 0.0005 | - | | 9.2032 | 74850 | 0.0006 | - | | 9.2094 | 74900 | 0.0006 | - | | 9.2155 | 74950 | 0.0007 | - | | 9.2217 | 75000 | 0.0009 | - | | 9.2278 | 75050 | 0.0012 | - | | 9.2340 | 75100 | 0.0008 | - | | 9.2401 | 75150 | 0.002 | - | | 9.2463 | 75200 | 0.001 | - | | 9.2524 | 75250 | 0.0013 | - | | 9.2586 | 75300 | 0.001 | - | | 9.2647 | 75350 | 0.001 | - | | 9.2709 | 75400 | 0.0004 | - | | 9.2770 | 75450 | 0.0007 | - | | 9.2832 | 75500 | 0.0012 | - | | 9.2893 | 75550 | 0.0017 | - | | 9.2955 | 75600 | 0.0016 | - | | 9.3016 | 75650 | 0.0012 | - | | 9.3078 | 75700 | 0.0008 | - | | 9.3139 | 75750 | 0.0016 | - | | 9.3201 | 75800 | 0.0021 | - | | 9.3262 | 75850 | 0.0014 | - | | 9.3323 | 75900 | 0.0017 | - | | 9.3385 | 75950 | 0.0013 | - | | 9.3446 | 76000 | 0.0012 | - | | 9.3508 | 76050 | 0.0011 | - | | 9.3569 | 76100 | 0.0005 | - | | 9.3631 | 76150 | 0.0003 | - | | 9.3692 | 76200 | 0.0005 | - | | 9.3754 | 76250 | 0.0006 | - | | 9.3815 | 76300 | 0.0006 | - | | 9.3877 | 76350 | 0.0003 | - | | 9.3938 | 76400 | 0.0003 | - | | 9.4000 | 76450 | 0.0005 | - | | 9.4061 | 76500 | 0.0005 | - | | 9.4123 | 76550 | 0.0 | - | | 9.4184 | 76600 | 0.0006 | - | | 9.4246 | 76650 | 0.0003 | - | | 9.4307 | 76700 | 0.0 | - | | 9.4369 | 76750 | 0.0 | - | | 9.4430 | 76800 | 0.0003 | - | | 9.4492 | 76850 | 0.0003 | - | | 9.4553 | 76900 | 0.0002 | - | | 9.4615 | 76950 | 0.0003 | - | | 9.4676 | 77000 | 0.0003 | - | | 9.4737 | 77050 | 0.0008 | - | | 9.4799 | 77100 | 0.0004 | - | | 9.4860 | 77150 | 0.0019 | - | | 9.4922 | 77200 | 
0.0005 | - | | 9.4983 | 77250 | 0.001 | - | | 9.5045 | 77300 | 0.0002 | - | | 9.5106 | 77350 | 0.0 | - | | 9.5168 | 77400 | 0.0006 | - | | 9.5229 | 77450 | 0.0008 | - | | 9.5291 | 77500 | 0.0 | - | | 9.5352 | 77550 | 0.0001 | - | | 9.5414 | 77600 | 0.0001 | - | | 9.5475 | 77650 | 0.0 | - | | 9.5537 | 77700 | 0.0 | - | | 9.5598 | 77750 | 0.0 | - | | 9.5660 | 77800 | 0.0 | - | | 9.5721 | 77850 | 0.0 | - | | 9.5783 | 77900 | 0.0 | - | | 9.5844 | 77950 | 0.0 | - | | 9.5906 | 78000 | 0.0001 | - | | 9.5967 | 78050 | 0.0 | - | | 9.6029 | 78100 | 0.0002 | - | | 9.6090 | 78150 | 0.0 | - | | 9.6151 | 78200 | 0.0002 | - | | 9.6213 | 78250 | 0.0 | - | | 9.6274 | 78300 | 0.0 | - | | 9.6336 | 78350 | 0.0 | - | | 9.6397 | 78400 | 0.0 | - | | 9.6459 | 78450 | 0.0 | - | | 9.6520 | 78500 | 0.0 | - | | 9.6582 | 78550 | 0.0003 | - | | 9.6643 | 78600 | 0.0 | - | | 9.6705 | 78650 | 0.0 | - | | 9.6766 | 78700 | 0.0002 | - | | 9.6828 | 78750 | 0.0 | - | | 9.6889 | 78800 | 0.0002 | - | | 9.6951 | 78850 | 0.0 | - | | 9.7012 | 78900 | 0.0002 | - | | 9.7074 | 78950 | 0.0 | - | | 9.7135 | 79000 | 0.0002 | - | | 9.7197 | 79050 | 0.0005 | - | | 9.7258 | 79100 | 0.0005 | - | | 9.7320 | 79150 | 0.0007 | - | | 9.7381 | 79200 | 0.0006 | - | | 9.7443 | 79250 | 0.0003 | - | | 9.7504 | 79300 | 0.0 | - | | 9.7565 | 79350 | 0.0008 | - | | 9.7627 | 79400 | 0.0002 | - | | 9.7688 | 79450 | 0.0 | - | | 9.7750 | 79500 | 0.0 | - | | 9.7811 | 79550 | 0.0 | - | | 9.7873 | 79600 | 0.0 | - | | 9.7934 | 79650 | 0.0011 | - | | 9.7996 | 79700 | 0.0004 | - | | 9.8057 | 79750 | 0.0014 | - | | 9.8119 | 79800 | 0.0001 | - | | 9.8180 | 79850 | 0.0001 | - | | 9.8242 | 79900 | 0.0 | - | | 9.8303 | 79950 | 0.0001 | - | | 9.8365 | 80000 | 0.0002 | - | | 9.8426 | 80050 | 0.0003 | - | | 9.8488 | 80100 | 0.0002 | - | | 9.8549 | 80150 | 0.0 | - | | 9.8611 | 80200 | 0.0 | - | | 9.8672 | 80250 | 0.0003 | - | | 9.8734 | 80300 | 0.0014 | - | | 9.8795 | 80350 | 0.0009 | - | | 9.8857 | 80400 | 0.0011 | - | | 9.8918 | 80450 | 0.0004 | - 
| | 9.8979 | 80500 | 0.0006 | - | | 9.9041 | 80550 | 0.0012 | - | | 9.9102 | 80600 | 0.0003 | - | | 9.9164 | 80650 | 0.0002 | - | | 9.9225 | 80700 | 0.0005 | - | | 9.9287 | 80750 | 0.0 | - | | 9.9348 | 80800 | 0.0001 | - | | 9.9410 | 80850 | 0.0 | - | | 9.9471 | 80900 | 0.0 | - | | 9.9533 | 80950 | 0.0002 | - | | 9.9594 | 81000 | 0.0 | - | | 9.9656 | 81050 | 0.0 | - | | 9.9717 | 81100 | 0.0 | - | | 9.9779 | 81150 | 0.0 | - | | 9.9840 | 81200 | 0.0 | - | | 9.9902 | 81250 | 0.0001 | - | | 9.9963 | 81300 | 0.0003 | - | | 10.0025 | 81350 | 0.0 | - | | 10.0086 | 81400 | 0.0 | - | | 10.0148 | 81450 | 0.0 | - | | 10.0209 | 81500 | 0.0 | - | | 10.0271 | 81550 | 0.0002 | - | | 10.0332 | 81600 | 0.0004 | - | | 10.0393 | 81650 | 0.0003 | - | | 10.0455 | 81700 | 0.0001 | - | | 10.0516 | 81750 | 0.0002 | - | | 10.0578 | 81800 | 0.0003 | - | | 10.0639 | 81850 | 0.0002 | - | | 10.0701 | 81900 | 0.0003 | - | | 10.0762 | 81950 | 0.0 | - | | 10.0824 | 82000 | 0.0005 | - | | 10.0885 | 82050 | 0.0003 | - | | 10.0947 | 82100 | 0.0005 | - | | 10.1008 | 82150 | 0.0003 | - | | 10.1070 | 82200 | 0.0003 | - | | 10.1131 | 82250 | 0.0004 | - | | 10.1193 | 82300 | 0.0002 | - | | 10.1254 | 82350 | 0.0002 | - | | 10.1316 | 82400 | 0.0002 | - | | 10.1377 | 82450 | 0.0 | - | | 10.1439 | 82500 | 0.0 | - | | 10.1500 | 82550 | 0.0 | - | | 10.1562 | 82600 | 0.0002 | - | | 10.1623 | 82650 | 0.0003 | - | | 10.1684 | 82700 | 0.0002 | - | | 10.1746 | 82750 | 0.0 | - | | 10.1807 | 82800 | 0.0 | - | | 10.1869 | 82850 | 0.0 | - | | 10.1930 | 82900 | 0.0 | - | | 10.1992 | 82950 | 0.0001 | - | | 10.2053 | 83000 | 0.0 | - | | 10.2115 | 83050 | 0.0 | - | | 10.2176 | 83100 | 0.0 | - | | 10.2238 | 83150 | 0.0 | - | | 10.2299 | 83200 | 0.0 | - | | 10.2361 | 83250 | 0.0 | - | | 10.2422 | 83300 | 0.0 | - | | 10.2484 | 83350 | 0.0002 | - | | 10.2545 | 83400 | 0.0 | - | | 10.2607 | 83450 | 0.0 | - | | 10.2668 | 83500 | 0.0 | - | | 10.2730 | 83550 | 0.0 | - | | 10.2791 | 83600 | 0.0003 | - | | 10.2853 | 83650 | 0.0002 | 
- | | 10.2914 | 83700 | 0.0006 | - | | 10.2976 | 83750 | 0.0 | - | | 10.3037 | 83800 | 0.0003 | - | | 10.3098 | 83850 | 0.0003 | - | | 10.3160 | 83900 | 0.0003 | - | | 10.3221 | 83950 | 0.0017 | - | | 10.3283 | 84000 | 0.0005 | - | | 10.3344 | 84050 | 0.0013 | - | | 10.3406 | 84100 | 0.0006 | - | | 10.3467 | 84150 | 0.0005 | - | | 10.3529 | 84200 | 0.0005 | - | | 10.3590 | 84250 | 0.0009 | - | | 10.3652 | 84300 | 0.0006 | - | | 10.3713 | 84350 | 0.0 | - | | 10.3775 | 84400 | 0.0 | - | | 10.3836 | 84450 | 0.0002 | - | | 10.3898 | 84500 | 0.0002 | - | | 10.3959 | 84550 | 0.0001 | - | | 10.4021 | 84600 | 0.0003 | - | | 10.4082 | 84650 | 0.0006 | - | | 10.4144 | 84700 | 0.0 | - | | 10.4205 | 84750 | 0.0 | - | | 10.4267 | 84800 | 0.0 | - | | 10.4328 | 84850 | 0.0 | - | | 10.4390 | 84900 | 0.0 | - | | 10.4451 | 84950 | 0.0002 | - | | 10.4512 | 85000 | 0.0003 | - | | 10.4574 | 85050 | 0.0003 | - | | 10.4635 | 85100 | 0.0 | - | | 10.4697 | 85150 | 0.0 | - | | 10.4758 | 85200 | 0.0 | - | | 10.4820 | 85250 | 0.0 | - | | 10.4881 | 85300 | 0.0 | - | | 10.4943 | 85350 | 0.0001 | - | | 10.5004 | 85400 | 0.0 | - | | 10.5066 | 85450 | 0.0 | - | | 10.5127 | 85500 | 0.0 | - | | 10.5189 | 85550 | 0.0005 | - | | 10.5250 | 85600 | 0.0 | - | | 10.5312 | 85650 | 0.0002 | - | | 10.5373 | 85700 | 0.0 | - | | 10.5435 | 85750 | 0.0 | - | | 10.5496 | 85800 | 0.0001 | - | | 10.5558 | 85850 | 0.0007 | - | | 10.5619 | 85900 | 0.0001 | - | | 10.5681 | 85950 | 0.0011 | - | | 10.5742 | 86000 | 0.0003 | - | | 10.5804 | 86050 | 0.0007 | - | | 10.5865 | 86100 | 0.0009 | - | | 10.5926 | 86150 | 0.0 | - | | 10.5988 | 86200 | 0.0003 | - | | 10.6049 | 86250 | 0.0002 | - | | 10.6111 | 86300 | 0.0002 | - | | 10.6172 | 86350 | 0.0003 | - | | 10.6234 | 86400 | 0.0003 | - | | 10.6295 | 86450 | 0.0004 | - | | 10.6357 | 86500 | 0.0004 | - | | 10.6418 | 86550 | 0.0006 | - | | 10.6480 | 86600 | 0.0002 | - | | 10.6541 | 86650 | 0.0006 | - | | 10.6603 | 86700 | 0.0003 | - | | 10.6664 | 86750 | 0.0003 | - | | 10.6726 
| 86800 | 0.0002 | - | | 10.6787 | 86850 | 0.0008 | - | | 10.6849 | 86900 | 0.0006 | - | | 10.6910 | 86950 | 0.0004 | - | | 10.6972 | 87000 | 0.0006 | - | | 10.7033 | 87050 | 0.0002 | - | | 10.7095 | 87100 | 0.0007 | - | | 10.7156 | 87150 | 0.0003 | - | | 10.7218 | 87200 | 0.0002 | - | | 10.7279 | 87250 | 0.0001 | - | | 10.7340 | 87300 | 0.0005 | - | | 10.7402 | 87350 | 0.0003 | - | | 10.7463 | 87400 | 0.0002 | - | | 10.7525 | 87450 | 0.0002 | - | | 10.7586 | 87500 | 0.0002 | - | | 10.7648 | 87550 | 0.0002 | - | | 10.7709 | 87600 | 0.0003 | - | | 10.7771 | 87650 | 0.0002 | - | | 10.7832 | 87700 | 0.0002 | - | | 10.7894 | 87750 | 0.0005 | - | | 10.7955 | 87800 | 0.0003 | - | | 10.8017 | 87850 | 0.0001 | - | | 10.8078 | 87900 | 0.0005 | - | | 10.8140 | 87950 | 0.0002 | - | | 10.8201 | 88000 | 0.0006 | - | | 10.8263 | 88050 | 0.0005 | - | | 10.8324 | 88100 | 0.0005 | - | | 10.8386 | 88150 | 0.0003 | - | | 10.8447 | 88200 | 0.0007 | - | | 10.8509 | 88250 | 0.0004 | - | | 10.8570 | 88300 | 0.0006 | - | | 10.8632 | 88350 | 0.0009 | - | | 10.8693 | 88400 | 0.0002 | - | | 10.8754 | 88450 | 0.0002 | - | | 10.8816 | 88500 | 0.0004 | - | | 10.8877 | 88550 | 0.0003 | - | | 10.8939 | 88600 | 0.0003 | - | | 10.9000 | 88650 | 0.0003 | - | | 10.9062 | 88700 | 0.0003 | - | | 10.9123 | 88750 | 0.0005 | - | | 10.9185 | 88800 | 0.0003 | - | | 10.9246 | 88850 | 0.0006 | - | | 10.9308 | 88900 | 0.0003 | - | | 10.9369 | 88950 | 0.0002 | - | | 10.9431 | 89000 | 0.0004 | - | | 10.9492 | 89050 | 0.0005 | - | | 10.9554 | 89100 | 0.0006 | - | | 10.9615 | 89150 | 0.0001 | - | | 10.9677 | 89200 | 0.0003 | - | | 10.9738 | 89250 | 0.0002 | - | | 10.9800 | 89300 | 0.0009 | - | | 10.9861 | 89350 | 0.0006 | - | | 10.9923 | 89400 | 0.0005 | - | | 10.9984 | 89450 | 0.0002 | - | | 11.0045 | 89500 | 0.0001 | - | | 11.0107 | 89550 | 0.0002 | - | | 11.0168 | 89600 | 0.0001 | - | | 11.0230 | 89650 | 0.0002 | - | | 11.0291 | 89700 | 0.0 | - | | 11.0353 | 89750 | 0.0002 | - | | 11.0414 | 89800 | 0.0002 | - | 
| 11.0476 | 89850 | 0.0001 | - | | 11.0537 | 89900 | 0.0002 | - | | 11.0599 | 89950 | 0.0 | - | | 11.0660 | 90000 | 0.0001 | - | | 11.0722 | 90050 | 0.0001 | - | | 11.0783 | 90100 | 0.0001 | - | | 11.0845 | 90150 | 0.0002 | - | | 11.0906 | 90200 | 0.0003 | - | | 11.0968 | 90250 | 0.0 | - | | 11.1029 | 90300 | 0.0 | - | | 11.1091 | 90350 | 0.0003 | - | | 11.1152 | 90400 | 0.0002 | - | | 11.1214 | 90450 | 0.0 | - | | 11.1275 | 90500 | 0.0 | - | | 11.1337 | 90550 | 0.0 | - | | 11.1398 | 90600 | 0.0002 | - | | 11.1459 | 90650 | 0.0 | - | | 11.1521 | 90700 | 0.0001 | - | | 11.1582 | 90750 | 0.0 | - | | 11.1644 | 90800 | 0.0 | - | | 11.1705 | 90850 | 0.0 | - | | 11.1767 | 90900 | 0.0006 | - | | 11.1828 | 90950 | 0.0 | - | | 11.1890 | 91000 | 0.0007 | - | | 11.1951 | 91050 | 0.001 | - | | 11.2013 | 91100 | 0.0003 | - | | 11.2074 | 91150 | 0.0006 | - | | 11.2136 | 91200 | 0.0007 | - | | 11.2197 | 91250 | 0.0002 | - | | 11.2259 | 91300 | 0.0001 | - | | 11.2320 | 91350 | 0.0 | - | | 11.2382 | 91400 | 0.0002 | - | | 11.2443 | 91450 | 0.0 | - | | 11.2505 | 91500 | 0.0 | - | | 11.2566 | 91550 | 0.0 | - | | 11.2628 | 91600 | 0.0 | - | | 11.2689 | 91650 | 0.0 | - | | 11.2751 | 91700 | 0.0 | - | | 11.2812 | 91750 | 0.0001 | - | | 11.2873 | 91800 | 0.0002 | - | | 11.2935 | 91850 | 0.0 | - | | 11.2996 | 91900 | 0.0005 | - | | 11.3058 | 91950 | 0.0001 | - | | 11.3119 | 92000 | 0.0002 | - | | 11.3181 | 92050 | 0.0004 | - | | 11.3242 | 92100 | 0.0002 | - | | 11.3304 | 92150 | 0.0003 | - | | 11.3365 | 92200 | 0.0003 | - | | 11.3427 | 92250 | 0.0001 | - | | 11.3488 | 92300 | 0.0009 | - | | 11.3550 | 92350 | 0.0003 | - | | 11.3611 | 92400 | 0.0006 | - | | 11.3673 | 92450 | 0.0002 | - | | 11.3734 | 92500 | 0.0 | - | | 11.3796 | 92550 | 0.0002 | - | | 11.3857 | 92600 | 0.0004 | - | | 11.3919 | 92650 | 0.0006 | - | | 11.3980 | 92700 | 0.0007 | - | | 11.4042 | 92750 | 0.001 | - | | 11.4103 | 92800 | 0.0003 | - | | 11.4165 | 92850 | 0.0007 | - | | 11.4226 | 92900 | 0.0003 | - | | 11.4287 | 
92950 | 0.0003 | - | | 11.4349 | 93000 | 0.0004 | - | | 11.4410 | 93050 | 0.0008 | - | | 11.4472 | 93100 | 0.0005 | - | | 11.4533 | 93150 | 0.0004 | - | | 11.4595 | 93200 | 0.0003 | - | | 11.4656 | 93250 | 0.0002 | - | | 11.4718 | 93300 | 0.0006 | - | | 11.4779 | 93350 | 0.0 | - | | 11.4841 | 93400 | 0.0 | - | | 11.4902 | 93450 | 0.0 | - | | 11.4964 | 93500 | 0.0 | - | | 11.5025 | 93550 | 0.0 | - | | 11.5087 | 93600 | 0.0 | - | | 11.5148 | 93650 | 0.0 | - | | 11.5210 | 93700 | 0.0 | - | | 11.5271 | 93750 | 0.0 | - | | 11.5333 | 93800 | 0.0002 | - | | 11.5394 | 93850 | 0.0 | - | | 11.5456 | 93900 | 0.0 | - | | 11.5517 | 93950 | 0.0003 | - | | 11.5579 | 94000 | 0.0002 | - | | 11.5640 | 94050 | 0.0003 | - | | 11.5701 | 94100 | 0.0 | - | | 11.5763 | 94150 | 0.0002 | - | | 11.5824 | 94200 | 0.0 | - | | 11.5886 | 94250 | 0.0 | - | | 11.5947 | 94300 | 0.0 | - | | 11.6009 | 94350 | 0.0 | - | | 11.6070 | 94400 | 0.0 | - | | 11.6132 | 94450 | 0.0 | - | | 11.6193 | 94500 | 0.0 | - | | 11.6255 | 94550 | 0.0 | - | | 11.6316 | 94600 | 0.0 | - | | 11.6378 | 94650 | 0.0007 | - | | 11.6439 | 94700 | 0.0006 | - | | 11.6501 | 94750 | 0.0004 | - | | 11.6562 | 94800 | 0.0001 | - | | 11.6624 | 94850 | 0.0 | - | | 11.6685 | 94900 | 0.0 | - | | 11.6747 | 94950 | 0.0002 | - | | 11.6808 | 95000 | 0.0 | - | | 11.6870 | 95050 | 0.0 | - | | 11.6931 | 95100 | 0.0 | - | | 11.6992 | 95150 | 0.0 | - | | 11.7054 | 95200 | 0.0 | - | | 11.7115 | 95250 | 0.0 | - | | 11.7177 | 95300 | 0.0003 | - | | 11.7238 | 95350 | 0.0 | - | | 11.7300 | 95400 | 0.0 | - | | 11.7361 | 95450 | 0.0 | - | | 11.7423 | 95500 | 0.0003 | - | | 11.7484 | 95550 | 0.0 | - | | 11.7546 | 95600 | 0.0001 | - | | 11.7607 | 95650 | 0.0001 | - | | 11.7669 | 95700 | 0.0 | - | | 11.7730 | 95750 | 0.001 | - | | 11.7792 | 95800 | 0.0001 | - | | 11.7853 | 95850 | 0.0009 | - | | 11.7915 | 95900 | 0.0 | - | | 11.7976 | 95950 | 0.0001 | - | | 11.8038 | 96000 | 0.0001 | - | | 11.8099 | 96050 | 0.0001 | - | | 11.8161 | 96100 | 0.0005 | - | | 
11.8222 | 96150 | 0.0002 | - | | 11.8284 | 96200 | 0.0 | - | | 11.8345 | 96250 | 0.0 | - | | 11.8406 | 96300 | 0.0 | - | | 11.8468 | 96350 | 0.0 | - | | 11.8529 | 96400 | 0.0 | - | | 11.8591 | 96450 | 0.0 | - | | 11.8652 | 96500 | 0.0 | - | | 11.8714 | 96550 | 0.0 | - | | 11.8775 | 96600 | 0.0 | - | | 11.8837 | 96650 | 0.0 | - | | 11.8898 | 96700 | 0.0 | - | | 11.8960 | 96750 | 0.0002 | - | | 11.9021 | 96800 | 0.0 | - | | 11.9083 | 96850 | 0.0 | - | | 11.9144 | 96900 | 0.0 | - | | 11.9206 | 96950 | 0.0 | - | | 11.9267 | 97000 | 0.0002 | - | | 11.9329 | 97050 | 0.0001 | - | | 11.9390 | 97100 | 0.0 | - | | 11.9452 | 97150 | 0.0002 | - | | 11.9513 | 97200 | 0.0004 | - | | 11.9575 | 97250 | 0.0006 | - | | 11.9636 | 97300 | 0.0001 | - | | 11.9698 | 97350 | 0.0 | - | | 11.9759 | 97400 | 0.0 | - | | 11.9820 | 97450 | 0.0 | - | | 11.9882 | 97500 | 0.0 | - | | 11.9943 | 97550 | 0.0 | - | | 12.0005 | 97600 | 0.0 | - | | 12.0066 | 97650 | 0.0 | - | | 12.0128 | 97700 | 0.0001 | - | | 12.0189 | 97750 | 0.0025 | - | | 12.0251 | 97800 | 0.0017 | - | | 12.0312 | 97850 | 0.0013 | - | | 12.0374 | 97900 | 0.0015 | - | | 12.0435 | 97950 | 0.0009 | - | | 12.0497 | 98000 | 0.0002 | - | | 12.0558 | 98050 | 0.0005 | - | | 12.0620 | 98100 | 0.0005 | - | | 12.0681 | 98150 | 0.0 | - | | 12.0743 | 98200 | 0.0015 | - | | 12.0804 | 98250 | 0.0003 | - | | 12.0866 | 98300 | 0.0 | - | | 12.0927 | 98350 | 0.0 | - | | 12.0989 | 98400 | 0.0006 | - | | 12.1050 | 98450 | 0.0003 | - | | 12.1112 | 98500 | 0.0 | - | | 12.1173 | 98550 | 0.0 | - | | 12.1234 | 98600 | 0.0 | - | | 12.1296 | 98650 | 0.0002 | - | | 12.1357 | 98700 | 0.0002 | - | | 12.1419 | 98750 | 0.0002 | - | | 12.1480 | 98800 | 0.0 | - | | 12.1542 | 98850 | 0.0005 | - | | 12.1603 | 98900 | 0.0002 | - | | 12.1665 | 98950 | 0.0002 | - | | 12.1726 | 99000 | 0.0004 | - | | 12.1788 | 99050 | 0.0 | - | | 12.1849 | 99100 | 0.0004 | - | | 12.1911 | 99150 | 0.0002 | - | | 12.1972 | 99200 | 0.0002 | - | | 12.2034 | 99250 | 0.0002 | - | | 12.2095 | 
99300 | 0.0 | - | | 12.2157 | 99350 | 0.0003 | - | | 12.2218 | 99400 | 0.0002 | - | | 12.2280 | 99450 | 0.0003 | - | | 12.2341 | 99500 | 0.0002 | - | | 12.2403 | 99550 | 0.0005 | - | | 12.2464 | 99600 | 0.0003 | - | | 12.2526 | 99650 | 0.0005 | - | | 12.2587 | 99700 | 0.0002 | - | | 12.2648 | 99750 | 0.0 | - | | 12.2710 | 99800 | 0.0003 | - | | 12.2771 | 99850 | 0.0002 | - | | 12.2833 | 99900 | 0.0002 | - | | 12.2894 | 99950 | 0.0003 | - | | 12.2956 | 100000 | 0.0 | - | | 12.3017 | 100050 | 0.0002 | - | | 12.3079 | 100100 | 0.0 | - | | 12.3140 | 100150 | 0.0 | - | | 12.3202 | 100200 | 0.0002 | - | | 12.3263 | 100250 | 0.0 | - | | 12.3325 | 100300 | 0.0004 | - | | 12.3386 | 100350 | 0.0001 | - | | 12.3448 | 100400 | 0.0 | - | | 12.3509 | 100450 | 0.0 | - | | 12.3571 | 100500 | 0.0002 | - | | 12.3632 | 100550 | 0.0 | - | | 12.3694 | 100600 | 0.0002 | - | | 12.3755 | 100650 | 0.0 | - | | 12.3817 | 100700 | 0.0 | - | | 12.3878 | 100750 | 0.0 | - | | 12.3940 | 100800 | 0.0 | - | | 12.4001 | 100850 | 0.0 | - | | 12.4062 | 100900 | 0.0001 | - | | 12.4124 | 100950 | 0.0 | - | | 12.4185 | 101000 | 0.0003 | - | | 12.4247 | 101050 | 0.0 | - | | 12.4308 | 101100 | 0.0 | - | | 12.4370 | 101150 | 0.0002 | - | | 12.4431 | 101200 | 0.0001 | - | | 12.4493 | 101250 | 0.0002 | - | | 12.4554 | 101300 | 0.0002 | - | | 12.4616 | 101350 | 0.0002 | - | | 12.4677 | 101400 | 0.0004 | - | | 12.4739 | 101450 | 0.0 | - | | 12.4800 | 101500 | 0.0002 | - | | 12.4862 | 101550 | 0.0 | - | | 12.4923 | 101600 | 0.0 | - | | 12.4985 | 101650 | 0.0002 | - | | 12.5046 | 101700 | 0.0 | - | | 12.5108 | 101750 | 0.0002 | - | | 12.5169 | 101800 | 0.0002 | - | | 12.5231 | 101850 | 0.0002 | - | | 12.5292 | 101900 | 0.0001 | - | | 12.5353 | 101950 | 0.0002 | - | | 12.5415 | 102000 | 0.0002 | - | | 12.5476 | 102050 | 0.0 | - | | 12.5538 | 102100 | 0.0006 | - | | 12.5599 | 102150 | 0.0002 | - | | 12.5661 | 102200 | 0.0 | - | | 12.5722 | 102250 | 0.0 | - | | 12.5784 | 102300 | 0.0 | - | | 12.5845 | 102350 | 0.0 | 
- | | 12.5907 | 102400 | 0.0 | - | | 12.5968 | 102450 | 0.0 | - | | 12.6030 | 102500 | 0.0 | - | | 12.6091 | 102550 | 0.0 | - | | 12.6153 | 102600 | 0.0 | - | | 12.6214 | 102650 | 0.0001 | - | | 12.6276 | 102700 | 0.0003 | - | | 12.6337 | 102750 | 0.0002 | - | | 12.6399 | 102800 | 0.0 | - | | 12.6460 | 102850 | 0.0 | - | | 12.6522 | 102900 | 0.0 | - | | 12.6583 | 102950 | 0.0 | - | | 12.6645 | 103000 | 0.0002 | - | | 12.6706 | 103050 | 0.0003 | - | | 12.6767 | 103100 | 0.0003 | - | | 12.6829 | 103150 | 0.0007 | - | | 12.6890 | 103200 | 0.0008 | - | | 12.6952 | 103250 | 0.0005 | - | | 12.7013 | 103300 | 0.0005 | - | | 12.7075 | 103350 | 0.0002 | - | | 12.7136 | 103400 | 0.0 | - | | 12.7198 | 103450 | 0.0 | - | | 12.7259 | 103500 | 0.0 | - | | 12.7321 | 103550 | 0.0002 | - | | 12.7382 | 103600 | 0.0 | - | | 12.7444 | 103650 | 0.0 | - | | 12.7505 | 103700 | 0.0 | - | | 12.7567 | 103750 | 0.0 | - | | 12.7628 | 103800 | 0.0 | - | | 12.7690 | 103850 | 0.0 | - | | 12.7751 | 103900 | 0.0 | - | | 12.7813 | 103950 | 0.0002 | - | | 12.7874 | 104000 | 0.0 | - | | 12.7936 | 104050 | 0.0 | - | | 12.7997 | 104100 | 0.0 | - | | 12.8059 | 104150 | 0.0 | - | | 12.8120 | 104200 | 0.0 | - | | 12.8181 | 104250 | 0.0002 | - | | 12.8243 | 104300 | 0.0 | - | | 12.8304 | 104350 | 0.0 | - | | 12.8366 | 104400 | 0.0 | - | | 12.8427 | 104450 | 0.0 | - | | 12.8489 | 104500 | 0.0 | - | | 12.8550 | 104550 | 0.0001 | - | | 12.8612 | 104600 | 0.0001 | - | | 12.8673 | 104650 | 0.0 | - | | 12.8735 | 104700 | 0.0 | - | | 12.8796 | 104750 | 0.0001 | - | | 12.8858 | 104800 | 0.0 | - | | 12.8919 | 104850 | 0.0003 | - | | 12.8981 | 104900 | 0.0002 | - | | 12.9042 | 104950 | 0.0002 | - | | 12.9104 | 105000 | 0.0001 | - | | 12.9165 | 105050 | 0.0007 | - | | 12.9227 | 105100 | 0.0005 | - | | 12.9288 | 105150 | 0.0002 | - | | 12.9350 | 105200 | 0.0002 | - | | 12.9411 | 105250 | 0.0002 | - | | 12.9473 | 105300 | 0.0002 | - | | 12.9534 | 105350 | 0.0002 | - | | 12.9595 | 105400 | 0.0 | - | | 12.9657 | 105450 | 
0.0002 | - | | 12.9718 | 105500 | 0.0 | - | | 12.9780 | 105550 | 0.0 | - | | 12.9841 | 105600 | 0.0002 | - | | 12.9903 | 105650 | 0.0003 | - | | 12.9964 | 105700 | 0.0003 | - | | 13.0026 | 105750 | 0.0006 | - | | 13.0087 | 105800 | 0.0002 | - | | 13.0149 | 105850 | 0.0002 | - | | 13.0210 | 105900 | 0.0003 | - | | 13.0272 | 105950 | 0.0002 | - | | 13.0333 | 106000 | 0.0002 | - | | 13.0395 | 106050 | 0.0 | - | | 13.0456 | 106100 | 0.0002 | - | | 13.0518 | 106150 | 0.0002 | - | | 13.0579 | 106200 | 0.0 | - | | 13.0641 | 106250 | 0.0002 | - | | 13.0702 | 106300 | 0.0 | - | | 13.0764 | 106350 | 0.0001 | - | | 13.0825 | 106400 | 0.0002 | - | | 13.0887 | 106450 | 0.0002 | - | | 13.0948 | 106500 | 0.0003 | - | | 13.1009 | 106550 | 0.0 | - | | 13.1071 | 106600 | 0.0 | - | | 13.1132 | 106650 | 0.0 | - | | 13.1194 | 106700 | 0.0 | - | | 13.1255 | 106750 | 0.0 | - | | 13.1317 | 106800 | 0.0 | - | | 13.1378 | 106850 | 0.0001 | - | | 13.1440 | 106900 | 0.0 | - | | 13.1501 | 106950 | 0.0002 | - | | 13.1563 | 107000 | 0.0002 | - | | 13.1624 | 107050 | 0.0002 | - | | 13.1686 | 107100 | 0.0 | - | | 13.1747 | 107150 | 0.0006 | - | | 13.1809 | 107200 | 0.0 | - | | 13.1870 | 107250 | 0.0001 | - | | 13.1932 | 107300 | 0.0 | - | | 13.1993 | 107350 | 0.0002 | - | | 13.2055 | 107400 | 0.0001 | - | | 13.2116 | 107450 | 0.0002 | - | | 13.2178 | 107500 | 0.0002 | - | | 13.2239 | 107550 | 0.0005 | - | | 13.2301 | 107600 | 0.0003 | - | | 13.2362 | 107650 | 0.0001 | - | | 13.2423 | 107700 | 0.0001 | - | | 13.2485 | 107750 | 0.0 | - | | 13.2546 | 107800 | 0.0003 | - | | 13.2608 | 107850 | 0.0004 | - | | 13.2669 | 107900 | 0.0006 | - | | 13.2731 | 107950 | 0.0003 | - | | 13.2792 | 108000 | 0.0002 | - | | 13.2854 | 108050 | 0.0005 | - | | 13.2915 | 108100 | 0.0008 | - | | 13.2977 | 108150 | 0.0002 | - | | 13.3038 | 108200 | 0.0005 | - | | 13.3100 | 108250 | 0.0 | - | | 13.3161 | 108300 | 0.0 | - | | 13.3223 | 108350 | 0.0003 | - | | 13.3284 | 108400 | 0.0002 | - | | 13.3346 | 108450 | 0.0 | - | | 
13.3407 | 108500 | 0.0 | - | | 13.3469 | 108550 | 0.0001 | - | | 13.3530 | 108600 | 0.0001 | - | | 13.3592 | 108650 | 0.0 | - | | 13.3653 | 108700 | 0.0 | - | | 13.3714 | 108750 | 0.0 | - | | 13.3776 | 108800 | 0.0 | - | | 13.3837 | 108850 | 0.0 | - | | 13.3899 | 108900 | 0.0 | - | | 13.3960 | 108950 | 0.0002 | - | | 13.4022 | 109000 | 0.0002 | - | | 13.4083 | 109050 | 0.0002 | - | | 13.4145 | 109100 | 0.0 | - | | 13.4206 | 109150 | 0.0 | - | | 13.4268 | 109200 | 0.0005 | - | | 13.4329 | 109250 | 0.0003 | - | | 13.4391 | 109300 | 0.0007 | - | | 13.4452 | 109350 | 0.0003 | - | | 13.4514 | 109400 | 0.0006 | - | | 13.4575 | 109450 | 0.0 | - | | 13.4637 | 109500 | 0.0 | - | | 13.4698 | 109550 | 0.0 | - | | 13.4760 | 109600 | 0.0001 | - | | 13.4821 | 109650 | 0.0008 | - | | 13.4883 | 109700 | 0.0004 | - | | 13.4944 | 109750 | 0.0002 | - | | 13.5006 | 109800 | 0.0006 | - | | 13.5067 | 109850 | 0.0005 | - | | 13.5128 | 109900 | 0.0001 | - | | 13.5190 | 109950 | 0.0001 | - | | 13.5251 | 110000 | 0.0001 | - | | 13.5313 | 110050 | 0.0005 | - | | 13.5374 | 110100 | 0.0006 | - | | 13.5436 | 110150 | 0.0003 | - | | 13.5497 | 110200 | 0.0002 | - | | 13.5559 | 110250 | 0.0 | - | | 13.5620 | 110300 | 0.0001 | - | | 13.5682 | 110350 | 0.0 | - | | 13.5743 | 110400 | 0.0 | - | | 13.5805 | 110450 | 0.0 | - | | 13.5866 | 110500 | 0.0 | - | | 13.5928 | 110550 | 0.0 | - | | 13.5989 | 110600 | 0.0 | - | | 13.6051 | 110650 | 0.0001 | - | | 13.6112 | 110700 | 0.0 | - | | 13.6174 | 110750 | 0.0 | - | | 13.6235 | 110800 | 0.0 | - | | 13.6297 | 110850 | 0.0001 | - | | 13.6358 | 110900 | 0.0 | - | | 13.6420 | 110950 | 0.0001 | - | | 13.6481 | 111000 | 0.0003 | - | | 13.6542 | 111050 | 0.0 | - | | 13.6604 | 111100 | 0.0 | - | | 13.6665 | 111150 | 0.0001 | - | | 13.6727 | 111200 | 0.0006 | - | | 13.6788 | 111250 | 0.0005 | - | | 13.6850 | 111300 | 0.0004 | - | | 13.6911 | 111350 | 0.0004 | - | | 13.6973 | 111400 | 0.0 | - | | 13.7034 | 111450 | 0.0002 | - | | 13.7096 | 111500 | 0.0008 | - | | 
13.7157 | 111550 | 0.0008 | - | | 13.7219 | 111600 | 0.0005 | - | | 13.7280 | 111650 | 0.0 | - | | 13.7342 | 111700 | 0.0002 | - | | 13.7403 | 111750 | 0.0004 | - | | 13.7465 | 111800 | 0.0 | - | | 13.7526 | 111850 | 0.0005 | - | | 13.7588 | 111900 | 0.0002 | - | | 13.7649 | 111950 | 0.0 | - | | 13.7711 | 112000 | 0.0006 | - | | 13.7772 | 112050 | 0.0002 | - | | 13.7834 | 112100 | 0.0001 | - | | 13.7895 | 112150 | 0.0003 | - | | 13.7956 | 112200 | 0.0001 | - | | 13.8018 | 112250 | 0.0001 | - | | 13.8079 | 112300 | 0.0001 | - | | 13.8141 | 112350 | 0.0002 | - | | 13.8202 | 112400 | 0.0 | - | | 13.8264 | 112450 | 0.0 | - | | 13.8325 | 112500 | 0.0 | - | | 13.8387 | 112550 | 0.0 | - | | 13.8448 | 112600 | 0.0001 | - | | 13.8510 | 112650 | 0.0 | - | | 13.8571 | 112700 | 0.0001 | - | | 13.8633 | 112750 | 0.0002 | - | | 13.8694 | 112800 | 0.0001 | - | | 13.8756 | 112850 | 0.0002 | - | | 13.8817 | 112900 | 0.0 | - | | 13.8879 | 112950 | 0.0001 | - | | 13.8940 | 113000 | 0.0 | - | | 13.9002 | 113050 | 0.0001 | - | | 13.9063 | 113100 | 0.0001 | - | | 13.9125 | 113150 | 0.0001 | - | | 13.9186 | 113200 | 0.0001 | - | | 13.9248 | 113250 | 0.0 | - | | 13.9309 | 113300 | 0.0002 | - | | 13.9370 | 113350 | 0.0 | - | | 13.9432 | 113400 | 0.0001 | - | | 13.9493 | 113450 | 0.0 | - | | 13.9555 | 113500 | 0.0001 | - | | 13.9616 | 113550 | 0.0 | - | | 13.9678 | 113600 | 0.0 | - | | 13.9739 | 113650 | 0.0001 | - | | 13.9801 | 113700 | 0.0 | - | | 13.9862 | 113750 | 0.0 | - | | 13.9924 | 113800 | 0.0 | - | | 13.9985 | 113850 | 0.0002 | - | | 14.0047 | 113900 | 0.0002 | - | | 14.0108 | 113950 | 0.0 | - | | 14.0170 | 114000 | 0.0 | - | | 14.0231 | 114050 | 0.0001 | - | | 14.0293 | 114100 | 0.0001 | - | | 14.0354 | 114150 | 0.0006 | - | | 14.0416 | 114200 | 0.0005 | - | | 14.0477 | 114250 | 0.0007 | - | | 14.0539 | 114300 | 0.001 | - | | 14.0600 | 114350 | 0.0005 | - | | 14.0662 | 114400 | 0.0001 | - | | 14.0723 | 114450 | 0.0003 | - | | 14.0784 | 114500 | 0.0003 | - | | 14.0846 | 114550 | 
0.0 | - | | 14.0907 | 114600 | 0.0003 | - | | 14.0969 | 114650 | 0.0 | - | | 14.1030 | 114700 | 0.0 | - | | 14.1092 | 114750 | 0.0 | - | | 14.1153 | 114800 | 0.0 | - | | 14.1215 | 114850 | 0.0 | - | | 14.1276 | 114900 | 0.0 | - | | 14.1338 | 114950 | 0.0 | - | | 14.1399 | 115000 | 0.0002 | - | | 14.1461 | 115050 | 0.0 | - | | 14.1522 | 115100 | 0.0 | - | | 14.1584 | 115150 | 0.0 | - | | 14.1645 | 115200 | 0.0 | - | | 14.1707 | 115250 | 0.0 | - | | 14.1768 | 115300 | 0.0 | - | | 14.1830 | 115350 | 0.0 | - | | 14.1891 | 115400 | 0.0 | - | | 14.1953 | 115450 | 0.0002 | - | | 14.2014 | 115500 | 0.0 | - | | 14.2075 | 115550 | 0.0 | - | | 14.2137 | 115600 | 0.0 | - | | 14.2198 | 115650 | 0.0001 | - | | 14.2260 | 115700 | 0.0 | - | | 14.2321 | 115750 | 0.0002 | - | | 14.2383 | 115800 | 0.0002 | - | | 14.2444 | 115850 | 0.0001 | - | | 14.2506 | 115900 | 0.0002 | - | | 14.2567 | 115950 | 0.0001 | - | | 14.2629 | 116000 | 0.0 | - | | 14.2690 | 116050 | 0.0002 | - | | 14.2752 | 116100 | 0.0 | - | | 14.2813 | 116150 | 0.0002 | - | | 14.2875 | 116200 | 0.0001 | - | | 14.2936 | 116250 | 0.0 | - | | 14.2998 | 116300 | 0.0007 | - | | 14.3059 | 116350 | 0.0002 | - | | 14.3121 | 116400 | 0.0003 | - | | 14.3182 | 116450 | 0.0 | - | | 14.3244 | 116500 | 0.0004 | - | | 14.3305 | 116550 | 0.0 | - | | 14.3367 | 116600 | 0.0002 | - | | 14.3428 | 116650 | 0.0004 | - | | 14.3489 | 116700 | 0.0005 | - | | 14.3551 | 116750 | 0.0002 | - | | 14.3612 | 116800 | 0.0003 | - | | 14.3674 | 116850 | 0.0 | - | | 14.3735 | 116900 | 0.0001 | - | | 14.3797 | 116950 | 0.0 | - | | 14.3858 | 117000 | 0.0002 | - | | 14.3920 | 117050 | 0.0002 | - | | 14.3981 | 117100 | 0.0 | - | | 14.4043 | 117150 | 0.0 | - | | 14.4104 | 117200 | 0.0 | - | | 14.4166 | 117250 | 0.0 | - | | 14.4227 | 117300 | 0.0002 | - | | 14.4289 | 117350 | 0.0 | - | | 14.4350 | 117400 | 0.0 | - | | 14.4412 | 117450 | 0.0002 | - | | 14.4473 | 117500 | 0.0 | - | | 14.4535 | 117550 | 0.0 | - | | 14.4596 | 117600 | 0.0001 | - | | 14.4658 | 
117650 | 0.0002 | - | | 14.4719 | 117700 | 0.0 | - | | 14.4781 | 117750 | 0.0002 | - | | 14.4842 | 117800 | 0.0 | - | | 14.4903 | 117850 | 0.0 | - | | 14.4965 | 117900 | 0.0 | - | | 14.5026 | 117950 | 0.0 | - | | 14.5088 | 118000 | 0.0 | - | | 14.5149 | 118050 | 0.0001 | - | | 14.5211 | 118100 | 0.0 | - | | 14.5272 | 118150 | 0.0 | - | | 14.5334 | 118200 | 0.0 | - | | 14.5395 | 118250 | 0.0 | - | | 14.5457 | 118300 | 0.0 | - | | 14.5518 | 118350 | 0.0 | - | | 14.5580 | 118400 | 0.0 | - | | 14.5641 | 118450 | 0.0 | - | | 14.5703 | 118500 | 0.0 | - | | 14.5764 | 118550 | 0.0 | - | | 14.5826 | 118600 | 0.0 | - | | 14.5887 | 118650 | 0.0002 | - | | 14.5949 | 118700 | 0.0 | - | | 14.6010 | 118750 | 0.0 | - | | 14.6072 | 118800 | 0.0 | - | | 14.6133 | 118850 | 0.0 | - | | 14.6195 | 118900 | 0.0 | - | | 14.6256 | 118950 | 0.0 | - | | 14.6317 | 119000 | 0.0 | - | | 14.6379 | 119050 | 0.0 | - | | 14.6440 | 119100 | 0.0 | - | | 14.6502 | 119150 | 0.0 | - | | 14.6563 | 119200 | 0.0003 | - | | 14.6625 | 119250 | 0.0 | - | | 14.6686 | 119300 | 0.0 | - | | 14.6748 | 119350 | 0.0 | - | | 14.6809 | 119400 | 0.0002 | - | | 14.6871 | 119450 | 0.0 | - | | 14.6932 | 119500 | 0.0 | - | | 14.6994 | 119550 | 0.0 | - | | 14.7055 | 119600 | 0.0 | - | | 14.7117 | 119650 | 0.0 | - | | 14.7178 | 119700 | 0.0 | - | | 14.7240 | 119750 | 0.0001 | - | | 14.7301 | 119800 | 0.0001 | - | | 14.7363 | 119850 | 0.0001 | - | | 14.7424 | 119900 | 0.0001 | - | | 14.7486 | 119950 | 0.0003 | - | | 14.7547 | 120000 | 0.0002 | - | | 14.7609 | 120050 | 0.0003 | - | | 14.7670 | 120100 | 0.0 | - | | 14.7731 | 120150 | 0.0 | - | | 14.7793 | 120200 | 0.0003 | - | | 14.7854 | 120250 | 0.0 | - | | 14.7916 | 120300 | 0.0 | - | | 14.7977 | 120350 | 0.0004 | - | | 14.8039 | 120400 | 0.0 | - | | 14.8100 | 120450 | 0.0 | - | | 14.8162 | 120500 | 0.0003 | - | | 14.8223 | 120550 | 0.0002 | - | | 14.8285 | 120600 | 0.0001 | - | | 14.8346 | 120650 | 0.0 | - | | 14.8408 | 120700 | 0.0002 | - | | 14.8469 | 120750 | 0.0 | - | | 
14.8531 | 120800 | 0.0 | - | | 14.8592 | 120850 | 0.0001 | - | | 14.8654 | 120900 | 0.0003 | - | | 14.8715 | 120950 | 0.0002 | - | | 14.8777 | 121000 | 0.0 | - | | 14.8838 | 121050 | 0.0 | - | | 14.8900 | 121100 | 0.0 | - | | 14.8961 | 121150 | 0.0 | - | | 14.9023 | 121200 | 0.0003 | - | | 14.9084 | 121250 | 0.0004 | - | | 14.9145 | 121300 | 0.0002 | - | | 14.9207 | 121350 | 0.0005 | - | | 14.9268 | 121400 | 0.0 | - | | 14.9330 | 121450 | 0.0001 | - | | 14.9391 | 121500 | 0.0001 | - | | 14.9453 | 121550 | 0.0002 | - | | 14.9514 | 121600 | 0.0 | - | | 14.9576 | 121650 | 0.0 | - | | 14.9637 | 121700 | 0.0002 | - | | 14.9699 | 121750 | 0.0 | - | | 14.9760 | 121800 | 0.0 | - | | 14.9822 | 121850 | 0.0 | - | | 14.9883 | 121900 | 0.0 | - | | 14.9945 | 121950 | 0.0 | - | | 15.0006 | 122000 | 0.0 | - | | 15.0068 | 122050 | 0.0003 | - | | 15.0129 | 122100 | 0.0008 | - | | 15.0191 | 122150 | 0.0003 | - | | 15.0252 | 122200 | 0.0 | - | | 15.0314 | 122250 | 0.0002 | - | | 15.0375 | 122300 | 0.0003 | - | | 15.0436 | 122350 | 0.0002 | - | | 15.0498 | 122400 | 0.0003 | - | | 15.0559 | 122450 | 0.0002 | - | | 15.0621 | 122500 | 0.0003 | - | | 15.0682 | 122550 | 0.0003 | - | | 15.0744 | 122600 | 0.0002 | - | | 15.0805 | 122650 | 0.0005 | - | | 15.0867 | 122700 | 0.0002 | - | | 15.0928 | 122750 | 0.0006 | - | | 15.0990 | 122800 | 0.0002 | - | | 15.1051 | 122850 | 0.0 | - | | 15.1113 | 122900 | 0.0003 | - | | 15.1174 | 122950 | 0.0002 | - | | 15.1236 | 123000 | 0.0002 | - | | 15.1297 | 123050 | 0.0002 | - | | 15.1359 | 123100 | 0.0002 | - | | 15.1420 | 123150 | 0.0 | - | | 15.1482 | 123200 | 0.0005 | - | | 15.1543 | 123250 | 0.0 | - | | 15.1605 | 123300 | 0.0 | - | | 15.1666 | 123350 | 0.0003 | - | | 15.1728 | 123400 | 0.0002 | - | | 15.1789 | 123450 | 0.0002 | - | | 15.1850 | 123500 | 0.0002 | - | | 15.1912 | 123550 | 0.0003 | - | | 15.1973 | 123600 | 0.0 | - | | 15.2035 | 123650 | 0.0 | - | | 15.2096 | 123700 | 0.0 | - | | 15.2158 | 123750 | 0.0 | - | | 15.2219 | 123800 | 0.0 | - | 
| 15.2281 | 123850 | 0.0 | - | | 15.2342 | 123900 | 0.0 | - | | 15.2404 | 123950 | 0.0 | - | | 15.2465 | 124000 | 0.0003 | - | | 15.2527 | 124050 | 0.0002 | - | | 15.2588 | 124100 | 0.0003 | - | | 15.2650 | 124150 | 0.0 | - | | 15.2711 | 124200 | 0.0 | - | | 15.2773 | 124250 | 0.0003 | - | | 15.2834 | 124300 | 0.0013 | - | | 15.2896 | 124350 | 0.0031 | - | | 15.2957 | 124400 | 0.0023 | - | | 15.3019 | 124450 | 0.0024 | - | | 15.3080 | 124500 | 0.0019 | - | | 15.3142 | 124550 | 0.0017 | - | | 15.3203 | 124600 | 0.0019 | - | | 15.3264 | 124650 | 0.0002 | - | | 15.3326 | 124700 | 0.0002 | - | | 15.3387 | 124750 | 0.001 | - | | 15.3449 | 124800 | 0.0005 | - | | 15.3510 | 124850 | 0.0003 | - | | 15.3572 | 124900 | 0.0002 | - | | 15.3633 | 124950 | 0.0005 | - | | 15.3695 | 125000 | 0.0003 | - | | 15.3756 | 125050 | 0.0005 | - | | 15.3818 | 125100 | 0.0002 | - | | 15.3879 | 125150 | 0.0 | - | | 15.3941 | 125200 | 0.0003 | - | | 15.4002 | 125250 | 0.0005 | - | | 15.4064 | 125300 | 0.0 | - | | 15.4125 | 125350 | 0.0 | - | | 15.4187 | 125400 | 0.0 | - | | 15.4248 | 125450 | 0.0 | - | | 15.4310 | 125500 | 0.0 | - | | 15.4371 | 125550 | 0.0 | - | | 15.4433 | 125600 | 0.0 | - | | 15.4494 | 125650 | 0.0 | - | | 15.4556 | 125700 | 0.0 | - | | 15.4617 | 125750 | 0.0002 | - | | 15.4678 | 125800 | 0.0003 | - | | 15.4740 | 125850 | 0.0 | - | | 15.4801 | 125900 | 0.0 | - | | 15.4863 | 125950 | 0.0002 | - | | 15.4924 | 126000 | 0.0001 | - | | 15.4986 | 126050 | 0.0 | - | | 15.5047 | 126100 | 0.0002 | - | | 15.5109 | 126150 | 0.0 | - | | 15.5170 | 126200 | 0.0 | - | | 15.5232 | 126250 | 0.0 | - | | 15.5293 | 126300 | 0.0 | - | | 15.5355 | 126350 | 0.0 | - | | 15.5416 | 126400 | 0.0 | - | | 15.5478 | 126450 | 0.0 | - | | 15.5539 | 126500 | 0.0 | - | | 15.5601 | 126550 | 0.0002 | - | | 15.5662 | 126600 | 0.0 | - | | 15.5724 | 126650 | 0.0 | - | | 15.5785 | 126700 | 0.0 | - | | 15.5847 | 126750 | 0.0 | - | | 15.5908 | 126800 | 0.0 | - | | 15.5970 | 126850 | 0.0 | - | | 15.6031 | 126900 | 
0.0 | - | | 15.6092 | 126950 | 0.0 | - | | 15.6154 | 127000 | 0.0 | - | | 15.6215 | 127050 | 0.0 | - | | 15.6277 | 127100 | 0.0 | - | | 15.6338 | 127150 | 0.0 | - | | 15.6400 | 127200 | 0.0 | - | | 15.6461 | 127250 | 0.0 | - | | 15.6523 | 127300 | 0.0 | - | | 15.6584 | 127350 | 0.0 | - | | 15.6646 | 127400 | 0.0 | - | | 15.6707 | 127450 | 0.0 | - | | 15.6769 | 127500 | 0.0002 | - | | 15.6830 | 127550 | 0.0 | - | | 15.6892 | 127600 | 0.0 | - | | 15.6953 | 127650 | 0.0 | - | | 15.7015 | 127700 | 0.0 | - | | 15.7076 | 127750 | 0.0 | - | | 15.7138 | 127800 | 0.0 | - | | 15.7199 | 127850 | 0.0 | - | | 15.7261 | 127900 | 0.0 | - | | 15.7322 | 127950 | 0.0 | - | | 15.7383 | 128000 | 0.0003 | - | | 15.7445 | 128050 | 0.0 | - | | 15.7506 | 128100 | 0.0003 | - | | 15.7568 | 128150 | 0.0 | - | | 15.7629 | 128200 | 0.0 | - | | 15.7691 | 128250 | 0.0001 | - | | 15.7752 | 128300 | 0.0003 | - | | 15.7814 | 128350 | 0.0 | - | | 15.7875 | 128400 | 0.0 | - | | 15.7937 | 128450 | 0.0 | - | | 15.7998 | 128500 | 0.0 | - | | 15.8060 | 128550 | 0.0 | - | | 15.8121 | 128600 | 0.0 | - | | 15.8183 | 128650 | 0.0 | - | | 15.8244 | 128700 | 0.0002 | - | | 15.8306 | 128750 | 0.0 | - | | 15.8367 | 128800 | 0.0 | - | | 15.8429 | 128850 | 0.0 | - | | 15.8490 | 128900 | 0.0 | - | | 15.8552 | 128950 | 0.0 | - | | 15.8613 | 129000 | 0.0 | - | | 15.8675 | 129050 | 0.0 | - | | 15.8736 | 129100 | 0.0 | - | | 15.8797 | 129150 | 0.0 | - | | 15.8859 | 129200 | 0.0 | - | | 15.8920 | 129250 | 0.0 | - | | 15.8982 | 129300 | 0.0001 | - | | 15.9043 | 129350 | 0.0 | - | | 15.9105 | 129400 | 0.0 | - | | 15.9166 | 129450 | 0.0 | - | | 15.9228 | 129500 | 0.0 | - | | 15.9289 | 129550 | 0.0 | - | | 15.9351 | 129600 | 0.0 | - | | 15.9412 | 129650 | 0.0 | - | | 15.9474 | 129700 | 0.0 | - | | 15.9535 | 129750 | 0.0 | - | | 15.9597 | 129800 | 0.0 | - | | 15.9658 | 129850 | 0.0 | - | | 15.9720 | 129900 | 0.0 | - | | 15.9781 | 129950 | 0.0 | - | | 15.9843 | 130000 | 0.0 | - | | 15.9904 | 130050 | 0.0 | - | | 15.9966 | 
130100 | 0.0 | - | | 16.0027 | 130150 | 0.0 | - | | 16.0089 | 130200 | 0.0 | - | | 16.0150 | 130250 | 0.0 | - | | 16.0211 | 130300 | 0.0 | - | | 16.0273 | 130350 | 0.0 | - | | 16.0334 | 130400 | 0.0 | - | | 16.0396 | 130450 | 0.0 | - | | 16.0457 | 130500 | 0.0 | - | | 16.0519 | 130550 | 0.0 | - | | 16.0580 | 130600 | 0.0 | - | | 16.0642 | 130650 | 0.0 | - | | 16.0703 | 130700 | 0.0 | - | | 16.0765 | 130750 | 0.0003 | - | | 16.0826 | 130800 | 0.0002 | - | | 16.0888 | 130850 | 0.0002 | - | | 16.0949 | 130900 | 0.0 | - | | 16.1011 | 130950 | 0.0002 | - | | 16.1072 | 131000 | 0.0002 | - | | 16.1134 | 131050 | 0.0003 | - | | 16.1195 | 131100 | 0.0008 | - | | 16.1257 | 131150 | 0.0005 | - | | 16.1318 | 131200 | 0.0002 | - | | 16.1380 | 131250 | 0.0 | - | | 16.1441 | 131300 | 0.0006 | - | | 16.1503 | 131350 | 0.001 | - | | 16.1564 | 131400 | 0.0002 | - | | 16.1625 | 131450 | 0.0002 | - | | 16.1687 | 131500 | 0.0002 | - | | 16.1748 | 131550 | 0.0 | - | | 16.1810 | 131600 | 0.0002 | - | | 16.1871 | 131650 | 0.0002 | - | | 16.1933 | 131700 | 0.0005 | - | | 16.1994 | 131750 | 0.0002 | - | | 16.2056 | 131800 | 0.0002 | - | | 16.2117 | 131850 | 0.0 | - | | 16.2179 | 131900 | 0.0004 | - | | 16.2240 | 131950 | 0.0 | - | | 16.2302 | 132000 | 0.0 | - | | 16.2363 | 132050 | 0.0 | - | | 16.2425 | 132100 | 0.0 | - | | 16.2486 | 132150 | 0.0 | - | | 16.2548 | 132200 | 0.0001 | - | | 16.2609 | 132250 | 0.0001 | - | | 16.2671 | 132300 | 0.0 | - | | 16.2732 | 132350 | 0.0 | - | | 16.2794 | 132400 | 0.0001 | - | | 16.2855 | 132450 | 0.0 | - | | 16.2917 | 132500 | 0.0001 | - | | 16.2978 | 132550 | 0.0 | - | | 16.3039 | 132600 | 0.0 | - | | 16.3101 | 132650 | 0.0 | - | | 16.3162 | 132700 | 0.0 | - | | 16.3224 | 132750 | 0.0 | - | | 16.3285 | 132800 | 0.0002 | - | | 16.3347 | 132850 | 0.0 | - | | 16.3408 | 132900 | 0.0 | - | | 16.3470 | 132950 | 0.0002 | - | | 16.3531 | 133000 | 0.0002 | - | | 16.3593 | 133050 | 0.0001 | - | | 16.3654 | 133100 | 0.0 | - | | 16.3716 | 133150 | 0.0 | - | | 
16.3777 | 133200 | 0.0002 | - | | 16.3839 | 133250 | 0.0002 | - | | 16.3900 | 133300 | 0.0 | - | | 16.3962 | 133350 | 0.0 | - | | 16.4023 | 133400 | 0.0 | - | | 16.4085 | 133450 | 0.0002 | - | | 16.4146 | 133500 | 0.0 | - | | 16.4208 | 133550 | 0.0 | - | | 16.4269 | 133600 | 0.0001 | - | | 16.4331 | 133650 | 0.0002 | - | | 16.4392 | 133700 | 0.0 | - | | 16.4453 | 133750 | 0.0003 | - | | 16.4515 | 133800 | 0.0 | - | | 16.4576 | 133850 | 0.0 | - | | 16.4638 | 133900 | 0.0 | - | | 16.4699 | 133950 | 0.0 | - | | 16.4761 | 134000 | 0.0 | - | | 16.4822 | 134050 | 0.0 | - | | 16.4884 | 134100 | 0.0 | - | | 16.4945 | 134150 | 0.0002 | - | | 16.5007 | 134200 | 0.0003 | - | | 16.5068 | 134250 | 0.0 | - | | 16.5130 | 134300 | 0.0 | - | | 16.5191 | 134350 | 0.0 | - | | 16.5253 | 134400 | 0.0 | - | | 16.5314 | 134450 | 0.0 | - | | 16.5376 | 134500 | 0.0 | - | | 16.5437 | 134550 | 0.0 | - | | 16.5499 | 134600 | 0.0 | - | | 16.5560 | 134650 | 0.0 | - | | 16.5622 | 134700 | 0.0 | - | | 16.5683 | 134750 | 0.0 | - | | 16.5744 | 134800 | 0.0 | - | | 16.5806 | 134850 | 0.0 | - | | 16.5867 | 134900 | 0.0 | - | | 16.5929 | 134950 | 0.0 | - | | 16.5990 | 135000 | 0.0003 | - | | 16.6052 | 135050 | 0.0 | - | | 16.6113 | 135100 | 0.0 | - | | 16.6175 | 135150 | 0.0 | - | | 16.6236 | 135200 | 0.0002 | - | | 16.6298 | 135250 | 0.0001 | - | | 16.6359 | 135300 | 0.0002 | - | | 16.6421 | 135350 | 0.0 | - | | 16.6482 | 135400 | 0.0 | - | | 16.6544 | 135450 | 0.0002 | - | | 16.6605 | 135500 | 0.0 | - | | 16.6667 | 135550 | 0.0002 | - | | 16.6728 | 135600 | 0.0001 | - | | 16.6790 | 135650 | 0.0 | - | | 16.6851 | 135700 | 0.0 | - | | 16.6913 | 135750 | 0.0003 | - | | 16.6974 | 135800 | 0.0 | - | | 16.7036 | 135850 | 0.0 | - | | 16.7097 | 135900 | 0.0 | - | | 16.7158 | 135950 | 0.0 | - | | 16.7220 | 136000 | 0.0 | - | | 16.7281 | 136050 | 0.0 | - | | 16.7343 | 136100 | 0.0 | - | | 16.7404 | 136150 | 0.0 | - | | 16.7466 | 136200 | 0.0 | - | | 16.7527 | 136250 | 0.0 | - | | 16.7589 | 136300 | 0.0001 | - 
| | 16.7650 | 136350 | 0.0002 | - | | 16.7712 | 136400 | 0.0005 | - | | 16.7773 | 136450 | 0.0 | - | | 16.7835 | 136500 | 0.0002 | - | | 16.7896 | 136550 | 0.0 | - | | 16.7958 | 136600 | 0.0001 | - | | 16.8019 | 136650 | 0.0 | - | | 16.8081 | 136700 | 0.0 | - | | 16.8142 | 136750 | 0.0 | - | | 16.8204 | 136800 | 0.0 | - | | 16.8265 | 136850 | 0.0 | - | | 16.8327 | 136900 | 0.0 | - | | 16.8388 | 136950 | 0.0 | - | | 16.8450 | 137000 | 0.0 | - | | 16.8511 | 137050 | 0.0 | - | | 16.8572 | 137100 | 0.0001 | - | | 16.8634 | 137150 | 0.0001 | - | | 16.8695 | 137200 | 0.0003 | - | | 16.8757 | 137250 | 0.0 | - | | 16.8818 | 137300 | 0.0002 | - | | 16.8880 | 137350 | 0.0001 | - | | 16.8941 | 137400 | 0.0 | - | | 16.9003 | 137450 | 0.0 | - | | 16.9064 | 137500 | 0.0 | - | | 16.9126 | 137550 | 0.0002 | - | | 16.9187 | 137600 | 0.0 | - | | 16.9249 | 137650 | 0.0 | - | | 16.9310 | 137700 | 0.0 | - | | 16.9372 | 137750 | 0.0 | - | | 16.9433 | 137800 | 0.0 | - | | 16.9495 | 137850 | 0.0 | - | | 16.9556 | 137900 | 0.0 | - | | 16.9618 | 137950 | 0.0002 | - | | 16.9679 | 138000 | 0.0 | - | | 16.9741 | 138050 | 0.0 | - | | 16.9802 | 138100 | 0.0 | - | | 16.9864 | 138150 | 0.0 | - | | 16.9925 | 138200 | 0.0 | - | | 16.9986 | 138250 | 0.0 | - | | 17.0048 | 138300 | 0.0 | - | | 17.0109 | 138350 | 0.0 | - | | 17.0171 | 138400 | 0.0 | - | | 17.0232 | 138450 | 0.0 | - | | 17.0294 | 138500 | 0.0 | - | | 17.0355 | 138550 | 0.0 | - | | 17.0417 | 138600 | 0.0 | - | | 17.0478 | 138650 | 0.0 | - | | 17.0540 | 138700 | 0.0003 | - | | 17.0601 | 138750 | 0.0003 | - | | 17.0663 | 138800 | 0.0002 | - | | 17.0724 | 138850 | 0.0 | - | | 17.0786 | 138900 | 0.0002 | - | | 17.0847 | 138950 | 0.0 | - | | 17.0909 | 139000 | 0.0 | - | | 17.0970 | 139050 | 0.0001 | - | | 17.1032 | 139100 | 0.0 | - | | 17.1093 | 139150 | 0.0 | - | | 17.1155 | 139200 | 0.0 | - | | 17.1216 | 139250 | 0.0001 | - | | 17.1278 | 139300 | 0.0 | - | | 17.1339 | 139350 | 0.0001 | - | | 17.1400 | 139400 | 0.0 | - | | 17.1462 | 139450 | 
0.0 | - | | 17.1523 | 139500 | 0.0002 | - | | 17.1585 | 139550 | 0.0 | - | | 17.1646 | 139600 | 0.0002 | - | | 17.1708 | 139650 | 0.0 | - | | 17.1769 | 139700 | 0.0 | - | | 17.1831 | 139750 | 0.0 | - | | 17.1892 | 139800 | 0.0 | - | | 17.1954 | 139850 | 0.0002 | - | | 17.2015 | 139900 | 0.0 | - | | 17.2077 | 139950 | 0.0002 | - | | 17.2138 | 140000 | 0.0002 | - | | 17.2200 | 140050 | 0.0 | - | | 17.2261 | 140100 | 0.0 | - | | 17.2323 | 140150 | 0.0002 | - | | 17.2384 | 140200 | 0.0006 | - | | 17.2446 | 140250 | 0.0002 | - | | 17.2507 | 140300 | 0.0003 | - | | 17.2569 | 140350 | 0.0 | - | | 17.2630 | 140400 | 0.0002 | - | | 17.2692 | 140450 | 0.0002 | - | | 17.2753 | 140500 | 0.0 | - | | 17.2814 | 140550 | 0.0 | - | | 17.2876 | 140600 | 0.0002 | - | | 17.2937 | 140650 | 0.0 | - | | 17.2999 | 140700 | 0.0 | - | | 17.3060 | 140750 | 0.0 | - | | 17.3122 | 140800 | 0.0 | - | | 17.3183 | 140850 | 0.0 | - | | 17.3245 | 140900 | 0.0 | - | | 17.3306 | 140950 | 0.0 | - | | 17.3368 | 141000 | 0.0 | - | | 17.3429 | 141050 | 0.0002 | - | | 17.3491 | 141100 | 0.0 | - | | 17.3552 | 141150 | 0.0 | - | | 17.3614 | 141200 | 0.0 | - | | 17.3675 | 141250 | 0.0 | - | | 17.3737 | 141300 | 0.0002 | - | | 17.3798 | 141350 | 0.0 | - | | 17.3860 | 141400 | 0.0 | - | | 17.3921 | 141450 | 0.0 | - | | 17.3983 | 141500 | 0.0 | - | | 17.4044 | 141550 | 0.0 | - | | 17.4105 | 141600 | 0.0 | - | | 17.4167 | 141650 | 0.0002 | - | | 17.4228 | 141700 | 0.0 | - | | 17.4290 | 141750 | 0.0 | - | | 17.4351 | 141800 | 0.0002 | - | | 17.4413 | 141850 | 0.0 | - | | 17.4474 | 141900 | 0.0 | - | | 17.4536 | 141950 | 0.0 | - | | 17.4597 | 142000 | 0.0 | - | | 17.4659 | 142050 | 0.0 | - | | 17.4720 | 142100 | 0.0 | - | | 17.4782 | 142150 | 0.0 | - | | 17.4843 | 142200 | 0.0 | - | | 17.4905 | 142250 | 0.0 | - | | 17.4966 | 142300 | 0.0 | - | | 17.5028 | 142350 | 0.0 | - | | 17.5089 | 142400 | 0.0 | - | | 17.5151 | 142450 | 0.0 | - | | 17.5212 | 142500 | 0.0 | - | | 17.5274 | 142550 | 0.0 | - | | 17.5335 | 142600 
| 0.0 | - | | 17.5397 | 142650 | 0.0002 | - | | 17.5458 | 142700 | 0.0 | - | | 17.5519 | 142750 | 0.0002 | - | | 17.5581 | 142800 | 0.0 | - | | 17.5642 | 142850 | 0.0 | - | | 17.5704 | 142900 | 0.0 | - | | 17.5765 | 142950 | 0.0 | - | | 17.5827 | 143000 | 0.0 | - | | 17.5888 | 143050 | 0.0 | - | | 17.5950 | 143100 | 0.0002 | - | | 17.6011 | 143150 | 0.0 | - | | 17.6073 | 143200 | 0.0001 | - | | 17.6134 | 143250 | 0.0002 | - | | 17.6196 | 143300 | 0.0 | - | | 17.6257 | 143350 | 0.0002 | - | | 17.6319 | 143400 | 0.0002 | - | | 17.6380 | 143450 | 0.0 | - | | 17.6442 | 143500 | 0.0 | - | | 17.6503 | 143550 | 0.0 | - | | 17.6565 | 143600 | 0.0 | - | | 17.6626 | 143650 | 0.0 | - | | 17.6688 | 143700 | 0.0 | - | | 17.6749 | 143750 | 0.0 | - | | 17.6811 | 143800 | 0.0 | - | | 17.6872 | 143850 | 0.0 | - | | 17.6933 | 143900 | 0.0 | - | | 17.6995 | 143950 | 0.0 | - | | 17.7056 | 144000 | 0.0 | - | | 17.7118 | 144050 | 0.0 | - | | 17.7179 | 144100 | 0.0 | - | | 17.7241 | 144150 | 0.0001 | - | | 17.7302 | 144200 | 0.0004 | - | | 17.7364 | 144250 | 0.0 | - | | 17.7425 | 144300 | 0.0 | - | | 17.7487 | 144350 | 0.0 | - | | 17.7548 | 144400 | 0.0 | - | | 17.7610 | 144450 | 0.0 | - | | 17.7671 | 144500 | 0.0 | - | | 17.7733 | 144550 | 0.0 | - | | 17.7794 | 144600 | 0.0 | - | | 17.7856 | 144650 | 0.0 | - | | 17.7917 | 144700 | 0.0 | - | | 17.7979 | 144750 | 0.0002 | - | | 17.8040 | 144800 | 0.0 | - | | 17.8102 | 144850 | 0.0 | - | | 17.8163 | 144900 | 0.0 | - | | 17.8225 | 144950 | 0.0002 | - | | 17.8286 | 145000 | 0.0 | - | | 17.8347 | 145050 | 0.0 | - | | 17.8409 | 145100 | 0.0 | - | | 17.8470 | 145150 | 0.0 | - | | 17.8532 | 145200 | 0.0 | - | | 17.8593 | 145250 | 0.0 | - | | 17.8655 | 145300 | 0.0 | - | | 17.8716 | 145350 | 0.0 | - | | 17.8778 | 145400 | 0.0 | - | | 17.8839 | 145450 | 0.0 | - | | 17.8901 | 145500 | 0.0 | - | | 17.8962 | 145550 | 0.0 | - | | 17.9024 | 145600 | 0.0007 | - | | 17.9085 | 145650 | 0.0029 | - | | 17.9147 | 145700 | 0.0025 | - | | 17.9208 | 145750 | 
0.0023 | - | | 17.9270 | 145800 | 0.0021 | - | | 17.9331 | 145850 | 0.003 | - | | 17.9393 | 145900 | 0.0014 | - | | 17.9454 | 145950 | 0.0019 | - | | 17.9516 | 146000 | 0.0022 | - | | 17.9577 | 146050 | 0.0019 | - | | 17.9639 | 146100 | 0.002 | - | | 17.9700 | 146150 | 0.0022 | - | | 17.9761 | 146200 | 0.0023 | - | | 17.9823 | 146250 | 0.0019 | - | | 17.9884 | 146300 | 0.002 | - | | 17.9946 | 146350 | 0.002 | - | | 18.0007 | 146400 | 0.0022 | - | | 18.0069 | 146450 | 0.0017 | - | | 18.0130 | 146500 | 0.0004 | - | | 18.0192 | 146550 | 0.0003 | - | | 18.0253 | 146600 | 0.0005 | - | | 18.0315 | 146650 | 0.0002 | - | | 18.0376 | 146700 | 0.0002 | - | | 18.0438 | 146750 | 0.0006 | - | | 18.0499 | 146800 | 0.0008 | - | | 18.0561 | 146850 | 0.0001 | - | | 18.0622 | 146900 | 0.0006 | - | | 18.0684 | 146950 | 0.0006 | - | | 18.0745 | 147000 | 0.0006 | - | | 18.0807 | 147050 | 0.0002 | - | | 18.0868 | 147100 | 0.0 | - | | 18.0930 | 147150 | 0.0 | - | | 18.0991 | 147200 | 0.0 | - | | 18.1053 | 147250 | 0.0003 | - | | 18.1114 | 147300 | 0.0 | - | | 18.1175 | 147350 | 0.0 | - | | 18.1237 | 147400 | 0.0 | - | | 18.1298 | 147450 | 0.0 | - | | 18.1360 | 147500 | 0.0 | - | | 18.1421 | 147550 | 0.0 | - | | 18.1483 | 147600 | 0.0 | - | | 18.1544 | 147650 | 0.0 | - | | 18.1606 | 147700 | 0.0 | - | | 18.1667 | 147750 | 0.0 | - | | 18.1729 | 147800 | 0.0001 | - | | 18.1790 | 147850 | 0.0002 | - | | 18.1852 | 147900 | 0.0 | - | | 18.1913 | 147950 | 0.0 | - | | 18.1975 | 148000 | 0.0 | - | | 18.2036 | 148050 | 0.0 | - | | 18.2098 | 148100 | 0.0 | - | | 18.2159 | 148150 | 0.0 | - | | 18.2221 | 148200 | 0.0 | - | | 18.2282 | 148250 | 0.0 | - | | 18.2344 | 148300 | 0.0 | - | | 18.2405 | 148350 | 0.0 | - | | 18.2466 | 148400 | 0.0 | - | | 18.2528 | 148450 | 0.0 | - | | 18.2589 | 148500 | 0.0 | - | | 18.2651 | 148550 | 0.0 | - | | 18.2712 | 148600 | 0.0 | - | | 18.2774 | 148650 | 0.0002 | - | | 18.2835 | 148700 | 0.0 | - | | 18.2897 | 148750 | 0.0 | - | | 18.2958 | 148800 | 0.0 | - | | 18.3020 
| 148850 | 0.0 | - | | 18.3081 | 148900 | 0.0 | - | | 18.3143 | 148950 | 0.0001 | - | | 18.3204 | 149000 | 0.0001 | - | | 18.3266 | 149050 | 0.0003 | - | | 18.3327 | 149100 | 0.0002 | - | | 18.3389 | 149150 | 0.0003 | - | | 18.3450 | 149200 | 0.0002 | - | | 18.3512 | 149250 | 0.0 | - | | 18.3573 | 149300 | 0.0 | - | | 18.3635 | 149350 | 0.0002 | - | | 18.3696 | 149400 | 0.0002 | - | | 18.3758 | 149450 | 0.0 | - | | 18.3819 | 149500 | 0.0002 | - | | 18.3880 | 149550 | 0.0 | - | | 18.3942 | 149600 | 0.0 | - | | 18.4003 | 149650 | 0.0 | - | | 18.4065 | 149700 | 0.0002 | - | | 18.4126 | 149750 | 0.0 | - | | 18.4188 | 149800 | 0.0 | - | | 18.4249 | 149850 | 0.0 | - | | 18.4311 | 149900 | 0.0 | - | | 18.4372 | 149950 | 0.0003 | - | | 18.4434 | 150000 | 0.0 | - | | 18.4495 | 150050 | 0.0003 | - | | 18.4557 | 150100 | 0.0002 | - | | 18.4618 | 150150 | 0.0001 | - | | 18.4680 | 150200 | 0.0005 | - | | 18.4741 | 150250 | 0.0002 | - | | 18.4803 | 150300 | 0.0002 | - | | 18.4864 | 150350 | 0.0003 | - | | 18.4926 | 150400 | 0.0 | - | | 18.4987 | 150450 | 0.0001 | - | | 18.5049 | 150500 | 0.0002 | - | | 18.5110 | 150550 | 0.0003 | - | | 18.5172 | 150600 | 0.0 | - | | 18.5233 | 150650 | 0.0 | - | | 18.5294 | 150700 | 0.0 | - | | 18.5356 | 150750 | 0.0 | - | | 18.5417 | 150800 | 0.0 | - | | 18.5479 | 150850 | 0.0 | - | | 18.5540 | 150900 | 0.0 | - | | 18.5602 | 150950 | 0.0003 | - | | 18.5663 | 151000 | 0.0 | - | | 18.5725 | 151050 | 0.0 | - | | 18.5786 | 151100 | 0.0 | - | | 18.5848 | 151150 | 0.0 | - | | 18.5909 | 151200 | 0.0 | - | | 18.5971 | 151250 | 0.0 | - | | 18.6032 | 151300 | 0.0001 | - | | 18.6094 | 151350 | 0.0 | - | | 18.6155 | 151400 | 0.0 | - | | 18.6217 | 151450 | 0.0 | - | | 18.6278 | 151500 | 0.0 | - | | 18.6340 | 151550 | 0.0 | - | | 18.6401 | 151600 | 0.0 | - | | 18.6463 | 151650 | 0.0 | - | | 18.6524 | 151700 | 0.0 | - | | 18.6586 | 151750 | 0.0 | - | | 18.6647 | 151800 | 0.0 | - | | 18.6708 | 151850 | 0.0 | - | | 18.6770 | 151900 | 0.0 | - | | 18.6831 | 151950 
| 0.0 | - | | 18.6893 | 152000 | 0.0 | - | | 18.6954 | 152050 | 0.0 | - | | 18.7016 | 152100 | 0.0 | - | | 18.7077 | 152150 | 0.0 | - | | 18.7139 | 152200 | 0.0 | - | | 18.7200 | 152250 | 0.0 | - | | 18.7262 | 152300 | 0.0 | - | | 18.7323 | 152350 | 0.0 | - | | 18.7385 | 152400 | 0.0001 | - | | 18.7446 | 152450 | 0.0001 | - | | 18.7508 | 152500 | 0.0 | - | | 18.7569 | 152550 | 0.0 | - | | 18.7631 | 152600 | 0.0 | - | | 18.7692 | 152650 | 0.0 | - | | 18.7754 | 152700 | 0.0 | - | | 18.7815 | 152750 | 0.0 | - | | 18.7877 | 152800 | 0.0 | - | | 18.7938 | 152850 | 0.0 | - | | 18.8000 | 152900 | 0.0 | - | | 18.8061 | 152950 | 0.0 | - | | 18.8122 | 153000 | 0.0 | - | | 18.8184 | 153050 | 0.0 | - | | 18.8245 | 153100 | 0.0 | - | | 18.8307 | 153150 | 0.0 | - | | 18.8368 | 153200 | 0.0 | - | | 18.8430 | 153250 | 0.0 | - | | 18.8491 | 153300 | 0.0 | - | | 18.8553 | 153350 | 0.0 | - | | 18.8614 | 153400 | 0.0 | - | | 18.8676 | 153450 | 0.0001 | - | | 18.8737 | 153500 | 0.0002 | - | | 18.8799 | 153550 | 0.0 | - | | 18.8860 | 153600 | 0.0 | - | | 18.8922 | 153650 | 0.0002 | - | | 18.8983 | 153700 | 0.0002 | - | | 18.9045 | 153750 | 0.0005 | - | | 18.9106 | 153800 | 0.0001 | - | | 18.9168 | 153850 | 0.0 | - | | 18.9229 | 153900 | 0.0 | - | | 18.9291 | 153950 | 0.0002 | - | | 18.9352 | 154000 | 0.0005 | - | | 18.9414 | 154050 | 0.0003 | - | | 18.9475 | 154100 | 0.0001 | - | | 18.9536 | 154150 | 0.0 | - | | 18.9598 | 154200 | 0.0 | - | | 18.9659 | 154250 | 0.0001 | - | | 18.9721 | 154300 | 0.0001 | - | | 18.9782 | 154350 | 0.0008 | - | | 18.9844 | 154400 | 0.0002 | - | | 18.9905 | 154450 | 0.0003 | - | | 18.9967 | 154500 | 0.001 | - | | 19.0028 | 154550 | 0.0009 | - | | 19.0090 | 154600 | 0.0002 | - | | 19.0151 | 154650 | 0.0002 | - | | 19.0213 | 154700 | 0.0003 | - | | 19.0274 | 154750 | 0.0003 | - | | 19.0336 | 154800 | 0.0003 | - | | 19.0397 | 154850 | 0.0 | - | | 19.0459 | 154900 | 0.0008 | - | | 19.0520 | 154950 | 0.0 | - | | 19.0582 | 155000 | 0.0002 | - | | 19.0643 | 155050 
| 0.0003 | - | | 19.0705 | 155100 | 0.0014 | - | | 19.0766 | 155150 | 0.0014 | - | | 19.0827 | 155200 | 0.0002 | - | | 19.0889 | 155250 | 0.0001 | - | | 19.0950 | 155300 | 0.0 | - | | 19.1012 | 155350 | 0.0001 | - | | 19.1073 | 155400 | 0.0002 | - | | 19.1135 | 155450 | 0.0003 | - | | 19.1196 | 155500 | 0.0 | - | | 19.1258 | 155550 | 0.0001 | - | | 19.1319 | 155600 | 0.0 | - | | 19.1381 | 155650 | 0.0 | - | | 19.1442 | 155700 | 0.0005 | - | | 19.1504 | 155750 | 0.0003 | - | | 19.1565 | 155800 | 0.0003 | - | | 19.1627 | 155850 | 0.0002 | - | | 19.1688 | 155900 | 0.0003 | - | | 19.1750 | 155950 | 0.0 | - | | 19.1811 | 156000 | 0.0002 | - | | 19.1873 | 156050 | 0.0002 | - | | 19.1934 | 156100 | 0.0003 | - | | 19.1996 | 156150 | 0.0 | - | | 19.2057 | 156200 | 0.0 | - | | 19.2119 | 156250 | 0.0002 | - | | 19.2180 | 156300 | 0.0 | - | | 19.2241 | 156350 | 0.0 | - | | 19.2303 | 156400 | 0.0 | - | | 19.2364 | 156450 | 0.0 | - | | 19.2426 | 156500 | 0.0 | - | | 19.2487 | 156550 | 0.0 | - | | 19.2549 | 156600 | 0.0 | - | | 19.2610 | 156650 | 0.0 | - | | 19.2672 | 156700 | 0.0 | - | | 19.2733 | 156750 | 0.0 | - | | 19.2795 | 156800 | 0.0 | - | | 19.2856 | 156850 | 0.0 | - | | 19.2918 | 156900 | 0.0 | - | | 19.2979 | 156950 | 0.0 | - | | 19.3041 | 157000 | 0.0 | - | | 19.3102 | 157050 | 0.0 | - | | 19.3164 | 157100 | 0.0 | - | | 19.3225 | 157150 | 0.0 | - | | 19.3287 | 157200 | 0.0 | - | | 19.3348 | 157250 | 0.0 | - | | 19.3410 | 157300 | 0.0 | - | | 19.3471 | 157350 | 0.0 | - | | 19.3533 | 157400 | 0.0001 | - | | 19.3594 | 157450 | 0.0002 | - | | 19.3655 | 157500 | 0.0 | - | | 19.3717 | 157550 | 0.0 | - | | 19.3778 | 157600 | 0.0 | - | | 19.3840 | 157650 | 0.0 | - | | 19.3901 | 157700 | 0.0 | - | | 19.3963 | 157750 | 0.0 | - | | 19.4024 | 157800 | 0.0 | - | | 19.4086 | 157850 | 0.0 | - | | 19.4147 | 157900 | 0.0 | - | | 19.4209 | 157950 | 0.0 | - | | 19.4270 | 158000 | 0.0 | - | | 19.4332 | 158050 | 0.0 | - | | 19.4393 | 158100 | 0.0 | - | | 19.4455 | 158150 | 0.0 | - | | 
19.4516 | 158200 | 0.0005 | - | | 19.4578 | 158250 | 0.0 | - | | 19.4639 | 158300 | 0.0 | - | | 19.4701 | 158350 | 0.0002 | - | | 19.4762 | 158400 | 0.0 | - | | 19.4824 | 158450 | 0.0 | - | | 19.4885 | 158500 | 0.0 | - | | 19.4947 | 158550 | 0.0 | - | | 19.5008 | 158600 | 0.0 | - | | 19.5069 | 158650 | 0.0 | - | | 19.5131 | 158700 | 0.0 | - | | 19.5192 | 158750 | 0.0 | - | | 19.5254 | 158800 | 0.0 | - | | 19.5315 | 158850 | 0.0 | - | | 19.5377 | 158900 | 0.0 | - | | 19.5438 | 158950 | 0.0 | - | | 19.5500 | 159000 | 0.0 | - | | 19.5561 | 159050 | 0.0 | - | | 19.5623 | 159100 | 0.0 | - | | 19.5684 | 159150 | 0.0 | - | | 19.5746 | 159200 | 0.0 | - | | 19.5807 | 159250 | 0.0 | - | | 19.5869 | 159300 | 0.0 | - | | 19.5930 | 159350 | 0.0 | - | | 19.5992 | 159400 | 0.0 | - | | 19.6053 | 159450 | 0.0 | - | | 19.6115 | 159500 | 0.0 | - | | 19.6176 | 159550 | 0.0 | - | | 19.6238 | 159600 | 0.0 | - | | 19.6299 | 159650 | 0.0001 | - | | 19.6361 | 159700 | 0.0005 | - | | 19.6422 | 159750 | 0.0002 | - | | 19.6483 | 159800 | 0.0002 | - | | 19.6545 | 159850 | 0.0003 | - | | 19.6606 | 159900 | 0.0002 | - | | 19.6668 | 159950 | 0.0 | - | | 19.6729 | 160000 | 0.0008 | - | | 19.6791 | 160050 | 0.0 | - | | 19.6852 | 160100 | 0.0003 | - | | 19.6914 | 160150 | 0.0008 | - | | 19.6975 | 160200 | 0.0002 | - | | 19.7037 | 160250 | 0.0003 | - | | 19.7098 | 160300 | 0.0002 | - | | 19.7160 | 160350 | 0.0011 | - | | 19.7221 | 160400 | 0.0 | - | | 19.7283 | 160450 | 0.0003 | - | | 19.7344 | 160500 | 0.0002 | - | | 19.7406 | 160550 | 0.0 | - | | 19.7467 | 160600 | 0.0002 | - | | 19.7529 | 160650 | 0.0002 | - | | 19.7590 | 160700 | 0.0002 | - | | 19.7652 | 160750 | 0.0008 | - | | 19.7713 | 160800 | 0.0005 | - | | 19.7774 | 160850 | 0.0002 | - | | 19.7836 | 160900 | 0.0003 | - | | 19.7897 | 160950 | 0.0002 | - | | 19.7959 | 161000 | 0.0005 | - | | 19.8020 | 161050 | 0.0005 | - | | 19.8082 | 161100 | 0.0 | - | | 19.8143 | 161150 | 0.0003 | - | | 19.8205 | 161200 | 0.0005 | - | | 19.8266 | 161250 | 
0.0002 | - | | 19.8328 | 161300 | 0.0002 | - | | 19.8389 | 161350 | 0.0006 | - | | 19.8451 | 161400 | 0.0003 | - | | 19.8512 | 161450 | 0.0005 | - | | 19.8574 | 161500 | 0.0002 | - | | 19.8635 | 161550 | 0.0 | - | | 19.8697 | 161600 | 0.0 | - | | 19.8758 | 161650 | 0.0 | - | | 19.8820 | 161700 | 0.0002 | - | | 19.8881 | 161750 | 0.0005 | - | | 19.8943 | 161800 | 0.0 | - | | 19.9004 | 161850 | 0.0003 | - | | 19.9066 | 161900 | 0.0 | - | | 19.9127 | 161950 | 0.0 | - | | 19.9188 | 162000 | 0.0002 | - | | 19.9250 | 162050 | 0.0002 | - | | 19.9311 | 162100 | 0.0006 | - | | 19.9373 | 162150 | 0.0003 | - | | 19.9434 | 162200 | 0.0005 | - | | 19.9496 | 162250 | 0.0002 | - | | 19.9557 | 162300 | 0.0 | - | | 19.9619 | 162350 | 0.0005 | - | | 19.9680 | 162400 | 0.0 | - | | 19.9742 | 162450 | 0.0003 | - | | 19.9803 | 162500 | 0.0003 | - | | 19.9865 | 162550 | 0.0002 | - | | 19.9926 | 162600 | 0.0002 | - | | 19.9988 | 162650 | 0.0007 | - | | 20.0049 | 162700 | 0.0005 | - | | 20.0111 | 162750 | 0.0 | - | | 20.0172 | 162800 | 0.0003 | - | | 20.0234 | 162850 | 0.0003 | - | | 20.0295 | 162900 | 0.0 | - | | 20.0357 | 162950 | 0.0002 | - | | 20.0418 | 163000 | 0.0 | - | | 20.0480 | 163050 | 0.0003 | - | | 20.0541 | 163100 | 0.0002 | - | | 20.0602 | 163150 | 0.0003 | - | | 20.0664 | 163200 | 0.0002 | - | | 20.0725 | 163250 | 0.0 | - | | 20.0787 | 163300 | 0.0 | - | | 20.0848 | 163350 | 0.0003 | - | | 20.0910 | 163400 | 0.0 | - | | 20.0971 | 163450 | 0.0003 | - | | 20.1033 | 163500 | 0.0003 | - | | 20.1094 | 163550 | 0.0006 | - | | 20.1156 | 163600 | 0.0003 | - | | 20.1217 | 163650 | 0.0 | - | | 20.1279 | 163700 | 0.0 | - | | 20.1340 | 163750 | 0.0003 | - | | 20.1402 | 163800 | 0.0 | - | | 20.1463 | 163850 | 0.0005 | - | | 20.1525 | 163900 | 0.0005 | - | | 20.1586 | 163950 | 0.0003 | - | | 20.1648 | 164000 | 0.0005 | - | | 20.1709 | 164050 | 0.0006 | - | | 20.1771 | 164100 | 0.0 | - | | 20.1832 | 164150 | 0.0002 | - | | 20.1894 | 164200 | 0.0002 | - | | 20.1955 | 164250 | 0.0009 | - | 
| 20.2016 | 164300 | 0.0 | - | | 20.2078 | 164350 | 0.0 | - | | 20.2139 | 164400 | 0.0 | - | | 20.2201 | 164450 | 0.0 | - | | 20.2262 | 164500 | 0.0002 | - | | 20.2324 | 164550 | 0.0 | - | | 20.2385 | 164600 | 0.0001 | - | | 20.2447 | 164650 | 0.0 | - | | 20.2508 | 164700 | 0.0003 | - | | 20.2570 | 164750 | 0.0 | - | | 20.2631 | 164800 | 0.0 | - | | 20.2693 | 164850 | 0.0 | - | | 20.2754 | 164900 | 0.0 | - | | 20.2816 | 164950 | 0.0002 | - | | 20.2877 | 165000 | 0.0 | - | | 20.2939 | 165050 | 0.0 | - | | 20.3000 | 165100 | 0.0002 | - | | 20.3062 | 165150 | 0.0 | - | | 20.3123 | 165200 | 0.0 | - | | 20.3185 | 165250 | 0.0 | - | | 20.3246 | 165300 | 0.0 | - | | 20.3308 | 165350 | 0.0 | - | | 20.3369 | 165400 | 0.0001 | - | | 20.3430 | 165450 | 0.0 | - | | 20.3492 | 165500 | 0.0 | - | | 20.3553 | 165550 | 0.0 | - | | 20.3615 | 165600 | 0.0 | - | | 20.3676 | 165650 | 0.0 | - | | 20.3738 | 165700 | 0.0001 | - | | 20.3799 | 165750 | 0.0 | - | | 20.3861 | 165800 | 0.0 | - | | 20.3922 | 165850 | 0.0002 | - | | 20.3984 | 165900 | 0.0002 | - | | 20.4045 | 165950 | 0.0 | - | | 20.4107 | 166000 | 0.0 | - | | 20.4168 | 166050 | 0.0 | - | | 20.4230 | 166100 | 0.0 | - | | 20.4291 | 166150 | 0.0 | - | | 20.4353 | 166200 | 0.0001 | - | | 20.4414 | 166250 | 0.0 | - | | 20.4476 | 166300 | 0.0 | - | | 20.4537 | 166350 | 0.0002 | - | | 20.4599 | 166400 | 0.0 | - | | 20.4660 | 166450 | 0.0 | - | | 20.4722 | 166500 | 0.0002 | - | | 20.4783 | 166550 | 0.0001 | - | | 20.4844 | 166600 | 0.0 | - | | 20.4906 | 166650 | 0.0 | - | | 20.4967 | 166700 | 0.0 | - | | 20.5029 | 166750 | 0.0 | - | | 20.5090 | 166800 | 0.0 | - | | 20.5152 | 166850 | 0.0002 | - | | 20.5213 | 166900 | 0.0 | - | | 20.5275 | 166950 | 0.0 | - | | 20.5336 | 167000 | 0.0 | - | | 20.5398 | 167050 | 0.0 | - | | 20.5459 | 167100 | 0.0005 | - | | 20.5521 | 167150 | 0.0002 | - | | 20.5582 | 167200 | 0.0002 | - | | 20.5644 | 167250 | 0.0 | - | | 20.5705 | 167300 | 0.0001 | - | | 20.5767 | 167350 | 0.0 | - | | 20.5828 | 167400 | 
0.0003 | - | | 20.5890 | 167450 | 0.0 | - | | 20.5951 | 167500 | 0.0 | - | | 20.6013 | 167550 | 0.0001 | - | | 20.6074 | 167600 | 0.0003 | - | | 20.6135 | 167650 | 0.0007 | - | | 20.6197 | 167700 | 0.0005 | - | | 20.6258 | 167750 | 0.0001 | - | | 20.6320 | 167800 | 0.0 | - | | 20.6381 | 167850 | 0.0001 | - | | 20.6443 | 167900 | 0.0 | - | | 20.6504 | 167950 | 0.0 | - | | 20.6566 | 168000 | 0.0 | - | | 20.6627 | 168050 | 0.0 | - | | 20.6689 | 168100 | 0.0002 | - | | 20.6750 | 168150 | 0.0 | - | | 20.6812 | 168200 | 0.0 | - | | 20.6873 | 168250 | 0.0001 | - | | 20.6935 | 168300 | 0.0 | - | | 20.6996 | 168350 | 0.0 | - | | 20.7058 | 168400 | 0.0 | - | | 20.7119 | 168450 | 0.0 | - | | 20.7181 | 168500 | 0.0 | - | | 20.7242 | 168550 | 0.0 | - | | 20.7304 | 168600 | 0.0 | - | | 20.7365 | 168650 | 0.0 | - | | 20.7427 | 168700 | 0.0 | - | | 20.7488 | 168750 | 0.0 | - | | 20.7549 | 168800 | 0.0 | - | | 20.7611 | 168850 | 0.0 | - | | 20.7672 | 168900 | 0.0 | - | | 20.7734 | 168950 | 0.0 | - | | 20.7795 | 169000 | 0.0 | - | | 20.7857 | 169050 | 0.0 | - | | 20.7918 | 169100 | 0.0 | - | | 20.7980 | 169150 | 0.0 | - | | 20.8041 | 169200 | 0.0 | - | | 20.8103 | 169250 | 0.0 | - | | 20.8164 | 169300 | 0.0 | - | | 20.8226 | 169350 | 0.0 | - | | 20.8287 | 169400 | 0.0 | - | | 20.8349 | 169450 | 0.0 | - | | 20.8410 | 169500 | 0.0001 | - | | 20.8472 | 169550 | 0.0002 | - | | 20.8533 | 169600 | 0.0 | - | | 20.8595 | 169650 | 0.0 | - | | 20.8656 | 169700 | 0.0 | - | | 20.8718 | 169750 | 0.0001 | - | | 20.8779 | 169800 | 0.0 | - | | 20.8841 | 169850 | 0.0 | - | | 20.8902 | 169900 | 0.0 | - | | 20.8963 | 169950 | 0.0 | - | | 20.9025 | 170000 | 0.0 | - | | 20.9086 | 170050 | 0.0001 | - | | 20.9148 | 170100 | 0.0 | - | | 20.9209 | 170150 | 0.0 | - | | 20.9271 | 170200 | 0.0 | - | | 20.9332 | 170250 | 0.0 | - | | 20.9394 | 170300 | 0.0 | - | | 20.9455 | 170350 | 0.0002 | - | | 20.9517 | 170400 | 0.0 | - | | 20.9578 | 170450 | 0.0 | - | | 20.9640 | 170500 | 0.0 | - | | 20.9701 | 170550 | 0.0 
| - | | 20.9763 | 170600 | 0.0 | - | | 20.9824 | 170650 | 0.0 | - | | 20.9886 | 170700 | 0.0 | - | | 20.9947 | 170750 | 0.0 | - | | 21.0009 | 170800 | 0.0 | - | | 21.0070 | 170850 | 0.0 | - | | 21.0132 | 170900 | 0.0 | - | | 21.0193 | 170950 | 0.0 | - | | 21.0255 | 171000 | 0.0 | - | | 21.0316 | 171050 | 0.0 | - | | 21.0377 | 171100 | 0.0 | - | | 21.0439 | 171150 | 0.0 | - | | 21.0500 | 171200 | 0.0 | - | | 21.0562 | 171250 | 0.0 | - | | 21.0623 | 171300 | 0.0 | - | | 21.0685 | 171350 | 0.0 | - | | 21.0746 | 171400 | 0.0 | - | | 21.0808 | 171450 | 0.0 | - | | 21.0869 | 171500 | 0.0 | - | | 21.0931 | 171550 | 0.0 | - | | 21.0992 | 171600 | 0.0 | - | | 21.1054 | 171650 | 0.0 | - | | 21.1115 | 171700 | 0.0 | - | | 21.1177 | 171750 | 0.0 | - | | 21.1238 | 171800 | 0.0 | - | | 21.1300 | 171850 | 0.0 | - | | 21.1361 | 171900 | 0.0 | - | | 21.1423 | 171950 | 0.0 | - | | 21.1484 | 172000 | 0.0 | - | | 21.1546 | 172050 | 0.0 | - | | 21.1607 | 172100 | 0.0 | - | | 21.1669 | 172150 | 0.0 | - | | 21.1730 | 172200 | 0.0 | - | | 21.1791 | 172250 | 0.0 | - | | 21.1853 | 172300 | 0.0 | - | | 21.1914 | 172350 | 0.0 | - | | 21.1976 | 172400 | 0.0 | - | | 21.2037 | 172450 | 0.0 | - | | 21.2099 | 172500 | 0.0 | - | | 21.2160 | 172550 | 0.0 | - | | 21.2222 | 172600 | 0.0 | - | | 21.2283 | 172650 | 0.0 | - | | 21.2345 | 172700 | 0.0 | - | | 21.2406 | 172750 | 0.0 | - | | 21.2468 | 172800 | 0.0 | - | | 21.2529 | 172850 | 0.0 | - | | 21.2591 | 172900 | 0.0 | - | | 21.2652 | 172950 | 0.0 | - | | 21.2714 | 173000 | 0.0 | - | | 21.2775 | 173050 | 0.0 | - | | 21.2837 | 173100 | 0.0 | - | | 21.2898 | 173150 | 0.0 | - | | 21.2960 | 173200 | 0.0 | - | | 21.3021 | 173250 | 0.0 | - | | 21.3083 | 173300 | 0.0 | - | | 21.3144 | 173350 | 0.0 | - | | 21.3205 | 173400 | 0.0 | - | | 21.3267 | 173450 | 0.0 | - | | 21.3328 | 173500 | 0.0 | - | | 21.3390 | 173550 | 0.0 | - | | 21.3451 | 173600 | 0.0 | - | | 21.3513 | 173650 | 0.0 | - | | 21.3574 | 173700 | 0.0 | - | | 21.3636 | 173750 | 0.0 | - | | 21.3697 
| 173800 | 0.0 | - | | 21.3759 | 173850 | 0.0 | - | | 21.3820 | 173900 | 0.0 | - | | 21.3882 | 173950 | 0.0 | - | | 21.3943 | 174000 | 0.0 | - | | 21.4005 | 174050 | 0.0 | - | | 21.4066 | 174100 | 0.0 | - | | 21.4128 | 174150 | 0.0 | - | | 21.4189 | 174200 | 0.0 | - | | 21.4251 | 174250 | 0.0 | - | | 21.4312 | 174300 | 0.0 | - | | 21.4374 | 174350 | 0.0 | - | | 21.4435 | 174400 | 0.0002 | - | | 21.4496 | 174450 | 0.0 | - | | 21.4558 | 174500 | 0.0 | - | | 21.4619 | 174550 | 0.0 | - | | 21.4681 | 174600 | 0.0 | - | | 21.4742 | 174650 | 0.0 | - | | 21.4804 | 174700 | 0.0 | - | | 21.4865 | 174750 | 0.0 | - | | 21.4927 | 174800 | 0.0 | - | | 21.4988 | 174850 | 0.0 | - | | 21.5050 | 174900 | 0.0 | - | | 21.5111 | 174950 | 0.0 | - | | 21.5173 | 175000 | 0.0 | - | | 21.5234 | 175050 | 0.0 | - | | 21.5296 | 175100 | 0.0 | - | | 21.5357 | 175150 | 0.0 | - | | 21.5419 | 175200 | 0.0 | - | | 21.5480 | 175250 | 0.0 | - | | 21.5542 | 175300 | 0.0 | - | | 21.5603 | 175350 | 0.0 | - | | 21.5665 | 175400 | 0.0 | - | | 21.5726 | 175450 | 0.0 | - | | 21.5788 | 175500 | 0.0 | - | | 21.5849 | 175550 | 0.0 | - | | 21.5910 | 175600 | 0.0 | - | | 21.5972 | 175650 | 0.0 | - | | 21.6033 | 175700 | 0.0 | - | | 21.6095 | 175750 | 0.0 | - | | 21.6156 | 175800 | 0.0 | - | | 21.6218 | 175850 | 0.0 | - | | 21.6279 | 175900 | 0.0 | - | | 21.6341 | 175950 | 0.0 | - | | 21.6402 | 176000 | 0.0 | - | | 21.6464 | 176050 | 0.0 | - | | 21.6525 | 176100 | 0.0 | - | | 21.6587 | 176150 | 0.0 | - | | 21.6648 | 176200 | 0.0 | - | | 21.6710 | 176250 | 0.0 | - | | 21.6771 | 176300 | 0.0 | - | | 21.6833 | 176350 | 0.0 | - | | 21.6894 | 176400 | 0.0 | - | | 21.6956 | 176450 | 0.0 | - | | 21.7017 | 176500 | 0.0 | - | | 21.7079 | 176550 | 0.0 | - | | 21.7140 | 176600 | 0.0 | - | | 21.7202 | 176650 | 0.0 | - | | 21.7263 | 176700 | 0.0 | - | | 21.7324 | 176750 | 0.0 | - | | 21.7386 | 176800 | 0.0 | - | | 21.7447 | 176850 | 0.0 | - | | 21.7509 | 176900 | 0.0 | - | | 21.7570 | 176950 | 0.0 | - | | 21.7632 | 177000 | 
0.0 | - | | 21.7693 | 177050 | 0.0 | - | | 21.7755 | 177100 | 0.0 | - | | 21.7816 | 177150 | 0.0 | - | | 21.7878 | 177200 | 0.0 | - | | 21.7939 | 177250 | 0.0 | - | | 21.8001 | 177300 | 0.0 | - | | 21.8062 | 177350 | 0.0 | - | | 21.8124 | 177400 | 0.0 | - | | 21.8185 | 177450 | 0.0 | - | | 21.8247 | 177500 | 0.0 | - | | 21.8308 | 177550 | 0.0 | - | | 21.8370 | 177600 | 0.0 | - | | 21.8431 | 177650 | 0.0 | - | | 21.8493 | 177700 | 0.0 | - | | 21.8554 | 177750 | 0.0 | - | | 21.8616 | 177800 | 0.0 | - | | 21.8677 | 177850 | 0.0 | - | | 21.8738 | 177900 | 0.0 | - | | 21.8800 | 177950 | 0.0 | - | | 21.8861 | 178000 | 0.0 | - | | 21.8923 | 178050 | 0.0 | - | | 21.8984 | 178100 | 0.0 | - | | 21.9046 | 178150 | 0.0 | - | | 21.9107 | 178200 | 0.0 | - | | 21.9169 | 178250 | 0.0 | - | | 21.9230 | 178300 | 0.0 | - | | 21.9292 | 178350 | 0.0 | - | | 21.9353 | 178400 | 0.0 | - | | 21.9415 | 178450 | 0.0 | - | | 21.9476 | 178500 | 0.0 | - | | 21.9538 | 178550 | 0.0 | - | | 21.9599 | 178600 | 0.0 | - | | 21.9661 | 178650 | 0.0 | - | | 21.9722 | 178700 | 0.0 | - | | 21.9784 | 178750 | 0.0 | - | | 21.9845 | 178800 | 0.0 | - | | 21.9907 | 178850 | 0.0 | - | | 21.9968 | 178900 | 0.0 | - | | 22.0030 | 178950 | 0.0 | - | | 22.0091 | 179000 | 0.0 | - | | 22.0152 | 179050 | 0.0 | - | | 22.0214 | 179100 | 0.0 | - | | 22.0275 | 179150 | 0.0 | - | | 22.0337 | 179200 | 0.0 | - | | 22.0398 | 179250 | 0.0 | - | | 22.0460 | 179300 | 0.0 | - | | 22.0521 | 179350 | 0.0 | - | | 22.0583 | 179400 | 0.0 | - | | 22.0644 | 179450 | 0.0 | - | | 22.0706 | 179500 | 0.0 | - | | 22.0767 | 179550 | 0.0 | - | | 22.0829 | 179600 | 0.0 | - | | 22.0890 | 179650 | 0.0 | - | | 22.0952 | 179700 | 0.0 | - | | 22.1013 | 179750 | 0.0 | - | | 22.1075 | 179800 | 0.0 | - | | 22.1136 | 179850 | 0.0 | - | | 22.1198 | 179900 | 0.0 | - | | 22.1259 | 179950 | 0.0 | - | | 22.1321 | 180000 | 0.0 | - | | 22.1382 | 180050 | 0.0 | - | | 22.1444 | 180100 | 0.0 | - | | 22.1505 | 180150 | 0.0 | - | | 22.1566 | 180200 | 0.0 | - | | 
22.1628 | 180250 | 0.0 | - | | 22.1689 | 180300 | 0.0 | - | | 22.1751 | 180350 | 0.0 | - | | 22.1812 | 180400 | 0.0 | - | | 22.1874 | 180450 | 0.0 | - | | 22.1935 | 180500 | 0.0 | - | | 22.1997 | 180550 | 0.0 | - | | 22.2058 | 180600 | 0.0 | - | | 22.2120 | 180650 | 0.0 | - | | 22.2181 | 180700 | 0.0 | - | | 22.2243 | 180750 | 0.0 | - | | 22.2304 | 180800 | 0.0 | - | | 22.2366 | 180850 | 0.0 | - | | 22.2427 | 180900 | 0.0 | - | | 22.2489 | 180950 | 0.0 | - | | 22.2550 | 181000 | 0.0 | - | | 22.2612 | 181050 | 0.0 | - | | 22.2673 | 181100 | 0.0 | - | | 22.2735 | 181150 | 0.0 | - | | 22.2796 | 181200 | 0.0 | - | | 22.2857 | 181250 | 0.0 | - | | 22.2919 | 181300 | 0.0 | - | | 22.2980 | 181350 | 0.0 | - | | 22.3042 | 181400 | 0.0 | - | | 22.3103 | 181450 | 0.0 | - | | 22.3165 | 181500 | 0.0 | - | | 22.3226 | 181550 | 0.0 | - | | 22.3288 | 181600 | 0.0 | - | | 22.3349 | 181650 | 0.0 | - | | 22.3411 | 181700 | 0.0 | - | | 22.3472 | 181750 | 0.0 | - | | 22.3534 | 181800 | 0.0 | - | | 22.3595 | 181850 | 0.0 | - | | 22.3657 | 181900 | 0.0 | - | | 22.3718 | 181950 | 0.0 | - | | 22.3780 | 182000 | 0.0 | - | | 22.3841 | 182050 | 0.0 | - | | 22.3903 | 182100 | 0.0 | - | | 22.3964 | 182150 | 0.0 | - | | 22.4026 | 182200 | 0.0 | - | | 22.4087 | 182250 | 0.0 | - | | 22.4149 | 182300 | 0.0 | - | | 22.4210 | 182350 | 0.0 | - | | 22.4271 | 182400 | 0.0 | - | | 22.4333 | 182450 | 0.0 | - | | 22.4394 | 182500 | 0.0 | - | | 22.4456 | 182550 | 0.0 | - | | 22.4517 | 182600 | 0.0 | - | | 22.4579 | 182650 | 0.0 | - | | 22.4640 | 182700 | 0.0 | - | | 22.4702 | 182750 | 0.0 | - | | 22.4763 | 182800 | 0.0 | - | | 22.4825 | 182850 | 0.0 | - | | 22.4886 | 182900 | 0.0 | - | | 22.4948 | 182950 | 0.0002 | - | | 22.5009 | 183000 | 0.0 | - | | 22.5071 | 183050 | 0.0 | - | | 22.5132 | 183100 | 0.0 | - | | 22.5194 | 183150 | 0.0 | - | | 22.5255 | 183200 | 0.0 | - | | 22.5317 | 183250 | 0.0 | - | | 22.5378 | 183300 | 0.0 | - | | 22.5440 | 183350 | 0.0 | - | | 22.5501 | 183400 | 0.0 | - | | 22.5563 | 
183450 | 0.0 | - | | 22.5624 | 183500 | 0.0 | - | | 22.5685 | 183550 | 0.0 | - | | 22.5747 | 183600 | 0.0 | - | | 22.5808 | 183650 | 0.0 | - | | 22.5870 | 183700 | 0.0 | - | | 22.5931 | 183750 | 0.0 | - | | 22.5993 | 183800 | 0.0 | - | | 22.6054 | 183850 | 0.0 | - | | 22.6116 | 183900 | 0.0 | - | | 22.6177 | 183950 | 0.0 | - | | 22.6239 | 184000 | 0.0 | - | | 22.6300 | 184050 | 0.0 | - | | 22.6362 | 184100 | 0.0 | - | | 22.6423 | 184150 | 0.0 | - | | 22.6485 | 184200 | 0.0 | - | | 22.6546 | 184250 | 0.0 | - | | 22.6608 | 184300 | 0.0 | - | | 22.6669 | 184350 | 0.0 | - | | 22.6731 | 184400 | 0.0 | - | | 22.6792 | 184450 | 0.0 | - | | 22.6854 | 184500 | 0.0 | - | | 22.6915 | 184550 | 0.0 | - | | 22.6977 | 184600 | 0.0 | - | | 22.7038 | 184650 | 0.0 | - | | 22.7099 | 184700 | 0.0 | - | | 22.7161 | 184750 | 0.0 | - | | 22.7222 | 184800 | 0.0 | - | | 22.7284 | 184850 | 0.0 | - | | 22.7345 | 184900 | 0.0 | - | | 22.7407 | 184950 | 0.0 | - | | 22.7468 | 185000 | 0.0 | - | | 22.7530 | 185050 | 0.0001 | - | | 22.7591 | 185100 | 0.0002 | - | | 22.7653 | 185150 | 0.0 | - | | 22.7714 | 185200 | 0.0 | - | | 22.7776 | 185250 | 0.0 | - | | 22.7837 | 185300 | 0.0001 | - | | 22.7899 | 185350 | 0.0 | - | | 22.7960 | 185400 | 0.0 | - | | 22.8022 | 185450 | 0.0002 | - | | 22.8083 | 185500 | 0.0003 | - | | 22.8145 | 185550 | 0.0 | - | | 22.8206 | 185600 | 0.0 | - | | 22.8268 | 185650 | 0.0001 | - | | 22.8329 | 185700 | 0.0 | - | | 22.8391 | 185750 | 0.0 | - | | 22.8452 | 185800 | 0.0 | - | | 22.8513 | 185850 | 0.0 | - | | 22.8575 | 185900 | 0.0 | - | | 22.8636 | 185950 | 0.0 | - | | 22.8698 | 186000 | 0.0 | - | | 22.8759 | 186050 | 0.0 | - | | 22.8821 | 186100 | 0.0 | - | | 22.8882 | 186150 | 0.0 | - | | 22.8944 | 186200 | 0.0 | - | | 22.9005 | 186250 | 0.0 | - | | 22.9067 | 186300 | 0.0 | - | | 22.9128 | 186350 | 0.0001 | - | | 22.9190 | 186400 | 0.0 | - | | 22.9251 | 186450 | 0.0 | - | | 22.9313 | 186500 | 0.0 | - | | 22.9374 | 186550 | 0.0 | - | | 22.9436 | 186600 | 0.0 | - | | 
22.9497 | 186650 | 0.0 | - | | 22.9559 | 186700 | 0.0 | - | | 22.9620 | 186750 | 0.0 | - | | 22.9682 | 186800 | 0.0 | - | | 22.9743 | 186850 | 0.0 | - | | 22.9805 | 186900 | 0.0 | - | | 22.9866 | 186950 | 0.0 | - | | 22.9927 | 187000 | 0.0 | - | | 22.9989 | 187050 | 0.0 | - | | 23.0050 | 187100 | 0.0 | - | | 23.0112 | 187150 | 0.0 | - | | 23.0173 | 187200 | 0.0 | - | | 23.0235 | 187250 | 0.0 | - | | 23.0296 | 187300 | 0.0 | - | | 23.0358 | 187350 | 0.0 | - | | 23.0419 | 187400 | 0.0 | - | | 23.0481 | 187450 | 0.0 | - | | 23.0542 | 187500 | 0.0 | - | | 23.0604 | 187550 | 0.0 | - | | 23.0665 | 187600 | 0.0 | - | | 23.0727 | 187650 | 0.0 | - | | 23.0788 | 187700 | 0.0 | - | | 23.0850 | 187750 | 0.0 | - | | 23.0911 | 187800 | 0.0 | - | | 23.0973 | 187850 | 0.0 | - | | 23.1034 | 187900 | 0.0 | - | | 23.1096 | 187950 | 0.0002 | - | | 23.1157 | 188000 | 0.0 | - | | 23.1218 | 188050 | 0.0 | - | | 23.1280 | 188100 | 0.0 | - | | 23.1341 | 188150 | 0.0 | - | | 23.1403 | 188200 | 0.0 | - | | 23.1464 | 188250 | 0.0 | - | | 23.1526 | 188300 | 0.0 | - | | 23.1587 | 188350 | 0.0002 | - | | 23.1649 | 188400 | 0.0 | - | | 23.1710 | 188450 | 0.0 | - | | 23.1772 | 188500 | 0.0 | - | | 23.1833 | 188550 | 0.0 | - | | 23.1895 | 188600 | 0.0 | - | | 23.1956 | 188650 | 0.0 | - | | 23.2018 | 188700 | 0.0 | - | | 23.2079 | 188750 | 0.0 | - | | 23.2141 | 188800 | 0.0 | - | | 23.2202 | 188850 | 0.0 | - | | 23.2264 | 188900 | 0.0 | - | | 23.2325 | 188950 | 0.0 | - | | 23.2387 | 189000 | 0.0 | - | | 23.2448 | 189050 | 0.0 | - | | 23.2510 | 189100 | 0.0 | - | | 23.2571 | 189150 | 0.0 | - | | 23.2632 | 189200 | 0.0 | - | | 23.2694 | 189250 | 0.0 | - | | 23.2755 | 189300 | 0.0 | - | | 23.2817 | 189350 | 0.0 | - | | 23.2878 | 189400 | 0.0 | - | | 23.2940 | 189450 | 0.0 | - | | 23.3001 | 189500 | 0.0 | - | | 23.3063 | 189550 | 0.0 | - | | 23.3124 | 189600 | 0.0 | - | | 23.3186 | 189650 | 0.0 | - | | 23.3247 | 189700 | 0.0 | - | | 23.3309 | 189750 | 0.0 | - | | 23.3370 | 189800 | 0.0 | - | | 23.3432 | 
189850 | 0.0 | - | | 23.3493 | 189900 | 0.0 | - | | 23.3555 | 189950 | 0.0 | - | | 23.3616 | 190000 | 0.0 | - | | 23.3678 | 190050 | 0.0 | - | | 23.3739 | 190100 | 0.0 | - | | 23.3801 | 190150 | 0.0 | - | | 23.3862 | 190200 | 0.0002 | - | | 23.3924 | 190250 | 0.0 | - | | 23.3985 | 190300 | 0.0 | - | | 23.4046 | 190350 | 0.0 | - | | 23.4108 | 190400 | 0.0 | - | | 23.4169 | 190450 | 0.0 | - | | 23.4231 | 190500 | 0.0 | - | | 23.4292 | 190550 | 0.0 | - | | 23.4354 | 190600 | 0.0 | - | | 23.4415 | 190650 | 0.0 | - | | 23.4477 | 190700 | 0.0 | - | | 23.4538 | 190750 | 0.0 | - | | 23.4600 | 190800 | 0.0 | - | | 23.4661 | 190850 | 0.0 | - | | 23.4723 | 190900 | 0.0 | - | | 23.4784 | 190950 | 0.0 | - | | 23.4846 | 191000 | 0.0 | - | | 23.4907 | 191050 | 0.0 | - | | 23.4969 | 191100 | 0.0 | - | | 23.5030 | 191150 | 0.0 | - | | 23.5092 | 191200 | 0.0 | - | | 23.5153 | 191250 | 0.0 | - | | 23.5215 | 191300 | 0.0 | - | | 23.5276 | 191350 | 0.0 | - | | 23.5338 | 191400 | 0.0 | - | | 23.5399 | 191450 | 0.0 | - | | 23.5460 | 191500 | 0.0 | - | | 23.5522 | 191550 | 0.0 | - | | 23.5583 | 191600 | 0.0 | - | | 23.5645 | 191650 | 0.0 | - | | 23.5706 | 191700 | 0.0 | - | | 23.5768 | 191750 | 0.0 | - | | 23.5829 | 191800 | 0.0 | - | | 23.5891 | 191850 | 0.0 | - | | 23.5952 | 191900 | 0.0 | - | | 23.6014 | 191950 | 0.0 | - | | 23.6075 | 192000 | 0.0 | - | | 23.6137 | 192050 | 0.0 | - | | 23.6198 | 192100 | 0.0 | - | | 23.6260 | 192150 | 0.0 | - | | 23.6321 | 192200 | 0.0 | - | | 23.6383 | 192250 | 0.0002 | - | | 23.6444 | 192300 | 0.0001 | - | | 23.6506 | 192350 | 0.0 | - | | 23.6567 | 192400 | 0.0 | - | | 23.6629 | 192450 | 0.0 | - | | 23.6690 | 192500 | 0.0 | - | | 23.6752 | 192550 | 0.0 | - | | 23.6813 | 192600 | 0.0 | - | | 23.6874 | 192650 | 0.0 | - | | 23.6936 | 192700 | 0.0 | - | | 23.6997 | 192750 | 0.0 | - | | 23.7059 | 192800 | 0.0 | - | | 23.7120 | 192850 | 0.0 | - | | 23.7182 | 192900 | 0.0 | - | | 23.7243 | 192950 | 0.0 | - | | 23.7305 | 193000 | 0.0 | - | | 23.7366 | 193050 
| 0.0 | - | | 23.7428 | 193100 | 0.0 | - | | 23.7489 | 193150 | 0.0 | - | | 23.7551 | 193200 | 0.0 | - | | 23.7612 | 193250 | 0.0 | - | | 23.7674 | 193300 | 0.0 | - | | 23.7735 | 193350 | 0.0 | - | | 23.7797 | 193400 | 0.0 | - | | 23.7858 | 193450 | 0.0001 | - | | 23.7920 | 193500 | 0.0 | - | | 23.7981 | 193550 | 0.0 | - | | 23.8043 | 193600 | 0.0 | - | | 23.8104 | 193650 | 0.0 | - | | 23.8165 | 193700 | 0.0 | - | | 23.8227 | 193750 | 0.0 | - | | 23.8288 | 193800 | 0.0 | - | | 23.8350 | 193850 | 0.0 | - | | 23.8411 | 193900 | 0.0 | - | | 23.8473 | 193950 | 0.0 | - | | 23.8534 | 194000 | 0.0 | - | | 23.8596 | 194050 | 0.0 | - | | 23.8657 | 194100 | 0.0 | - | | 23.8719 | 194150 | 0.0 | - | | 23.8780 | 194200 | 0.0 | - | | 23.8842 | 194250 | 0.0 | - | | 23.8903 | 194300 | 0.0 | - | | 23.8965 | 194350 | 0.0 | - | | 23.9026 | 194400 | 0.0 | - | | 23.9088 | 194450 | 0.0 | - | | 23.9149 | 194500 | 0.0 | - | | 23.9211 | 194550 | 0.0 | - | | 23.9272 | 194600 | 0.0 | - | | 23.9334 | 194650 | 0.0 | - | | 23.9395 | 194700 | 0.0 | - | | 23.9457 | 194750 | 0.0 | - | | 23.9518 | 194800 | 0.0 | - | | 23.9579 | 194850 | 0.0 | - | | 23.9641 | 194900 | 0.0 | - | | 23.9702 | 194950 | 0.0 | - | | 23.9764 | 195000 | 0.0 | - | | 23.9825 | 195050 | 0.0 | - | | 23.9887 | 195100 | 0.0 | - | | 23.9948 | 195150 | 0.0 | - | | 24.0010 | 195200 | 0.0 | - | | 24.0071 | 195250 | 0.0 | - | | 24.0133 | 195300 | 0.0 | - | | 24.0194 | 195350 | 0.0 | - | | 24.0256 | 195400 | 0.0 | - | | 24.0317 | 195450 | 0.0 | - | | 24.0379 | 195500 | 0.0 | - | | 24.0440 | 195550 | 0.0 | - | | 24.0502 | 195600 | 0.0001 | - | | 24.0563 | 195650 | 0.0 | - | | 24.0625 | 195700 | 0.0 | - | | 24.0686 | 195750 | 0.0 | - | | 24.0748 | 195800 | 0.0 | - | | 24.0809 | 195850 | 0.0 | - | | 24.0871 | 195900 | 0.0 | - | | 24.0932 | 195950 | 0.0 | - | | 24.0993 | 196000 | 0.0 | - | | 24.1055 | 196050 | 0.0 | - | | 24.1116 | 196100 | 0.0 | - | | 24.1178 | 196150 | 0.0 | - | | 24.1239 | 196200 | 0.0 | - | | 24.1301 | 196250 | 0.0 | - 
| | 24.1362 | 196300 | 0.0 | - | | 24.1424 | 196350 | 0.0 | - | | 24.1485 | 196400 | 0.0 | - | | 24.1547 | 196450 | 0.0 | - | | 24.1608 | 196500 | 0.0 | - | | 24.1670 | 196550 | 0.0 | - | | 24.1731 | 196600 | 0.0 | - | | 24.1793 | 196650 | 0.0 | - | | 24.1854 | 196700 | 0.0 | - | | 24.1916 | 196750 | 0.0 | - | | 24.1977 | 196800 | 0.0 | - | | 24.2039 | 196850 | 0.0 | - | | 24.2100 | 196900 | 0.0 | - | | 24.2162 | 196950 | 0.0 | - | | 24.2223 | 197000 | 0.0 | - | | 24.2285 | 197050 | 0.0 | - | | 24.2346 | 197100 | 0.0 | - | | 24.2407 | 197150 | 0.0 | - | | 24.2469 | 197200 | 0.0 | - | | 24.2530 | 197250 | 0.0 | - | | 24.2592 | 197300 | 0.0 | - | | 24.2653 | 197350 | 0.0 | - | | 24.2715 | 197400 | 0.0 | - | | 24.2776 | 197450 | 0.0 | - | | 24.2838 | 197500 | 0.0 | - | | 24.2899 | 197550 | 0.0001 | - | | 24.2961 | 197600 | 0.0 | - | | 24.3022 | 197650 | 0.0 | - | | 24.3084 | 197700 | 0.0 | - | | 24.3145 | 197750 | 0.0 | - | | 24.3207 | 197800 | 0.0 | - | | 24.3268 | 197850 | 0.0 | - | | 24.3330 | 197900 | 0.0 | - | | 24.3391 | 197950 | 0.0 | - | | 24.3453 | 198000 | 0.0 | - | | 24.3514 | 198050 | 0.0 | - | | 24.3576 | 198100 | 0.0 | - | | 24.3637 | 198150 | 0.0 | - | | 24.3699 | 198200 | 0.0 | - | | 24.3760 | 198250 | 0.0 | - | | 24.3821 | 198300 | 0.0 | - | | 24.3883 | 198350 | 0.0 | - | | 24.3944 | 198400 | 0.0002 | - | | 24.4006 | 198450 | 0.0 | - | | 24.4067 | 198500 | 0.0 | - | | 24.4129 | 198550 | 0.0 | - | | 24.4190 | 198600 | 0.0 | - | | 24.4252 | 198650 | 0.0 | - | | 24.4313 | 198700 | 0.0 | - | | 24.4375 | 198750 | 0.0 | - | | 24.4436 | 198800 | 0.0 | - | | 24.4498 | 198850 | 0.0 | - | | 24.4559 | 198900 | 0.0 | - | | 24.4621 | 198950 | 0.0 | - | | 24.4682 | 199000 | 0.0 | - | | 24.4744 | 199050 | 0.0 | - | | 24.4805 | 199100 | 0.0 | - | | 24.4867 | 199150 | 0.0 | - | | 24.4928 | 199200 | 0.0 | - | | 24.4990 | 199250 | 0.0 | - | | 24.5051 | 199300 | 0.0 | - | | 24.5113 | 199350 | 0.0 | - | | 24.5174 | 199400 | 0.0 | - | | 24.5235 | 199450 | 0.0 | - | | 
24.5297 | 199500 | 0.0 | - | | 24.5358 | 199550 | 0.0 | - | | 24.5420 | 199600 | 0.0 | - | | 24.5481 | 199650 | 0.0 | - | | 24.5543 | 199700 | 0.0 | - | | 24.5604 | 199750 | 0.0 | - | | 24.5666 | 199800 | 0.0 | - | | 24.5727 | 199850 | 0.0 | - | | 24.5789 | 199900 | 0.0 | - | | 24.5850 | 199950 | 0.0 | - | | 24.5912 | 200000 | 0.0 | - | | 24.5973 | 200050 | 0.0 | - | | 24.6035 | 200100 | 0.0 | - | | 24.6096 | 200150 | 0.0 | - | | 24.6158 | 200200 | 0.0 | - | | 24.6219 | 200250 | 0.0 | - | | 24.6281 | 200300 | 0.0 | - | | 24.6342 | 200350 | 0.0 | - | | 24.6404 | 200400 | 0.0 | - | | 24.6465 | 200450 | 0.0 | - | | 24.6526 | 200500 | 0.0 | - | | 24.6588 | 200550 | 0.0 | - | | 24.6649 | 200600 | 0.0002 | - | | 24.6711 | 200650 | 0.0 | - | | 24.6772 | 200700 | 0.0 | - | | 24.6834 | 200750 | 0.0 | - | | 24.6895 | 200800 | 0.0 | - | | 24.6957 | 200850 | 0.0 | - | | 24.7018 | 200900 | 0.0 | - | | 24.7080 | 200950 | 0.0 | - | | 24.7141 | 201000 | 0.0 | - | | 24.7203 | 201050 | 0.0 | - | | 24.7264 | 201100 | 0.0 | - | | 24.7326 | 201150 | 0.0 | - | | 24.7387 | 201200 | 0.0 | - | | 24.7449 | 201250 | 0.0 | - | | 24.7510 | 201300 | 0.0 | - | | 24.7572 | 201350 | 0.0 | - | | 24.7633 | 201400 | 0.0 | - | | 24.7695 | 201450 | 0.0 | - | | 24.7756 | 201500 | 0.0 | - | | 24.7818 | 201550 | 0.0 | - | | 24.7879 | 201600 | 0.0 | - | | 24.7940 | 201650 | 0.0 | - | | 24.8002 | 201700 | 0.0 | - | | 24.8063 | 201750 | 0.0 | - | | 24.8125 | 201800 | 0.0 | - | | 24.8186 | 201850 | 0.0 | - | | 24.8248 | 201900 | 0.0 | - | | 24.8309 | 201950 | 0.0 | - | | 24.8371 | 202000 | 0.0 | - | | 24.8432 | 202050 | 0.0 | - | | 24.8494 | 202100 | 0.0 | - | | 24.8555 | 202150 | 0.0 | - | | 24.8617 | 202200 | 0.0 | - | | 24.8678 | 202250 | 0.0 | - | | 24.8740 | 202300 | 0.0 | - | | 24.8801 | 202350 | 0.0 | - | | 24.8863 | 202400 | 0.0 | - | | 24.8924 | 202450 | 0.0 | - | | 24.8986 | 202500 | 0.0 | - | | 24.9047 | 202550 | 0.0 | - | | 24.9109 | 202600 | 0.0 | - | | 24.9170 | 202650 | 0.0 | - | | 24.9232 | 
202700 | 0.0 | - | | 24.9293 | 202750 | 0.0 | - | | 24.9354 | 202800 | 0.0 | - | | 24.9416 | 202850 | 0.0 | - | | 24.9477 | 202900 | 0.0 | - | | 24.9539 | 202950 | 0.0 | - | | 24.9600 | 203000 | 0.0 | - | | 24.9662 | 203050 | 0.0 | - | | 24.9723 | 203100 | 0.0 | - | | 24.9785 | 203150 | 0.0 | - | | 24.9846 | 203200 | 0.0 | - | | 24.9908 | 203250 | 0.0 | - | | 24.9969 | 203300 | 0.0 | - | | 25.0031 | 203350 | 0.0 | - | | 25.0092 | 203400 | 0.0 | - | | 25.0154 | 203450 | 0.0 | - | | 25.0215 | 203500 | 0.0 | - | | 25.0277 | 203550 | 0.0 | - | | 25.0338 | 203600 | 0.0 | - | | 25.0400 | 203650 | 0.0 | - | | 25.0461 | 203700 | 0.0 | - | | 25.0523 | 203750 | 0.0 | - | | 25.0584 | 203800 | 0.0 | - | | 25.0646 | 203850 | 0.0 | - | | 25.0707 | 203900 | 0.0 | - | | 25.0768 | 203950 | 0.0 | - | | 25.0830 | 204000 | 0.0 | - | | 25.0891 | 204050 | 0.0 | - | | 25.0953 | 204100 | 0.0 | - | | 25.1014 | 204150 | 0.0 | - | | 25.1076 | 204200 | 0.0 | - | | 25.1137 | 204250 | 0.0 | - | | 25.1199 | 204300 | 0.0 | - | | 25.1260 | 204350 | 0.0 | - | | 25.1322 | 204400 | 0.0 | - | | 25.1383 | 204450 | 0.0 | - | | 25.1445 | 204500 | 0.0 | - | | 25.1506 | 204550 | 0.0 | - | | 25.1568 | 204600 | 0.0 | - | | 25.1629 | 204650 | 0.0 | - | | 25.1691 | 204700 | 0.0 | - | | 25.1752 | 204750 | 0.0 | - | | 25.1814 | 204800 | 0.0 | - | | 25.1875 | 204850 | 0.0 | - | | 25.1937 | 204900 | 0.0 | - | | 25.1998 | 204950 | 0.0 | - | | 25.2060 | 205000 | 0.0 | - | | 25.2121 | 205050 | 0.0 | - | | 25.2182 | 205100 | 0.0 | - | | 25.2244 | 205150 | 0.0 | - | | 25.2305 | 205200 | 0.0 | - | | 25.2367 | 205250 | 0.0 | - | | 25.2428 | 205300 | 0.0 | - | | 25.2490 | 205350 | 0.0 | - | | 25.2551 | 205400 | 0.0 | - | | 25.2613 | 205450 | 0.0 | - | | 25.2674 | 205500 | 0.0 | - | | 25.2736 | 205550 | 0.0 | - | | 25.2797 | 205600 | 0.0 | - | | 25.2859 | 205650 | 0.0 | - | | 25.2920 | 205700 | 0.0 | - | | 25.2982 | 205750 | 0.0 | - | | 25.3043 | 205800 | 0.0 | - | | 25.3105 | 205850 | 0.0 | - | | 25.3166 | 205900 | 0.0 | 
- | | 25.3228 | 205950 | 0.0 | - | | 25.3289 | 206000 | 0.0 | - | | 25.3351 | 206050 | 0.0 | - | | 25.3412 | 206100 | 0.0 | - | | 25.3474 | 206150 | 0.0 | - | | 25.3535 | 206200 | 0.0 | - | | 25.3596 | 206250 | 0.0 | - | | 25.3658 | 206300 | 0.0 | - | | 25.3719 | 206350 | 0.0 | - | | 25.3781 | 206400 | 0.0 | - | | 25.3842 | 206450 | 0.0 | - | | 25.3904 | 206500 | 0.0 | - | | 25.3965 | 206550 | 0.0 | - | | 25.4027 | 206600 | 0.0 | - | | 25.4088 | 206650 | 0.0 | - | | 25.4150 | 206700 | 0.0 | - | | 25.4211 | 206750 | 0.0 | - | | 25.4273 | 206800 | 0.0 | - | | 25.4334 | 206850 | 0.0 | - | | 25.4396 | 206900 | 0.0 | - | | 25.4457 | 206950 | 0.0 | - | | 25.4519 | 207000 | 0.0 | - | | 25.4580 | 207050 | 0.0 | - | | 25.4642 | 207100 | 0.0 | - | | 25.4703 | 207150 | 0.0 | - | | 25.4765 | 207200 | 0.0 | - | | 25.4826 | 207250 | 0.0 | - | | 25.4887 | 207300 | 0.0 | - | | 25.4949 | 207350 | 0.0 | - | | 25.5010 | 207400 | 0.0 | - | | 25.5072 | 207450 | 0.0 | - | | 25.5133 | 207500 | 0.0 | - | | 25.5195 | 207550 | 0.0 | - | | 25.5256 | 207600 | 0.0 | - | | 25.5318 | 207650 | 0.0 | - | | 25.5379 | 207700 | 0.0 | - | | 25.5441 | 207750 | 0.0 | - | | 25.5502 | 207800 | 0.0 | - | | 25.5564 | 207850 | 0.0 | - | | 25.5625 | 207900 | 0.0002 | - | | 25.5687 | 207950 | 0.0006 | - | | 25.5748 | 208000 | 0.0009 | - | | 25.5810 | 208050 | 0.0 | - | | 25.5871 | 208100 | 0.0 | - | | 25.5933 | 208150 | 0.0 | - | | 25.5994 | 208200 | 0.0 | - | | 25.6056 | 208250 | 0.0 | - | | 25.6117 | 208300 | 0.0002 | - | | 25.6179 | 208350 | 0.0 | - | | 25.6240 | 208400 | 0.0 | - | | 25.6301 | 208450 | 0.0 | - | | 25.6363 | 208500 | 0.0 | - | | 25.6424 | 208550 | 0.0 | - | | 25.6486 | 208600 | 0.0002 | - | | 25.6547 | 208650 | 0.0 | - | | 25.6609 | 208700 | 0.0 | - | | 25.6670 | 208750 | 0.0 | - | | 25.6732 | 208800 | 0.0 | - | | 25.6793 | 208850 | 0.0 | - | | 25.6855 | 208900 | 0.0 | - | | 25.6916 | 208950 | 0.0 | - | | 25.6978 | 209000 | 0.0 | - | | 25.7039 | 209050 | 0.0001 | - | | 25.7101 | 209100 | 0.0 
| - | | 25.7162 | 209150 | 0.0 | - | | 25.7224 | 209200 | 0.0 | - | | 25.7285 | 209250 | 0.0 | - | | 25.7347 | 209300 | 0.0 | - | | 25.7408 | 209350 | 0.0 | - | | 25.7470 | 209400 | 0.0 | - | | 25.7531 | 209450 | 0.0 | - | | 25.7593 | 209500 | 0.0 | - | | 25.7654 | 209550 | 0.0 | - | | 25.7715 | 209600 | 0.0 | - | | 25.7777 | 209650 | 0.0 | - | | 25.7838 | 209700 | 0.0 | - | | 25.7900 | 209750 | 0.0 | - | | 25.7961 | 209800 | 0.0 | - | | 25.8023 | 209850 | 0.0 | - | | 25.8084 | 209900 | 0.0 | - | | 25.8146 | 209950 | 0.0 | - | | 25.8207 | 210000 | 0.0 | - | | 25.8269 | 210050 | 0.0 | - | | 25.8330 | 210100 | 0.0 | - | | 25.8392 | 210150 | 0.0 | - | | 25.8453 | 210200 | 0.0 | - | | 25.8515 | 210250 | 0.0 | - | | 25.8576 | 210300 | 0.0 | - | | 25.8638 | 210350 | 0.0 | - | | 25.8699 | 210400 | 0.0 | - | | 25.8761 | 210450 | 0.0 | - | | 25.8822 | 210500 | 0.0 | - | | 25.8884 | 210550 | 0.0 | - | | 25.8945 | 210600 | 0.0 | - | | 25.9007 | 210650 | 0.0 | - | | 25.9068 | 210700 | 0.0 | - | | 25.9129 | 210750 | 0.0 | - | | 25.9191 | 210800 | 0.0 | - | | 25.9252 | 210850 | 0.0 | - | | 25.9314 | 210900 | 0.0 | - | | 25.9375 | 210950 | 0.0 | - | | 25.9437 | 211000 | 0.0 | - | | 25.9498 | 211050 | 0.0 | - | | 25.9560 | 211100 | 0.0 | - | | 25.9621 | 211150 | 0.0 | - | | 25.9683 | 211200 | 0.0 | - | | 25.9744 | 211250 | 0.0 | - | | 25.9806 | 211300 | 0.0 | - | | 25.9867 | 211350 | 0.0 | - | | 25.9929 | 211400 | 0.0 | - | | 25.9990 | 211450 | 0.0 | - | | 26.0052 | 211500 | 0.0 | - | | 26.0113 | 211550 | 0.0 | - | | 26.0175 | 211600 | 0.0 | - | | 26.0236 | 211650 | 0.0 | - | | 26.0298 | 211700 | 0.0 | - | | 26.0359 | 211750 | 0.0 | - | | 26.0421 | 211800 | 0.0 | - | | 26.0482 | 211850 | 0.0 | - | | 26.0543 | 211900 | 0.0 | - | | 26.0605 | 211950 | 0.0 | - | | 26.0666 | 212000 | 0.0 | - | | 26.0728 | 212050 | 0.0 | - | | 26.0789 | 212100 | 0.0 | - | | 26.0851 | 212150 | 0.0 | - | | 26.0912 | 212200 | 0.0 | - | | 26.0974 | 212250 | 0.0 | - | | 26.1035 | 212300 | 0.0 | - | | 26.1097 
| 212350 | 0.0 | - | | 26.1158 | 212400 | 0.0 | - | | 26.1220 | 212450 | 0.0 | - | | 26.1281 | 212500 | 0.0 | - | | 26.1343 | 212550 | 0.0 | - | | 26.1404 | 212600 | 0.0 | - | | 26.1466 | 212650 | 0.0 | - | | 26.1527 | 212700 | 0.0 | - | | 26.1589 | 212750 | 0.0 | - | | 26.1650 | 212800 | 0.0 | - | | 26.1712 | 212850 | 0.0 | - | | 26.1773 | 212900 | 0.0 | - | | 26.1835 | 212950 | 0.0 | - | | 26.1896 | 213000 | 0.0 | - | | 26.1957 | 213050 | 0.0 | - | | 26.2019 | 213100 | 0.0 | - | | 26.2080 | 213150 | 0.0 | - | | 26.2142 | 213200 | 0.0 | - | | 26.2203 | 213250 | 0.0 | - | | 26.2265 | 213300 | 0.0 | - | | 26.2326 | 213350 | 0.0 | - | | 26.2388 | 213400 | 0.0 | - | | 26.2449 | 213450 | 0.0 | - | | 26.2511 | 213500 | 0.0 | - | | 26.2572 | 213550 | 0.0 | - | | 26.2634 | 213600 | 0.0 | - | | 26.2695 | 213650 | 0.0 | - | | 26.2757 | 213700 | 0.0 | - | | 26.2818 | 213750 | 0.0 | - | | 26.2880 | 213800 | 0.0 | - | | 26.2941 | 213850 | 0.0 | - | | 26.3003 | 213900 | 0.0 | - | | 26.3064 | 213950 | 0.0 | - | | 26.3126 | 214000 | 0.0 | - | | 26.3187 | 214050 | 0.0 | - | | 26.3248 | 214100 | 0.0 | - | | 26.3310 | 214150 | 0.0 | - | | 26.3371 | 214200 | 0.0 | - | | 26.3433 | 214250 | 0.0 | - | | 26.3494 | 214300 | 0.0 | - | | 26.3556 | 214350 | 0.0 | - | | 26.3617 | 214400 | 0.0 | - | | 26.3679 | 214450 | 0.0 | - | | 26.3740 | 214500 | 0.0 | - | | 26.3802 | 214550 | 0.0 | - | | 26.3863 | 214600 | 0.0 | - | | 26.3925 | 214650 | 0.0 | - | | 26.3986 | 214700 | 0.0 | - | | 26.4048 | 214750 | 0.0 | - | | 26.4109 | 214800 | 0.0 | - | | 26.4171 | 214850 | 0.0 | - | | 26.4232 | 214900 | 0.0 | - | | 26.4294 | 214950 | 0.0 | - | | 26.4355 | 215000 | 0.0 | - | | 26.4417 | 215050 | 0.0 | - | | 26.4478 | 215100 | 0.0 | - | | 26.4540 | 215150 | 0.0 | - | | 26.4601 | 215200 | 0.0 | - | | 26.4662 | 215250 | 0.0 | - | | 26.4724 | 215300 | 0.0 | - | | 26.4785 | 215350 | 0.0 | - | | 26.4847 | 215400 | 0.0 | - | | 26.4908 | 215450 | 0.0 | - | | 26.4970 | 215500 | 0.0 | - | | 26.5031 | 215550 | 0.0 
| - | | 26.5093 | 215600 | 0.0 | - | | 26.5154 | 215650 | 0.0 | - | | 26.5216 | 215700 | 0.0 | - | | 26.5277 | 215750 | 0.0 | - | | 26.5339 | 215800 | 0.0 | - | | 26.5400 | 215850 | 0.0 | - | | 26.5462 | 215900 | 0.0 | - | | 26.5523 | 215950 | 0.0 | - | | 26.5585 | 216000 | 0.0 | - | | 26.5646 | 216050 | 0.0 | - | | 26.5708 | 216100 | 0.0 | - | | 26.5769 | 216150 | 0.0 | - | | 26.5831 | 216200 | 0.0 | - | | 26.5892 | 216250 | 0.0 | - | | 26.5954 | 216300 | 0.0 | - | | 26.6015 | 216350 | 0.0 | - | | 26.6076 | 216400 | 0.0 | - | | 26.6138 | 216450 | 0.0 | - | | 26.6199 | 216500 | 0.0 | - | | 26.6261 | 216550 | 0.0 | - | | 26.6322 | 216600 | 0.0 | - | | 26.6384 | 216650 | 0.0 | - | | 26.6445 | 216700 | 0.0 | - | | 26.6507 | 216750 | 0.0 | - | | 26.6568 | 216800 | 0.0 | - | | 26.6630 | 216850 | 0.0 | - | | 26.6691 | 216900 | 0.0 | - | | 26.6753 | 216950 | 0.0 | - | | 26.6814 | 217000 | 0.0 | - | | 26.6876 | 217050 | 0.0 | - | | 26.6937 | 217100 | 0.0 | - | | 26.6999 | 217150 | 0.0 | - | | 26.7060 | 217200 | 0.0 | - | | 26.7122 | 217250 | 0.0 | - | | 26.7183 | 217300 | 0.0 | - | | 26.7245 | 217350 | 0.0 | - | | 26.7306 | 217400 | 0.0 | - | | 26.7368 | 217450 | 0.0 | - | | 26.7429 | 217500 | 0.0 | - | | 26.7490 | 217550 | 0.0 | - | | 26.7552 | 217600 | 0.0 | - | | 26.7613 | 217650 | 0.0 | - | | 26.7675 | 217700 | 0.0 | - | | 26.7736 | 217750 | 0.0 | - | | 26.7798 | 217800 | 0.0 | - | | 26.7859 | 217850 | 0.0 | - | | 26.7921 | 217900 | 0.0 | - | | 26.7982 | 217950 | 0.0 | - | | 26.8044 | 218000 | 0.0 | - | | 26.8105 | 218050 | 0.0 | - | | 26.8167 | 218100 | 0.0 | - | | 26.8228 | 218150 | 0.0 | - | | 26.8290 | 218200 | 0.0 | - | | 26.8351 | 218250 | 0.0 | - | | 26.8413 | 218300 | 0.0 | - | | 26.8474 | 218350 | 0.0 | - | | 26.8536 | 218400 | 0.0 | - | | 26.8597 | 218450 | 0.0 | - | | 26.8659 | 218500 | 0.0 | - | | 26.8720 | 218550 | 0.0 | - | | 26.8782 | 218600 | 0.0 | - | | 26.8843 | 218650 | 0.0 | - | | 26.8904 | 218700 | 0.0 | - | | 26.8966 | 218750 | 0.0 | - | | 26.9027 
| 218800 | 0.0 | - | | 26.9089 | 218850 | 0.0 | - | | 26.9150 | 218900 | 0.0 | - | | 26.9212 | 218950 | 0.0 | - | | 26.9273 | 219000 | 0.0 | - | | 26.9335 | 219050 | 0.0 | - | | 26.9396 | 219100 | 0.0 | - | | 26.9458 | 219150 | 0.0 | - | | 26.9519 | 219200 | 0.0 | - | | 26.9581 | 219250 | 0.0 | - | | 26.9642 | 219300 | 0.0 | - | | 26.9704 | 219350 | 0.0 | - | | 26.9765 | 219400 | 0.0 | - | | 26.9827 | 219450 | 0.0 | - | | 26.9888 | 219500 | 0.0 | - | | 26.9950 | 219550 | 0.0 | - | | 27.0011 | 219600 | 0.0 | - | | 27.0073 | 219650 | 0.0 | - | | 27.0134 | 219700 | 0.0 | - | | 27.0195 | 219750 | 0.0 | - | | 27.0257 | 219800 | 0.0 | - | | 27.0318 | 219850 | 0.0 | - | | 27.0380 | 219900 | 0.0 | - | | 27.0441 | 219950 | 0.0 | - | | 27.0503 | 220000 | 0.0 | - | | 27.0564 | 220050 | 0.0 | - | | 27.0626 | 220100 | 0.0 | - | | 27.0687 | 220150 | 0.0 | - | | 27.0749 | 220200 | 0.0 | - | | 27.0810 | 220250 | 0.0 | - | | 27.0872 | 220300 | 0.0 | - | | 27.0933 | 220350 | 0.0 | - | | 27.0995 | 220400 | 0.0 | - | | 27.1056 | 220450 | 0.0 | - | | 27.1118 | 220500 | 0.0 | - | | 27.1179 | 220550 | 0.0 | - | | 27.1241 | 220600 | 0.0 | - | | 27.1302 | 220650 | 0.0 | - | | 27.1364 | 220700 | 0.0 | - | | 27.1425 | 220750 | 0.0 | - | | 27.1487 | 220800 | 0.0 | - | | 27.1548 | 220850 | 0.0 | - | | 27.1609 | 220900 | 0.0 | - | | 27.1671 | 220950 | 0.0 | - | | 27.1732 | 221000 | 0.0 | - | | 27.1794 | 221050 | 0.0 | - | | 27.1855 | 221100 | 0.0 | - | | 27.1917 | 221150 | 0.0 | - | | 27.1978 | 221200 | 0.0 | - | | 27.2040 | 221250 | 0.0 | - | | 27.2101 | 221300 | 0.0 | - | | 27.2163 | 221350 | 0.0 | - | | 27.2224 | 221400 | 0.0 | - | | 27.2286 | 221450 | 0.0 | - | | 27.2347 | 221500 | 0.0 | - | | 27.2409 | 221550 | 0.0 | - | | 27.2470 | 221600 | 0.0 | - | | 27.2532 | 221650 | 0.0 | - | | 27.2593 | 221700 | 0.0 | - | | 27.2655 | 221750 | 0.0 | - | | 27.2716 | 221800 | 0.0 | - | | 27.2778 | 221850 | 0.0 | - | | 27.2839 | 221900 | 0.0 | - | | 27.2901 | 221950 | 0.0 | - | | 27.2962 | 222000 | 0.0 
| - | | 27.3023 | 222050 | 0.0 | - | | 27.3085 | 222100 | 0.0 | - | | 27.3146 | 222150 | 0.0 | - | | 27.3208 | 222200 | 0.0 | - | | 27.3269 | 222250 | 0.0 | - | | 27.3331 | 222300 | 0.0 | - | | 27.3392 | 222350 | 0.0 | - | | 27.3454 | 222400 | 0.0 | - | | 27.3515 | 222450 | 0.0 | - | | 27.3577 | 222500 | 0.0 | - | | 27.3638 | 222550 | 0.0 | - | | 27.3700 | 222600 | 0.0 | - | | 27.3761 | 222650 | 0.0 | - | | 27.3823 | 222700 | 0.0 | - | | 27.3884 | 222750 | 0.0 | - | | 27.3946 | 222800 | 0.0 | - | | 27.4007 | 222850 | 0.0 | - | | 27.4069 | 222900 | 0.0 | - | | 27.4130 | 222950 | 0.0 | - | | 27.4192 | 223000 | 0.0 | - | | 27.4253 | 223050 | 0.0 | - | | 27.4315 | 223100 | 0.0 | - | | 27.4376 | 223150 | 0.0 | - | | 27.4437 | 223200 | 0.0 | - | | 27.4499 | 223250 | 0.0 | - | | 27.4560 | 223300 | 0.0 | - | | 27.4622 | 223350 | 0.0 | - | | 27.4683 | 223400 | 0.0 | - | | 27.4745 | 223450 | 0.0 | - | | 27.4806 | 223500 | 0.0 | - | | 27.4868 | 223550 | 0.0 | - | | 27.4929 | 223600 | 0.0 | - | | 27.4991 | 223650 | 0.0 | - | | 27.5052 | 223700 | 0.0 | - | | 27.5114 | 223750 | 0.0 | - | | 27.5175 | 223800 | 0.0 | - | | 27.5237 | 223850 | 0.0 | - | | 27.5298 | 223900 | 0.0 | - | | 27.5360 | 223950 | 0.0 | - | | 27.5421 | 224000 | 0.0 | - | | 27.5483 | 224050 | 0.0 | - | | 27.5544 | 224100 | 0.0 | - | | 27.5606 | 224150 | 0.0 | - | | 27.5667 | 224200 | 0.0 | - | | 27.5729 | 224250 | 0.0 | - | | 27.5790 | 224300 | 0.0 | - | | 27.5851 | 224350 | 0.0 | - | | 27.5913 | 224400 | 0.0 | - | | 27.5974 | 224450 | 0.0 | - | | 27.6036 | 224500 | 0.0 | - | | 27.6097 | 224550 | 0.0 | - | | 27.6159 | 224600 | 0.0 | - | | 27.6220 | 224650 | 0.0 | - | | 27.6282 | 224700 | 0.0 | - | | 27.6343 | 224750 | 0.0 | - | | 27.6405 | 224800 | 0.0 | - | | 27.6466 | 224850 | 0.0 | - | | 27.6528 | 224900 | 0.0 | - | | 27.6589 | 224950 | 0.0 | - | | 27.6651 | 225000 | 0.0 | - | | 27.6712 | 225050 | 0.0 | - | | 27.6774 | 225100 | 0.0 | - | | 27.6835 | 225150 | 0.0 | - | | 27.6897 | 225200 | 0.0 | - | | 27.6958 
| 225250 | 0.0 | - | | 27.7020 | 225300 | 0.0 | - | | 27.7081 | 225350 | 0.0 | - | | 27.7143 | 225400 | 0.0 | - | | 27.7204 | 225450 | 0.0 | - | | 27.7265 | 225500 | 0.0 | - | | 27.7327 | 225550 | 0.0 | - | | 27.7388 | 225600 | 0.0 | - | | 27.7450 | 225650 | 0.0 | - | | 27.7511 | 225700 | 0.0 | - | | 27.7573 | 225750 | 0.0 | - | | 27.7634 | 225800 | 0.0 | - | | 27.7696 | 225850 | 0.0 | - | | 27.7757 | 225900 | 0.0 | - | | 27.7819 | 225950 | 0.0 | - | | 27.7880 | 226000 | 0.0 | - | | 27.7942 | 226050 | 0.0 | - | | 27.8003 | 226100 | 0.0 | - | | 27.8065 | 226150 | 0.0 | - | | 27.8126 | 226200 | 0.0 | - | | 27.8188 | 226250 | 0.0 | - | | 27.8249 | 226300 | 0.0 | - | | 27.8311 | 226350 | 0.0 | - | | 27.8372 | 226400 | 0.0 | - | | 27.8434 | 226450 | 0.0 | - | | 27.8495 | 226500 | 0.0 | - | | 27.8556 | 226550 | 0.0 | - | | 27.8618 | 226600 | 0.0 | - | | 27.8679 | 226650 | 0.0 | - | | 27.8741 | 226700 | 0.0 | - | | 27.8802 | 226750 | 0.0 | - | | 27.8864 | 226800 | 0.0 | - | | 27.8925 | 226850 | 0.0 | - | | 27.8987 | 226900 | 0.0 | - | | 27.9048 | 226950 | 0.0 | - | | 27.9110 | 227000 | 0.0 | - | | 27.9171 | 227050 | 0.0 | - | | 27.9233 | 227100 | 0.0 | - | | 27.9294 | 227150 | 0.0 | - | | 27.9356 | 227200 | 0.0 | - | | 27.9417 | 227250 | 0.0 | - | | 27.9479 | 227300 | 0.0 | - | | 27.9540 | 227350 | 0.0 | - | | 27.9602 | 227400 | 0.0 | - | | 27.9663 | 227450 | 0.0 | - | | 27.9725 | 227500 | 0.0 | - | | 27.9786 | 227550 | 0.0 | - | | 27.9848 | 227600 | 0.0 | - | | 27.9909 | 227650 | 0.0 | - | | 27.9970 | 227700 | 0.0 | - | | 28.0032 | 227750 | 0.0 | - | | 28.0093 | 227800 | 0.0 | - | | 28.0155 | 227850 | 0.0 | - | | 28.0216 | 227900 | 0.0 | - | | 28.0278 | 227950 | 0.0 | - | | 28.0339 | 228000 | 0.0 | - | | 28.0401 | 228050 | 0.0 | - | | 28.0462 | 228100 | 0.0 | - | | 28.0524 | 228150 | 0.0 | - | | 28.0585 | 228200 | 0.0 | - | | 28.0647 | 228250 | 0.0 | - | | 28.0708 | 228300 | 0.0 | - | | 28.0770 | 228350 | 0.0 | - | | 28.0831 | 228400 | 0.0 | - | | 28.0893 | 228450 | 0.0 
| - | | 28.0954 | 228500 | 0.0 | - | | 28.1016 | 228550 | 0.0 | - | | 28.1077 | 228600 | 0.0 | - | | 28.1139 | 228650 | 0.0 | - | | 28.1200 | 228700 | 0.0 | - | | 28.1262 | 228750 | 0.0 | - | | 28.1323 | 228800 | 0.0 | - | | 28.1384 | 228850 | 0.0 | - | | 28.1446 | 228900 | 0.0 | - | | 28.1507 | 228950 | 0.0 | - | | 28.1569 | 229000 | 0.0 | - | | 28.1630 | 229050 | 0.0 | - | | 28.1692 | 229100 | 0.0 | - | | 28.1753 | 229150 | 0.0 | - | | 28.1815 | 229200 | 0.0 | - | | 28.1876 | 229250 | 0.0 | - | | 28.1938 | 229300 | 0.0 | - | | 28.1999 | 229350 | 0.0 | - | | 28.2061 | 229400 | 0.0 | - | | 28.2122 | 229450 | 0.0 | - | | 28.2184 | 229500 | 0.0 | - | | 28.2245 | 229550 | 0.0 | - | | 28.2307 | 229600 | 0.0 | - | | 28.2368 | 229650 | 0.0 | - | | 28.2430 | 229700 | 0.0 | - | | 28.2491 | 229750 | 0.0 | - | | 28.2553 | 229800 | 0.0 | - | | 28.2614 | 229850 | 0.0 | - | | 28.2676 | 229900 | 0.0 | - | | 28.2737 | 229950 | 0.0 | - | | 28.2798 | 230000 | 0.0 | - | | 28.2860 | 230050 | 0.0 | - | | 28.2921 | 230100 | 0.0 | - | | 28.2983 | 230150 | 0.0 | - | | 28.3044 | 230200 | 0.0 | - | | 28.3106 | 230250 | 0.0 | - | | 28.3167 | 230300 | 0.0 | - | | 28.3229 | 230350 | 0.0 | - | | 28.3290 | 230400 | 0.0 | - | | 28.3352 | 230450 | 0.0 | - | | 28.3413 | 230500 | 0.0 | - | | 28.3475 | 230550 | 0.0 | - | | 28.3536 | 230600 | 0.0 | - | | 28.3598 | 230650 | 0.0 | - | | 28.3659 | 230700 | 0.0 | - | | 28.3721 | 230750 | 0.0 | - | | 28.3782 | 230800 | 0.0 | - | | 28.3844 | 230850 | 0.0 | - | | 28.3905 | 230900 | 0.0 | - | | 28.3967 | 230950 | 0.0 | - | | 28.4028 | 231000 | 0.0 | - | | 28.4090 | 231050 | 0.0 | - | | 28.4151 | 231100 | 0.0 | - | | 28.4212 | 231150 | 0.0 | - | | 28.4274 | 231200 | 0.0 | - | | 28.4335 | 231250 | 0.0 | - | | 28.4397 | 231300 | 0.0 | - | | 28.4458 | 231350 | 0.0 | - | | 28.4520 | 231400 | 0.0 | - | | 28.4581 | 231450 | 0.0 | - | | 28.4643 | 231500 | 0.0 | - | | 28.4704 | 231550 | 0.0 | - | | 28.4766 | 231600 | 0.0 | - | | 28.4827 | 231650 | 0.0 | - | | 28.4889 
| 231700 | 0.0 | - | | 28.4950 | 231750 | 0.0 | - | | 28.5012 | 231800 | 0.0 | - | | 28.5073 | 231850 | 0.0 | - | | 28.5135 | 231900 | 0.0 | - | | 28.5196 | 231950 | 0.0 | - | | 28.5258 | 232000 | 0.0 | - | | 28.5319 | 232050 | 0.0 | - | | 28.5381 | 232100 | 0.0 | - | | 28.5442 | 232150 | 0.0 | - | | 28.5504 | 232200 | 0.0 | - | | 28.5565 | 232250 | 0.0 | - | | 28.5626 | 232300 | 0.0 | - | | 28.5688 | 232350 | 0.0 | - | | 28.5749 | 232400 | 0.0 | - | | 28.5811 | 232450 | 0.0 | - | | 28.5872 | 232500 | 0.0 | - | | 28.5934 | 232550 | 0.0 | - | | 28.5995 | 232600 | 0.0 | - | | 28.6057 | 232650 | 0.0 | - | | 28.6118 | 232700 | 0.0 | - | | 28.6180 | 232750 | 0.0 | - | | 28.6241 | 232800 | 0.0 | - | | 28.6303 | 232850 | 0.0 | - | | 28.6364 | 232900 | 0.0 | - | | 28.6426 | 232950 | 0.0 | - | | 28.6487 | 233000 | 0.0 | - | | 28.6549 | 233050 | 0.0 | - | | 28.6610 | 233100 | 0.0 | - | | 28.6672 | 233150 | 0.0 | - | | 28.6733 | 233200 | 0.0 | - | | 28.6795 | 233250 | 0.0 | - | | 28.6856 | 233300 | 0.0 | - | | 28.6917 | 233350 | 0.0 | - | | 28.6979 | 233400 | 0.0 | - | | 28.7040 | 233450 | 0.0 | - | | 28.7102 | 233500 | 0.0 | - | | 28.7163 | 233550 | 0.0 | - | | 28.7225 | 233600 | 0.0 | - | | 28.7286 | 233650 | 0.0 | - | | 28.7348 | 233700 | 0.0 | - | | 28.7409 | 233750 | 0.0 | - | | 28.7471 | 233800 | 0.0 | - | | 28.7532 | 233850 | 0.0 | - | | 28.7594 | 233900 | 0.0 | - | | 28.7655 | 233950 | 0.0 | - | | 28.7717 | 234000 | 0.0 | - | | 28.7778 | 234050 | 0.0 | - | | 28.7840 | 234100 | 0.0 | - | | 28.7901 | 234150 | 0.0 | - | | 28.7963 | 234200 | 0.0 | - | | 28.8024 | 234250 | 0.0 | - | | 28.8086 | 234300 | 0.0 | - | | 28.8147 | 234350 | 0.0 | - | | 28.8209 | 234400 | 0.0 | - | | 28.8270 | 234450 | 0.0 | - | | 28.8331 | 234500 | 0.0 | - | | 28.8393 | 234550 | 0.0 | - | | 28.8454 | 234600 | 0.0 | - | | 28.8516 | 234650 | 0.0 | - | | 28.8577 | 234700 | 0.0 | - | | 28.8639 | 234750 | 0.0 | - | | 28.8700 | 234800 | 0.0 | - | | 28.8762 | 234850 | 0.0 | - | | 28.8823 | 234900 | 0.0 
| - | | 28.8885 | 234950 | 0.0 | - | | 28.8946 | 235000 | 0.0 | - | | 28.9008 | 235050 | 0.0 | - | | 28.9069 | 235100 | 0.0 | - | | 28.9131 | 235150 | 0.0 | - | | 28.9192 | 235200 | 0.0 | - | | 28.9254 | 235250 | 0.0 | - | | 28.9315 | 235300 | 0.0 | - | | 28.9377 | 235350 | 0.0 | - | | 28.9438 | 235400 | 0.0 | - | | 28.9500 | 235450 | 0.0 | - | | 28.9561 | 235500 | 0.0 | - | | 28.9623 | 235550 | 0.0 | - | | 28.9684 | 235600 | 0.0 | - | | 28.9745 | 235650 | 0.0 | - | | 28.9807 | 235700 | 0.0 | - | | 28.9868 | 235750 | 0.0 | - | | 28.9930 | 235800 | 0.0 | - | | 28.9991 | 235850 | 0.0 | - | | 29.0053 | 235900 | 0.0 | - | | 29.0114 | 235950 | 0.0 | - | | 29.0176 | 236000 | 0.0 | - | | 29.0237 | 236050 | 0.0 | - | | 29.0299 | 236100 | 0.0 | - | | 29.0360 | 236150 | 0.0 | - | | 29.0422 | 236200 | 0.0 | - | | 29.0483 | 236250 | 0.0 | - | | 29.0545 | 236300 | 0.0 | - | | 29.0606 | 236350 | 0.0 | - | | 29.0668 | 236400 | 0.0 | - | | 29.0729 | 236450 | 0.0 | - | | 29.0791 | 236500 | 0.0 | - | | 29.0852 | 236550 | 0.0 | - | | 29.0914 | 236600 | 0.0 | - | | 29.0975 | 236650 | 0.0 | - | | 29.1037 | 236700 | 0.0 | - | | 29.1098 | 236750 | 0.0 | - | | 29.1159 | 236800 | 0.0 | - | | 29.1221 | 236850 | 0.0 | - | | 29.1282 | 236900 | 0.0 | - | | 29.1344 | 236950 | 0.0 | - | | 29.1405 | 237000 | 0.0 | - | | 29.1467 | 237050 | 0.0 | - | | 29.1528 | 237100 | 0.0 | - | | 29.1590 | 237150 | 0.0 | - | | 29.1651 | 237200 | 0.0 | - | | 29.1713 | 237250 | 0.0 | - | | 29.1774 | 237300 | 0.0 | - | | 29.1836 | 237350 | 0.0 | - | | 29.1897 | 237400 | 0.0 | - | | 29.1959 | 237450 | 0.0 | - | | 29.2020 | 237500 | 0.0 | - | | 29.2082 | 237550 | 0.0 | - | | 29.2143 | 237600 | 0.0 | - | | 29.2205 | 237650 | 0.0 | - | | 29.2266 | 237700 | 0.0 | - | | 29.2328 | 237750 | 0.0 | - | | 29.2389 | 237800 | 0.0 | - | | 29.2451 | 237850 | 0.0 | - | | 29.2512 | 237900 | 0.0 | - | | 29.2573 | 237950 | 0.0 | - | | 29.2635 | 238000 | 0.0 | - | | 29.2696 | 238050 | 0.0 | - | | 29.2758 | 238100 | 0.0 | - | | 29.2819 
| 238150 | 0.0 | - | | 29.2881 | 238200 | 0.0 | - | | 29.2942 | 238250 | 0.0 | - | | 29.3004 | 238300 | 0.0 | - | | 29.3065 | 238350 | 0.0 | - | | 29.3127 | 238400 | 0.0 | - | | 29.3188 | 238450 | 0.0 | - | | 29.3250 | 238500 | 0.0 | - | | 29.3311 | 238550 | 0.0 | - | | 29.3373 | 238600 | 0.0 | - | | 29.3434 | 238650 | 0.0 | - | | 29.3496 | 238700 | 0.0 | - | | 29.3557 | 238750 | 0.0 | - | | 29.3619 | 238800 | 0.0 | - | | 29.3680 | 238850 | 0.0 | - | | 29.3742 | 238900 | 0.0 | - | | 29.3803 | 238950 | 0.0 | - | | 29.3865 | 239000 | 0.0 | - | | 29.3926 | 239050 | 0.0 | - | | 29.3987 | 239100 | 0.0 | - | | 29.4049 | 239150 | 0.0 | - | | 29.4110 | 239200 | 0.0 | - | | 29.4172 | 239250 | 0.0 | - | | 29.4233 | 239300 | 0.0 | - | | 29.4295 | 239350 | 0.0 | - | | 29.4356 | 239400 | 0.0 | - | | 29.4418 | 239450 | 0.0 | - | | 29.4479 | 239500 | 0.0 | - | | 29.4541 | 239550 | 0.0 | - | | 29.4602 | 239600 | 0.0 | - | | 29.4664 | 239650 | 0.0 | - | | 29.4725 | 239700 | 0.0 | - | | 29.4787 | 239750 | 0.0 | - | | 29.4848 | 239800 | 0.0 | - | | 29.4910 | 239850 | 0.0 | - | | 29.4971 | 239900 | 0.0 | - | | 29.5033 | 239950 | 0.0 | - | | 29.5094 | 240000 | 0.0 | - | | 29.5156 | 240050 | 0.0 | - | | 29.5217 | 240100 | 0.0 | - | | 29.5278 | 240150 | 0.0 | - | | 29.5340 | 240200 | 0.0 | - | | 29.5401 | 240250 | 0.0 | - | | 29.5463 | 240300 | 0.0 | - | | 29.5524 | 240350 | 0.0 | - | | 29.5586 | 240400 | 0.0 | - | | 29.5647 | 240450 | 0.0 | - | | 29.5709 | 240500 | 0.0 | - | | 29.5770 | 240550 | 0.0 | - | | 29.5832 | 240600 | 0.0 | - | | 29.5893 | 240650 | 0.0 | - | | 29.5955 | 240700 | 0.0 | - | | 29.6016 | 240750 | 0.0 | - | | 29.6078 | 240800 | 0.0 | - | | 29.6139 | 240850 | 0.0 | - | | 29.6201 | 240900 | 0.0 | - | | 29.6262 | 240950 | 0.0 | - | | 29.6324 | 241000 | 0.0 | - | | 29.6385 | 241050 | 0.0 | - | | 29.6447 | 241100 | 0.0 | - | | 29.6508 | 241150 | 0.0 | - | | 29.6570 | 241200 | 0.0 | - | | 29.6631 | 241250 | 0.0 | - | | 29.6692 | 241300 | 0.0 | - | | 29.6754 | 241350 | 0.0 
| - | | 29.6815 | 241400 | 0.0 | - | | 29.6877 | 241450 | 0.0 | - | | 29.6938 | 241500 | 0.0 | - | | 29.7000 | 241550 | 0.0 | - | | 29.7061 | 241600 | 0.0 | - | | 29.7123 | 241650 | 0.0 | - | | 29.7184 | 241700 | 0.0 | - | | 29.7246 | 241750 | 0.0 | - | | 29.7307 | 241800 | 0.0 | - | | 29.7369 | 241850 | 0.0 | - | | 29.7430 | 241900 | 0.0 | - | | 29.7492 | 241950 | 0.0 | - | | 29.7553 | 242000 | 0.0 | - | | 29.7615 | 242050 | 0.0 | - | | 29.7676 | 242100 | 0.0 | - | | 29.7738 | 242150 | 0.0 | - | | 29.7799 | 242200 | 0.0 | - | | 29.7861 | 242250 | 0.0 | - | | 29.7922 | 242300 | 0.0 | - | | 29.7984 | 242350 | 0.0 | - | | 29.8045 | 242400 | 0.0 | - | | 29.8106 | 242450 | 0.0 | - | | 29.8168 | 242500 | 0.0 | - | | 29.8229 | 242550 | 0.0 | - | | 29.8291 | 242600 | 0.0 | - | | 29.8352 | 242650 | 0.0 | - | | 29.8414 | 242700 | 0.0 | - | | 29.8475 | 242750 | 0.0 | - | | 29.8537 | 242800 | 0.0 | - | | 29.8598 | 242850 | 0.0 | - | | 29.8660 | 242900 | 0.0 | - | | 29.8721 | 242950 | 0.0 | - | | 29.8783 | 243000 | 0.0 | - | | 29.8844 | 243050 | 0.0 | - | | 29.8906 | 243100 | 0.0 | - | | 29.8967 | 243150 | 0.0 | - | | 29.9029 | 243200 | 0.0 | - | | 29.9090 | 243250 | 0.0 | - | | 29.9152 | 243300 | 0.0 | - | | 29.9213 | 243350 | 0.0 | - | | 29.9275 | 243400 | 0.0 | - | | 29.9336 | 243450 | 0.0 | - | | 29.9398 | 243500 | 0.0 | - | | 29.9459 | 243550 | 0.0 | - | | 29.9520 | 243600 | 0.0 | - | | 29.9582 | 243650 | 0.0 | - | | 29.9643 | 243700 | 0.0 | - | | 29.9705 | 243750 | 0.0 | - | | 29.9766 | 243800 | 0.0 | - | | 29.9828 | 243850 | 0.0 | - | | 29.9889 | 243900 | 0.0 | - | | 29.9951 | 243950 | 0.0 | - |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
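The embedding fine-tuning stage of this run used `CosineSimilarityLoss` (see the training hyperparameters above). As a rough orientation only — this is a minimal numpy sketch with illustrative function names, not the sentence-transformers implementation — the pairwise objective can be written as:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two batches of embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

def cosine_similarity_loss(emb_a: np.ndarray, emb_b: np.ndarray,
                           pair_labels: np.ndarray) -> float:
    """Mean squared error between the pair label (1.0 = same class,
    0.0 = different class) and the cosine similarity of the pair."""
    sims = cosine_similarity(emb_a, emb_b)
    return float(np.mean((pair_labels - sims) ** 2))

# Toy check: a perfectly aligned positive pair and a perfectly
# orthogonal negative pair both contribute zero loss.
emb_a = np.array([[1.0, 0.0], [1.0, 0.0]])
emb_b = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([1.0, 0.0])
print(cosine_similarity_loss(emb_a, emb_b, labels))  # → 0.0
```

In stage one, SetFit samples positive and negative text pairs from the few-shot examples and minimizes this objective, pulling same-class embeddings toward cosine similarity 1.0 and pushing different-class pairs toward 0.0; the logistic-regression head is then fit on the resulting embeddings.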
# SetFit with mini1013/master_domain

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 14 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5 | <ul><li>'비앤진 프로텍트 미스트 앤 픽서 - 100ml x 1개 100ml × 1개 LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트'</li><li>'이니스프리 노세범 미네랄 파우더 5g 3.맨토스(6호-멜론) (#M)홈>화장품/미용>베이스메이크업>파우더>루스파우더 Naverstore > 화장품/미용 > 베이스메이크업 > 파우더 > 루스파우더'</li><li>'LOreal Paris Magic Perfecting Base Face Primer by Studio Secrets Professional 0.50 oz (Pack of 3) One Color_One Size ssg > 뷰티 > 메이크업 > 베이스메이크업 > 메이크업베이스 ssg > 뷰티 > 메이크업 > 베이스메이크업 > 메이크업베이스'</li></ul> | | 4 | <ul><li>'라이콘 스트립 왁스 800g 12종 / lycon strip wax 800g 라이코 플렉스 800g (#M)홈>호주왁싱>라이콘 Naverstore > 화장품/미용 > 바디케어 > 제모제'</li><li>'잉글리쉬 페어 앤 프리지아 센티드 바디 파우더 신세계백화점/헤어/바디/핸드/풋/데오/바스트케어/바스트케어;(#M)SSG.COM/바디케어/기타바디용품 LOREAL > DepartmentSsg > 아틀리에 코롱 > Generic > 여자향수'</li><li>'빅 오버진 에너지 380g - 배쓰 밤/입욕제 (#M)위메프 > 생활·주방용품 > 바디/헤어 > 바디케어/워시/제모 > 입욕제 위메프 > 뷰티 > 바디/헤어 > 바디케어/워시/제모 > 입욕제'</li></ul> | | 11 | <ul><li>'비오템 - 오 비타미네 오드 뚜왈렛 스프레이 100ml/3.38oz LOREAL > Ssg > 비오템 > Branded > 비오템 LOREAL > Ssg > 비오템 > Branded > 비오템'</li><li>'[산타마리아노벨라]포푸리 바소 세라미카 멀티_F (#M)가구/인테리어>인테리어소품>아로마/캔들용품>초/향초 AD > Naverstore > smnovella브랜드스토어 > 전체상품'</li><li>'톰포드 오드우드 50ml LotteOn > 뷰티 > 향수 > 남녀공용향수 LotteOn > 뷰티 > 향수 > 남녀공용향수'</li></ul> | | 6 | <ul><li>'[온세일] 루스 파우더 휴대용기 LotteOn > 뷰티 > 베이스메이크업 > 파우더 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 파우더'</li><li>'반영구재료 임뷰티 엠보니들 20pcs (1box) 1420(흰색) 홈>오늘의 타임특가!!;홈>전체상품;홈>니들;홈>한가위 추석 특가;홈>연말특가 이벤트;홈>원데이 특가상품 이벤트!;홈>주말 이벤트 특가!;(#M)홈>니들&색소 Naverstore > 화장품/미용 > 뷰티소품 > DIY화장품재료'</li><li>'마스터클래스 매니아 세트(브러쉬 2종+클리너) 아이M브러쉬_파운데이션브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 브러쉬 LotteOn > 뷰티 > 뷰티기기/소품 > 메이크업소품 > 
브러쉬'</li></ul> | | 8 | <ul><li>'푸드어홀릭 알로하와이 블렌딩 오일 150ml/태닝오일 (#M)위메프 > 생활·주방용품 > 바디/헤어 > 바디케어/워시/제모 > 태닝용품 위메프 > 뷰티 > 바디/헤어 > 바디케어/워시/제모 > 태닝용품'</li><li>'푸드어홀릭 알로하와이 태닝 오일(150ml) 건강한 구릿빛 피부 - 2개 택배무료 - (#M)11st>바디케어>태닝용품>오일 11st > 뷰티 > 바디케어 > 태닝용품 > 오일'</li><li>'대용량 SNP 아쿠아 쿨링 선 스프레이 200ml (SPF50+) (#M)홈>선크림 Naverstore > 화장품/미용 > 선케어 > 선스프레이'</li></ul> | | 9 | <ul><li>'1+1 유리아쥬 오 떼르말 미스트 300ml 단품 (#M)화장품/향수>색조메이크업>립밤/립케어 Gmarket > 뷰티 > 화장품/향수 > 색조메이크업 > 립밤/립케어'</li><li>'메디힐 티트리 100 오일 247187 10ml × 1개 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트'</li><li>'[매장 정품] 미샤 비폴렌 리뉴 오일 30ml (#M)11st>스킨케어>오일>오일 11st > 뷰티 > 스킨케어 > 오일'</li></ul> | | 1 | <ul><li>'브로앤팁스 수퍼내추럴 톤업 선크림 70ml 70ml × 1개 (#M)쿠팡 홈>뷰티>스킨케어>선케어/태닝>선케어>선블록/선크림/선로션 Coupang > 뷰티 > 남성화장품 > 남성스킨케어 > 선케어'</li><li>'비레디 유브이 디펜스 선 베이스 프레쉬 50ml(SPF50+) (#M)홈>화장품/미용>남성화장품>선크림 Naverstore > 화장품/미용 > 남성화장품 > 선크림'</li><li>'랩시리즈 데일리 레스큐 에너자이징 에센스 150ml LotteOn > 뷰티 > 남성화장품 > 스킨 LotteOn > 뷰티 > 남성화장품 > 스킨'</li></ul> | | 7 | <ul><li>'P.픽싱마스카라 볼륨앤컬링 (#M)뷰티>화장품/향수>포인트메이크업>마스카라 CJmall > 뷰티 > 헤어/바디/미용기기 > 핸드/풋/덴탈케어 > 핸드케어'</li><li>'르 젤 쑤르실 360 블롱 ssg > 뷰티 > 메이크업 > 아이메이크업 > 아이브로우 ssg > 뷰티 > 메이크업 > 아이메이크업 > 아이브로우'</li><li>'조말론 런던 핸드크림 50ml 라임 바질 앤 만다린 피오니 블러쉬 잉글리쉬 페어 프리지아 (핸드크림)라임 바질 앤 만다린 50ml (#M)홈>조말론 Naverstore > 화장품/미용 > 바디케어 > 핸드케어'</li></ul> | | 13 | <ul><li>'입생로랑(향수) 몽 파리 헤어 미스트 30ml MinSellAmount (#M)화장품/향수>네일케어>네일리무버 Gmarket > 뷰티 > 화장품/향수 > 네일케어 > 네일리무버'</li><li>'아윤채 더 리트리브 프로그램 부스팅 스타터 헤어토닉 500g × 4개 (#M)쿠팡 홈>뷰티>헤어>헤어에센스/오일>헤어토닉/스칼프 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어토닉/스칼프'</li><li>'아모스 04 컬링에센스 이엑스보라색150mlX3 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 헤어에센스/오일 LotteOn > 뷰티 > 헤어/바디 > 헤어케어 > 헤어에센스/오일'</li></ul> | | 12 | <ul><li>'컬링에센스2X 숏스타일 150ml 3개_P083181261 옵션/컬링에센스2X 숏스타일 150ml x3 ssg > 뷰티 > 헤어/바디 > 헤어스타일링 > 염색약 ssg > 뷰티 > 헤어/바디 > 헤어스타일링 > 염색약'</li><li>'[후후맘] 미쟝센 스테이지콜렉션 컬링 에센스 2X 230ml 230ml × 1개 (#M)쿠팡 홈>싱글라이프>샤워/세안>헤어에센스 Coupang > 뷰티 > 헤어 > 헤어에센스/오일 > 헤어에센스'</li><li>'휘핑컬리펌(마일드한 발림감, 힘있고 탄력있는 컬) 1제 150ml 2제 
150ml 홈>펌제;홈>브랜드>[아모레레서픽]아모스;(#M)홈>파마약♡매직약>파마약 Naverstore > 화장품/미용 > 헤어스타일링 > 파마약 > 웨이브'</li></ul> | | 2 | <ul><li>'베씨 시럽 컬러 젤 S04 로즈 립글로즈 × 1개 (#M)쿠팡 홈>뷰티>네일>젤네일>컬러 젤 Coupang > 뷰티 > 네일 > 젤네일 > 컬러 젤'</li><li>'[OPI][프로스파] 풋 파일 스트립 120 Grit DepartmentSsg > 명품화장품 > 메이크업 > 네일 DepartmentSsg > 명품화장품 > 메이크업 > 네일'</li><li>'OPI 리무버 엑스퍼트터치리무버 450ml OPI 리무버 폴리쉬리무버(그린) 960ml LotteOn > 뷰티 > 네일 > 네일케어 > 네일리무버 LotteOn > 뷰티 > 네일 > 네일케어 > 네일리무버'</li></ul> | | 0 | <ul><li>'다슈 데일리 에브리웨어 트래블 키트 3종 택1 02_클렌징 키트 홈>브랜드별>다슈;(#M)홈>세트 상품 Naverstore > 화장품/미용 > 클렌징 > 클렌징세트'</li><li>'2+1 한정수량)미니스 여행용 세트(포켓몬) MinSellAmount (#M)바디/헤어>헤어케어>샴푸/린스 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 샴푸/린스'</li><li>'[동구밭] 올바른 샴푸바/린스바/설거지바 제로웨이스트 [동구밭] 배러댄리퀴드 여행세트 (#M)헤어케어>샴푸>샴푸바 AD > 11st > 뷰티 > 헤어케어 > 샴푸 > 샴푸바'</li></ul> | | 10 | <ul><li>'에뛰드 마스카라 전용리무버 80ml MinSellAmount (#M)화장품/향수>클렌징/필링>립앤아이리무버 Gmarket > 뷰티 > 화장품/향수 > 클렌징/필링 > 립앤아이리무버'</li><li>'다나한 피부케어 효용 클렌징 크림 300ml 화장품 클렌 MinSellAmount (#M)화장품/향수>클렌징/필링>클렌징크림 Gmarket > 뷰티 > 화장품/향수 > 클렌징/필링 > 클렌징크림'</li><li>'대용량 곡물 페이셜 클렌징워터 500ml+파우치5매 (#M)뷰티>화장품/향수>클렌징>클렌징워터 CJmall > 뷰티 > 화장품/향수 > 클렌징 > 클렌징워터'</li></ul> | | 3 | <ul><li>'바세린 수분 보습 영양 시트 마스크 팩 10매 바세린 수분 시트 마스크 10매 (#M)홈>화장품/미용>마스크/팩>마스크시트 Naverstore > 화장품/미용 > 마스크/팩 > 마스크시트'</li><li>'클리어 임프루브먼트 엑티브 차콜 마스크 75ml (#M)뷰티>화장품/향수>스킨케어>팩/마스크 CJmall > 뷰티 > 화장품/향수 > 스킨케어 > 팩/마스크'</li><li>'바세린 영양 시트 마스크23ml 10매X2 MinSellAmount (#M)화장품/향수>팩/마스크>마스크시트 Gmarket > 뷰티 > 화장품/향수 > 팩/마스크 > 마스크시트'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9742 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_main_item_top_bt")
# Run inference
preds = model("헤라 메이크업 픽서 (110ml) LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 10  | 21.9195 | 91  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 50                    |
| 1     | 514                   |
| 2     | 200                   |
| 3     | 200                   |
| 4     | 746                   |
| 5     | 350                   |
| 6     | 350                   |
| 7     | 600                   |
| 8     | 249                   |
| 9     | 550                   |
| 10    | 350                   |
| 11    | 200                   |
| 12    | 350                   |
| 13    | 496                   |

### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 100
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.3213 | - | | 0.0061 | 50 | 0.3448 | - | | 0.0123 | 100 | 0.3538 | - | | 0.0184 | 150 | 0.3436 | - | | 0.0246 | 200 | 0.3537 | - | | 0.0307 | 250 | 0.3403 | - | | 0.0369 | 300 | 0.3446 | - | | 0.0430 | 350 | 0.3437 | - | | 0.0492 | 
400 | 0.3368 | - | | 0.0553 | 450 | 0.3344 | - | | 0.0615 | 500 | 0.3235 | - | | 0.0676 | 550 | 0.324 | - | | 0.0738 | 600 | 0.3107 | - | | 0.0799 | 650 | 0.332 | - | | 0.0861 | 700 | 0.3147 | - | | 0.0922 | 750 | 0.2985 | - | | 0.0984 | 800 | 0.2989 | - | | 0.1045 | 850 | 0.2885 | - | | 0.1107 | 900 | 0.2791 | - | | 0.1168 | 950 | 0.2787 | - | | 0.1230 | 1000 | 0.2647 | - | | 0.1291 | 1050 | 0.2546 | - | | 0.1353 | 1100 | 0.243 | - | | 0.1414 | 1150 | 0.232 | - | | 0.1475 | 1200 | 0.2172 | - | | 0.1537 | 1250 | 0.2114 | - | | 0.1598 | 1300 | 0.1996 | - | | 0.1660 | 1350 | 0.2017 | - | | 0.1721 | 1400 | 0.194 | - | | 0.1783 | 1450 | 0.1875 | - | | 0.1844 | 1500 | 0.1961 | - | | 0.1906 | 1550 | 0.1897 | - | | 0.1967 | 1600 | 0.1869 | - | | 0.2029 | 1650 | 0.1877 | - | | 0.2090 | 1700 | 0.181 | - | | 0.2152 | 1750 | 0.1817 | - | | 0.2213 | 1800 | 0.185 | - | | 0.2275 | 1850 | 0.1818 | - | | 0.2336 | 1900 | 0.1761 | - | | 0.2398 | 1950 | 0.1788 | - | | 0.2459 | 2000 | 0.1782 | - | | 0.2521 | 2050 | 0.1728 | - | | 0.2582 | 2100 | 0.1718 | - | | 0.2644 | 2150 | 0.1724 | - | | 0.2705 | 2200 | 0.1668 | - | | 0.2767 | 2250 | 0.1664 | - | | 0.2828 | 2300 | 0.1699 | - | | 0.2889 | 2350 | 0.1663 | - | | 0.2951 | 2400 | 0.1599 | - | | 0.3012 | 2450 | 0.1656 | - | | 0.3074 | 2500 | 0.1615 | - | | 0.3135 | 2550 | 0.1571 | - | | 0.3197 | 2600 | 0.1579 | - | | 0.3258 | 2650 | 0.1545 | - | | 0.3320 | 2700 | 0.1578 | - | | 0.3381 | 2750 | 0.1501 | - | | 0.3443 | 2800 | 0.1614 | - | | 0.3504 | 2850 | 0.1521 | - | | 0.3566 | 2900 | 0.1546 | - | | 0.3627 | 2950 | 0.1552 | - | | 0.3689 | 3000 | 0.1509 | - | | 0.3750 | 3050 | 0.1442 | - | | 0.3812 | 3100 | 0.1435 | - | | 0.3873 | 3150 | 0.1456 | - | | 0.3935 | 3200 | 0.1429 | - | | 0.3996 | 3250 | 0.1443 | - | | 0.4058 | 3300 | 0.1432 | - | | 0.4119 | 3350 | 0.1389 | - | | 0.4180 | 3400 | 0.1387 | - | | 0.4242 | 3450 | 0.1336 | - | | 0.4303 | 3500 | 0.1387 | - | | 0.4365 | 3550 | 0.1331 | - | | 0.4426 | 3600 | 0.1361 | - | | 0.4488 | 
3650 | 0.1379 | - | | 0.4549 | 3700 | 0.1287 | - | | 0.4611 | 3750 | 0.1355 | - | | 0.4672 | 3800 | 0.1279 | - | | 0.4734 | 3850 | 0.1256 | - | | 0.4795 | 3900 | 0.1225 | - | | 0.4857 | 3950 | 0.1279 | - | | 0.4918 | 4000 | 0.1198 | - | | 0.4980 | 4050 | 0.1216 | - | | 0.5041 | 4100 | 0.1181 | - | | 0.5103 | 4150 | 0.1185 | - | | 0.5164 | 4200 | 0.1166 | - | | 0.5226 | 4250 | 0.1121 | - | | 0.5287 | 4300 | 0.1161 | - | | 0.5349 | 4350 | 0.1146 | - | | 0.5410 | 4400 | 0.1111 | - | | 0.5472 | 4450 | 0.1064 | - | | 0.5533 | 4500 | 0.1076 | - | | 0.5594 | 4550 | 0.1035 | - | | 0.5656 | 4600 | 0.1016 | - | | 0.5717 | 4650 | 0.0984 | - | | 0.5779 | 4700 | 0.1051 | - | | 0.5840 | 4750 | 0.0962 | - | | 0.5902 | 4800 | 0.0928 | - | | 0.5963 | 4850 | 0.094 | - | | 0.6025 | 4900 | 0.0889 | - | | 0.6086 | 4950 | 0.0881 | - | | 0.6148 | 5000 | 0.0926 | - | | 0.6209 | 5050 | 0.0975 | - | | 0.6271 | 5100 | 0.0851 | - | | 0.6332 | 5150 | 0.0916 | - | | 0.6394 | 5200 | 0.087 | - | | 0.6455 | 5250 | 0.086 | - | | 0.6517 | 5300 | 0.0865 | - | | 0.6578 | 5350 | 0.0804 | - | | 0.6640 | 5400 | 0.0805 | - | | 0.6701 | 5450 | 0.0815 | - | | 0.6763 | 5500 | 0.078 | - | | 0.6824 | 5550 | 0.079 | - | | 0.6886 | 5600 | 0.0736 | - | | 0.6947 | 5650 | 0.0763 | - | | 0.7008 | 5700 | 0.0766 | - | | 0.7070 | 5750 | 0.075 | - | | 0.7131 | 5800 | 0.072 | - | | 0.7193 | 5850 | 0.0731 | - | | 0.7254 | 5900 | 0.071 | - | | 0.7316 | 5950 | 0.0681 | - | | 0.7377 | 6000 | 0.0682 | - | | 0.7439 | 6050 | 0.0677 | - | | 0.7500 | 6100 | 0.0673 | - | | 0.7562 | 6150 | 0.059 | - | | 0.7623 | 6200 | 0.0619 | - | | 0.7685 | 6250 | 0.061 | - | | 0.7746 | 6300 | 0.0598 | - | | 0.7808 | 6350 | 0.0582 | - | | 0.7869 | 6400 | 0.0559 | - | | 0.7931 | 6450 | 0.0582 | - | | 0.7992 | 6500 | 0.0592 | - | | 0.8054 | 6550 | 0.0563 | - | | 0.8115 | 6600 | 0.0565 | - | | 0.8177 | 6650 | 0.0508 | - | | 0.8238 | 6700 | 0.0535 | - | | 0.8300 | 6750 | 0.0513 | - | | 0.8361 | 6800 | 0.0492 | - | | 0.8422 | 6850 | 0.0497 | - | | 
0.8484 | 6900 | 0.0546 | - | | 0.8545 | 6950 | 0.0515 | - | | 0.8607 | 7000 | 0.0488 | - | | 0.8668 | 7050 | 0.0461 | - | | 0.8730 | 7100 | 0.0431 | - | | 0.8791 | 7150 | 0.0446 | - | | 0.8853 | 7200 | 0.0447 | - | | 0.8914 | 7250 | 0.0453 | - | | 0.8976 | 7300 | 0.0424 | - | | 0.9037 | 7350 | 0.0403 | - | | 0.9099 | 7400 | 0.0437 | - | | 0.9160 | 7450 | 0.0438 | - | | 0.9222 | 7500 | 0.039 | - | | 0.9283 | 7550 | 0.0411 | - | | 0.9345 | 7600 | 0.0389 | - | | 0.9406 | 7650 | 0.0356 | - | | 0.9468 | 7700 | 0.0387 | - | | 0.9529 | 7750 | 0.0363 | - | | 0.9591 | 7800 | 0.0368 | - | | 0.9652 | 7850 | 0.0366 | - | | 0.9714 | 7900 | 0.0348 | - | | 0.9775 | 7950 | 0.033 | - | | 0.9836 | 8000 | 0.0345 | - | | 0.9898 | 8050 | 0.032 | - | | 0.9959 | 8100 | 0.0336 | - | | 1.0021 | 8150 | 0.0314 | - | | 1.0082 | 8200 | 0.0324 | - | | 1.0144 | 8250 | 0.0298 | - | | 1.0205 | 8300 | 0.0325 | - | | 1.0267 | 8350 | 0.029 | - | | 1.0328 | 8400 | 0.0268 | - | | 1.0390 | 8450 | 0.028 | - | | 1.0451 | 8500 | 0.0313 | - | | 1.0513 | 8550 | 0.0281 | - | | 1.0574 | 8600 | 0.0273 | - | | 1.0636 | 8650 | 0.0276 | - | | 1.0697 | 8700 | 0.0265 | - | | 1.0759 | 8750 | 0.0274 | - | | 1.0820 | 8800 | 0.0248 | - | | 1.0882 | 8850 | 0.0256 | - | | 1.0943 | 8900 | 0.0255 | - | | 1.1005 | 8950 | 0.0279 | - | | 1.1066 | 9000 | 0.0241 | - | | 1.1128 | 9050 | 0.0249 | - | | 1.1189 | 9100 | 0.0247 | - | | 1.1250 | 9150 | 0.021 | - | | 1.1312 | 9200 | 0.0222 | - | | 1.1373 | 9250 | 0.0233 | - | | 1.1435 | 9300 | 0.0255 | - | | 1.1496 | 9350 | 0.0257 | - | | 1.1558 | 9400 | 0.0228 | - | | 1.1619 | 9450 | 0.0227 | - | | 1.1681 | 9500 | 0.0223 | - | | 1.1742 | 9550 | 0.022 | - | | 1.1804 | 9600 | 0.0226 | - | | 1.1865 | 9650 | 0.0197 | - | | 1.1927 | 9700 | 0.0196 | - | | 1.1988 | 9750 | 0.0224 | - | | 1.2050 | 9800 | 0.023 | - | | 1.2111 | 9850 | 0.02 | - | | 1.2173 | 9900 | 0.0237 | - | | 1.2234 | 9950 | 0.0193 | - | | 1.2296 | 10000 | 0.0196 | - | | 1.2357 | 10050 | 0.0202 | - | | 1.2419 | 10100 | 0.0189 
| - | | 1.2480 | 10150 | 0.0216 | - | | 1.2541 | 10200 | 0.0165 | - | | 1.2603 | 10250 | 0.0186 | - | | 1.2664 | 10300 | 0.0185 | - | | 1.2726 | 10350 | 0.0236 | - | | 1.2787 | 10400 | 0.019 | - | | 1.2849 | 10450 | 0.0173 | - | | 1.2910 | 10500 | 0.0208 | - | | 1.2972 | 10550 | 0.0189 | - | | 1.3033 | 10600 | 0.0182 | - | | 1.3095 | 10650 | 0.0191 | - | | 1.3156 | 10700 | 0.0205 | - | | 1.3218 | 10750 | 0.0193 | - | | 1.3279 | 10800 | 0.0185 | - | | 1.3341 | 10850 | 0.0206 | - | | 1.3402 | 10900 | 0.0187 | - | | 1.3464 | 10950 | 0.0142 | - | | 1.3525 | 11000 | 0.02 | - | | 1.3587 | 11050 | 0.0161 | - | | 1.3648 | 11100 | 0.0184 | - | | 1.3710 | 11150 | 0.0158 | - | | 1.3771 | 11200 | 0.0165 | - | | 1.3833 | 11250 | 0.0182 | - | | 1.3894 | 11300 | 0.0159 | - | | 1.3955 | 11350 | 0.016 | - | | 1.4017 | 11400 | 0.0185 | - | | 1.4078 | 11450 | 0.0174 | - | | 1.4140 | 11500 | 0.0157 | - | | 1.4201 | 11550 | 0.017 | - | | 1.4263 | 11600 | 0.0159 | - | | 1.4324 | 11650 | 0.0173 | - | | 1.4386 | 11700 | 0.0152 | - | | 1.4447 | 11750 | 0.016 | - | | 1.4509 | 11800 | 0.0135 | - | | 1.4570 | 11850 | 0.0146 | - | | 1.4632 | 11900 | 0.0122 | - | | 1.4693 | 11950 | 0.0156 | - | | 1.4755 | 12000 | 0.0132 | - | | 1.4816 | 12050 | 0.0137 | - | | 1.4878 | 12100 | 0.0157 | - | | 1.4939 | 12150 | 0.0171 | - | | 1.5001 | 12200 | 0.0155 | - | | 1.5062 | 12250 | 0.0135 | - | | 1.5124 | 12300 | 0.0119 | - | | 1.5185 | 12350 | 0.0132 | - | | 1.5247 | 12400 | 0.0139 | - | | 1.5308 | 12450 | 0.0137 | - | | 1.5369 | 12500 | 0.0156 | - | | 1.5431 | 12550 | 0.0158 | - | | 1.5492 | 12600 | 0.0165 | - | | 1.5554 | 12650 | 0.0142 | - | | 1.5615 | 12700 | 0.0148 | - | | 1.5677 | 12750 | 0.0122 | - | | 1.5738 | 12800 | 0.0129 | - | | 1.5800 | 12850 | 0.0116 | - | | 1.5861 | 12900 | 0.0119 | - | | 1.5923 | 12950 | 0.0096 | - | | 1.5984 | 13000 | 0.0119 | - | | 1.6046 | 13050 | 0.0142 | - | | 1.6107 | 13100 | 0.0081 | - | | 1.6169 | 13150 | 0.0117 | - | | 1.6230 | 13200 | 0.0102 | - | | 1.6292 | 
13250 | 0.0105 | - | | 1.6353 | 13300 | 0.0115 | - | | 1.6415 | 13350 | 0.01 | - | | 1.6476 | 13400 | 0.0093 | - | | 1.6538 | 13450 | 0.0097 | - | | 1.6599 | 13500 | 0.0113 | - | | 1.6661 | 13550 | 0.0086 | - | | 1.6722 | 13600 | 0.0118 | - | | 1.6783 | 13650 | 0.0096 | - | | 1.6845 | 13700 | 0.0082 | - | | 1.6906 | 13750 | 0.0095 | - | | 1.6968 | 13800 | 0.0102 | - | | 1.7029 | 13850 | 0.0118 | - | | 1.7091 | 13900 | 0.0112 | - | | 1.7152 | 13950 | 0.009 | - | | 1.7214 | 14000 | 0.0111 | - | | 1.7275 | 14050 | 0.0093 | - | | 1.7337 | 14100 | 0.008 | - | | 1.7398 | 14150 | 0.0084 | - | | 1.7460 | 14200 | 0.0082 | - | | 1.7521 | 14250 | 0.0088 | - | | 1.7583 | 14300 | 0.0122 | - | | 1.7644 | 14350 | 0.0087 | - | | 1.7706 | 14400 | 0.0084 | - | | 1.7767 | 14450 | 0.0073 | - | | 1.7829 | 14500 | 0.0079 | - | | 1.7890 | 14550 | 0.0064 | - | | 1.7952 | 14600 | 0.0069 | - | | 1.8013 | 14650 | 0.0083 | - | | 1.8075 | 14700 | 0.0071 | - | | 1.8136 | 14750 | 0.0066 | - | | 1.8197 | 14800 | 0.0086 | - | | 1.8259 | 14850 | 0.0091 | - | | 1.8320 | 14900 | 0.008 | - | | 1.8382 | 14950 | 0.0071 | - | | 1.8443 | 15000 | 0.0072 | - | | 1.8505 | 15050 | 0.0077 | - | | 1.8566 | 15100 | 0.0086 | - | | 1.8628 | 15150 | 0.0066 | - | | 1.8689 | 15200 | 0.007 | - | | 1.8751 | 15250 | 0.0065 | - | | 1.8812 | 15300 | 0.0073 | - | | 1.8874 | 15350 | 0.0074 | - | | 1.8935 | 15400 | 0.0063 | - | | 1.8997 | 15450 | 0.0064 | - | | 1.9058 | 15500 | 0.0067 | - | | 1.9120 | 15550 | 0.0081 | - | | 1.9181 | 15600 | 0.0083 | - | | 1.9243 | 15650 | 0.0068 | - | | 1.9304 | 15700 | 0.005 | - | | 1.9366 | 15750 | 0.0055 | - | | 1.9427 | 15800 | 0.0061 | - | | 1.9489 | 15850 | 0.0057 | - | | 1.9550 | 15900 | 0.0056 | - | | 1.9611 | 15950 | 0.0057 | - | | 1.9673 | 16000 | 0.0059 | - | | 1.9734 | 16050 | 0.0046 | - | | 1.9796 | 16100 | 0.0077 | - | | 1.9857 | 16150 | 0.0067 | - | | 1.9919 | 16200 | 0.0067 | - | | 1.9980 | 16250 | 0.007 | - | | 2.0042 | 16300 | 0.0055 | - | | 2.0103 | 16350 | 0.0049 | - | | 
2.0165 | 16400 | 0.0075 | - | | 2.0226 | 16450 | 0.0064 | - | | 2.0288 | 16500 | 0.0062 | - | | 2.0349 | 16550 | 0.006 | - | | 2.0411 | 16600 | 0.006 | - | | 2.0472 | 16650 | 0.0054 | - | | 2.0534 | 16700 | 0.0056 | - | | 2.0595 | 16750 | 0.0047 | - | | 2.0657 | 16800 | 0.0068 | - | | 2.0718 | 16850 | 0.0056 | - | | 2.0780 | 16900 | 0.0055 | - | | 2.0841 | 16950 | 0.0056 | - | | 2.0902 | 17000 | 0.0046 | - | | 2.0964 | 17050 | 0.0048 | - | | 2.1025 | 17100 | 0.0067 | - | | 2.1087 | 17150 | 0.0038 | - | | 2.1148 | 17200 | 0.0051 | - | | 2.1210 | 17250 | 0.004 | - | | 2.1271 | 17300 | 0.0054 | - | | 2.1333 | 17350 | 0.0053 | - | | 2.1394 | 17400 | 0.0041 | - | | 2.1456 | 17450 | 0.0048 | - | | 2.1517 | 17500 | 0.0032 | - | | 2.1579 | 17550 | 0.0043 | - | | 2.1640 | 17600 | 0.0046 | - | | 2.1702 | 17650 | 0.0047 | - | | 2.1763 | 17700 | 0.0035 | - | | 2.1825 | 17750 | 0.0033 | - | | 2.1886 | 17800 | 0.0043 | - | | 2.1948 | 17850 | 0.0042 | - | | 2.2009 | 17900 | 0.0033 | - | | 2.2071 | 17950 | 0.0059 | - | | 2.2132 | 18000 | 0.0054 | - | | 2.2194 | 18050 | 0.0042 | - | | 2.2255 | 18100 | 0.0034 | - | | 2.2316 | 18150 | 0.0043 | - | | 2.2378 | 18200 | 0.0052 | - | | 2.2439 | 18250 | 0.005 | - | | 2.2501 | 18300 | 0.0061 | - | | 2.2562 | 18350 | 0.0057 | - | | 2.2624 | 18400 | 0.0043 | - | | 2.2685 | 18450 | 0.0044 | - | | 2.2747 | 18500 | 0.0038 | - | | 2.2808 | 18550 | 0.0045 | - | | 2.2870 | 18600 | 0.0043 | - | | 2.2931 | 18650 | 0.0042 | - | | 2.2993 | 18700 | 0.0037 | - | | 2.3054 | 18750 | 0.0042 | - | | 2.3116 | 18800 | 0.0039 | - | | 2.3177 | 18850 | 0.0041 | - | | 2.3239 | 18900 | 0.004 | - | | 2.3300 | 18950 | 0.0033 | - | | 2.3362 | 19000 | 0.003 | - | | 2.3423 | 19050 | 0.0044 | - | | 2.3485 | 19100 | 0.003 | - | | 2.3546 | 19150 | 0.0039 | - | | 2.3608 | 19200 | 0.0019 | - | | 2.3669 | 19250 | 0.0025 | - | | 2.3730 | 19300 | 0.0035 | - | | 2.3792 | 19350 | 0.0032 | - | | 2.3853 | 19400 | 0.0033 | - | | 2.3915 | 19450 | 0.004 | - | | 2.3976 | 19500 | 0.0042 
| - | | 2.4038 | 19550 | 0.0041 | - | | 2.4099 | 19600 | 0.0044 | - | | 2.4161 | 19650 | 0.0024 | - | | 2.4222 | 19700 | 0.002 | - | | 2.4284 | 19750 | 0.0035 | - | | 2.4345 | 19800 | 0.0022 | - | | 2.4407 | 19850 | 0.0047 | - | | 2.4468 | 19900 | 0.003 | - | | 2.4530 | 19950 | 0.0028 | - | | 2.4591 | 20000 | 0.0024 | - | | 2.4653 | 20050 | 0.0018 | - | | 2.4714 | 20100 | 0.0027 | - | | 2.4776 | 20150 | 0.0022 | - | | 2.4837 | 20200 | 0.0023 | - | | 2.4899 | 20250 | 0.0024 | - | | 2.4960 | 20300 | 0.0022 | - | | 2.5022 | 20350 | 0.0023 | - | | 2.5083 | 20400 | 0.0022 | - | | 2.5144 | 20450 | 0.0018 | - | | 2.5206 | 20500 | 0.0025 | - | | 2.5267 | 20550 | 0.0035 | - | | 2.5329 | 20600 | 0.0024 | - | | 2.5390 | 20650 | 0.0028 | - | | 2.5452 | 20700 | 0.0021 | - | | 2.5513 | 20750 | 0.0019 | - | | 2.5575 | 20800 | 0.0041 | - | | 2.5636 | 20850 | 0.0028 | - | | 2.5698 | 20900 | 0.0013 | - | | 2.5759 | 20950 | 0.0034 | - | | 2.5821 | 21000 | 0.0023 | - | | 2.5882 | 21050 | 0.0023 | - | | 2.5944 | 21100 | 0.0031 | - | | 2.6005 | 21150 | 0.0017 | - | | 2.6067 | 21200 | 0.0021 | - | | 2.6128 | 21250 | 0.003 | - | | 2.6190 | 21300 | 0.003 | - | | 2.6251 | 21350 | 0.0022 | - | | 2.6313 | 21400 | 0.0016 | - | | 2.6374 | 21450 | 0.0013 | - | | 2.6436 | 21500 | 0.0005 | - | | 2.6497 | 21550 | 0.0014 | - | | 2.6558 | 21600 | 0.0025 | - | | 2.6620 | 21650 | 0.001 | - | | 2.6681 | 21700 | 0.001 | - | | 2.6743 | 21750 | 0.0011 | - | | 2.6804 | 21800 | 0.0008 | - | | 2.6866 | 21850 | 0.0019 | - | | 2.6927 | 21900 | 0.0024 | - | | 2.6989 | 21950 | 0.0024 | - | | 2.7050 | 22000 | 0.0029 | - | | 2.7112 | 22050 | 0.0026 | - | | 2.7173 | 22100 | 0.0028 | - | | 2.7235 | 22150 | 0.0022 | - | | 2.7296 | 22200 | 0.0038 | - | | 2.7358 | 22250 | 0.0031 | - | | 2.7419 | 22300 | 0.0031 | - | | 2.7481 | 22350 | 0.0038 | - | | 2.7542 | 22400 | 0.003 | - | | 2.7604 | 22450 | 0.0033 | - | | 2.7665 | 22500 | 0.0028 | - | | 2.7727 | 22550 | 0.0022 | - | | 2.7788 | 22600 | 0.0032 | - | | 2.7850 | 22650 
| 0.0027 | - | | 2.7911 | 22700 | 0.0022 | - | | 2.7972 | 22750 | 0.0022 | - | | 2.8034 | 22800 | 0.0025 | - | | 2.8095 | 22850 | 0.0016 | - | | 2.8157 | 22900 | 0.0015 | - | | 2.8218 | 22950 | 0.0016 | - | | 2.8280 | 23000 | 0.0017 | - | | 2.8341 | 23050 | 0.0018 | - | | 2.8403 | 23100 | 0.0012 | - | | 2.8464 | 23150 | 0.0009 | - | | 2.8526 | 23200 | 0.0011 | - | | 2.8587 | 23250 | 0.0008 | - | | 2.8649 | 23300 | 0.0004 | - | | 2.8710 | 23350 | 0.0009 | - | | 2.8772 | 23400 | 0.0015 | - | | 2.8833 | 23450 | 0.0011 | - | | 2.8895 | 23500 | 0.0011 | - | | 2.8956 | 23550 | 0.0011 | - | | 2.9018 | 23600 | 0.0011 | - | | 2.9079 | 23650 | 0.0011 | - | | 2.9141 | 23700 | 0.0008 | - | | 2.9202 | 23750 | 0.0008 | - | | 2.9263 | 23800 | 0.0012 | - | | 2.9325 | 23850 | 0.0028 | - | | 2.9386 | 23900 | 0.0012 | - | | 2.9448 | 23950 | 0.0026 | - | | 2.9509 | 24000 | 0.0017 | - | | 2.9571 | 24050 | 0.0026 | - | | 2.9632 | 24100 | 0.0016 | - | | 2.9694 | 24150 | 0.0016 | - | | 2.9755 | 24200 | 0.0055 | - | | 2.9817 | 24250 | 0.0039 | - | | 2.9878 | 24300 | 0.003 | - | | 2.9940 | 24350 | 0.0031 | - | | 3.0001 | 24400 | 0.0026 | - | | 3.0063 | 24450 | 0.0024 | - | | 3.0124 | 24500 | 0.0021 | - | | 3.0186 | 24550 | 0.0019 | - | | 3.0247 | 24600 | 0.0025 | - | | 3.0309 | 24650 | 0.0036 | - | | 3.0370 | 24700 | 0.0023 | - | | 3.0432 | 24750 | 0.0014 | - | | 3.0493 | 24800 | 0.002 | - | | 3.0555 | 24850 | 0.0022 | - | | 3.0616 | 24900 | 0.0019 | - | | 3.0677 | 24950 | 0.0022 | - | | 3.0739 | 25000 | 0.0022 | - | | 3.0800 | 25050 | 0.0022 | - | | 3.0862 | 25100 | 0.0022 | - | | 3.0923 | 25150 | 0.001 | - | | 3.0985 | 25200 | 0.0027 | - | | 3.1046 | 25250 | 0.0018 | - | | 3.1108 | 25300 | 0.002 | - | | 3.1169 | 25350 | 0.0021 | - | | 3.1231 | 25400 | 0.0014 | - | | 3.1292 | 25450 | 0.0017 | - | | 3.1354 | 25500 | 0.001 | - | | 3.1415 | 25550 | 0.0015 | - | | 3.1477 | 25600 | 0.0013 | - | | 3.1538 | 25650 | 0.0015 | - | | 3.1600 | 25700 | 0.0016 | - | | 3.1661 | 25750 | 0.0007 | - | | 
3.1723 | 25800 | 0.001 | - | | 3.1784 | 25850 | 0.0005 | - | | 3.1846 | 25900 | 0.001 | - | | 3.1907 | 25950 | 0.0007 | - | | 3.1969 | 26000 | 0.0017 | - | | 3.2030 | 26050 | 0.0014 | - | | 3.2091 | 26100 | 0.0008 | - | | 3.2153 | 26150 | 0.0011 | - | | 3.2214 | 26200 | 0.002 | - | | 3.2276 | 26250 | 0.0018 | - | | 3.2337 | 26300 | 0.0013 | - | | 3.2399 | 26350 | 0.0013 | - | | 3.2460 | 26400 | 0.0014 | - | | 3.2522 | 26450 | 0.0021 | - | | 3.2583 | 26500 | 0.0015 | - | | 3.2645 | 26550 | 0.0021 | - | | 3.2706 | 26600 | 0.0021 | - | | 3.2768 | 26650 | 0.0018 | - | | 3.2829 | 26700 | 0.0036 | - | | 3.2891 | 26750 | 0.0026 | - | | 3.2952 | 26800 | 0.0023 | - | | 3.3014 | 26850 | 0.003 | - | | 3.3075 | 26900 | 0.0018 | - | | 3.3137 | 26950 | 0.0024 | - | | 3.3198 | 27000 | 0.0013 | - | | 3.3260 | 27050 | 0.0016 | - | | 3.3321 | 27100 | 0.0011 | - | | 3.3383 | 27150 | 0.0014 | - | | 3.3444 | 27200 | 0.0018 | - | | 3.3505 | 27250 | 0.0017 | - | | 3.3567 | 27300 | 0.0014 | - | | 3.3628 | 27350 | 0.0019 | - | | 3.3690 | 27400 | 0.0016 | - | | 3.3751 | 27450 | 0.0012 | - | | 3.3813 | 27500 | 0.0008 | - | | 3.3874 | 27550 | 0.0015 | - | | 3.3936 | 27600 | 0.0007 | - | | 3.3997 | 27650 | 0.0001 | - | | 3.4059 | 27700 | 0.0007 | - | | 3.4120 | 27750 | 0.0012 | - | | 3.4182 | 27800 | 0.0015 | - | | 3.4243 | 27850 | 0.0014 | - | | 3.4305 | 27900 | 0.002 | - | | 3.4366 | 27950 | 0.0011 | - | | 3.4428 | 28000 | 0.0012 | - | | 3.4489 | 28050 | 0.0009 | - | | 3.4551 | 28100 | 0.001 | - | | 3.4612 | 28150 | 0.0015 | - | | 3.4674 | 28200 | 0.0019 | - | | 3.4735 | 28250 | 0.0018 | - | | 3.4797 | 28300 | 0.0016 | - | | 3.4858 | 28350 | 0.002 | - | | 3.4919 | 28400 | 0.0032 | - | | 3.4981 | 28450 | 0.0044 | - | | 3.5042 | 28500 | 0.0035 | - | | 3.5104 | 28550 | 0.0026 | - | | 3.5165 | 28600 | 0.0032 | - | | 3.5227 | 28650 | 0.0034 | - | | 3.5288 | 28700 | 0.0033 | - | | 3.5350 | 28750 | 0.0037 | - | | 3.5411 | 28800 | 0.0028 | - | | 3.5473 | 28850 | 0.0025 | - | | 3.5534 | 28900 | 
0.0021 | - | | 3.5596 | 28950 | 0.0022 | - | | 3.5657 | 29000 | 0.0018 | - | | 3.5719 | 29050 | 0.0022 | - | | 3.5780 | 29100 | 0.0016 | - | | 3.5842 | 29150 | 0.0009 | - | | 3.5903 | 29200 | 0.0009 | - | | 3.5965 | 29250 | 0.0017 | - | | 3.6026 | 29300 | 0.0012 | - | | 3.6088 | 29350 | 0.0012 | - | | 3.6149 | 29400 | 0.0015 | - | | 3.6211 | 29450 | 0.0009 | - | | 3.6272 | 29500 | 0.0022 | - | | 3.6333 | 29550 | 0.0021 | - | | 3.6395 | 29600 | 0.001 | - | | 3.6456 | 29650 | 0.0028 | - | | 3.6518 | 29700 | 0.0016 | - | | 3.6579 | 29750 | 0.0022 | - | | 3.6641 | 29800 | 0.0017 | - | | 3.6702 | 29850 | 0.0018 | - | | 3.6764 | 29900 | 0.0017 | - | | 3.6825 | 29950 | 0.0011 | - | | 3.6887 | 30000 | 0.0005 | - | | 3.6948 | 30050 | 0.0012 | - | | 3.7010 | 30100 | 0.0014 | - | | 3.7071 | 30150 | 0.0017 | - | | 3.7133 | 30200 | 0.0006 | - | | 3.7194 | 30250 | 0.0005 | - | | 3.7256 | 30300 | 0.0007 | - | | 3.7317 | 30350 | 0.0016 | - | | 3.7379 | 30400 | 0.0019 | - | | 3.7440 | 30450 | 0.0016 | - | | 3.7502 | 30500 | 0.0016 | - | | 3.7563 | 30550 | 0.0011 | - | | 3.7624 | 30600 | 0.0026 | - | | 3.7686 | 30650 | 0.0012 | - | | 3.7747 | 30700 | 0.0014 | - | | 3.7809 | 30750 | 0.0012 | - | | 3.7870 | 30800 | 0.002 | - | | 3.7932 | 30850 | 0.0014 | - | | 3.7993 | 30900 | 0.0012 | - | | 3.8055 | 30950 | 0.001 | - | | 3.8116 | 31000 | 0.0017 | - | | 3.8178 | 31050 | 0.0013 | - | | 3.8239 | 31100 | 0.0017 | - | | 3.8301 | 31150 | 0.0018 | - | | 3.8362 | 31200 | 0.0009 | - | | 3.8424 | 31250 | 0.0018 | - | | 3.8485 | 31300 | 0.0019 | - | | 3.8547 | 31350 | 0.0017 | - | | 3.8608 | 31400 | 0.0013 | - | | 3.8670 | 31450 | 0.0046 | - | | 3.8731 | 31500 | 0.0026 | - | | 3.8793 | 31550 | 0.0019 | - | | 3.8854 | 31600 | 0.0019 | - | | 3.8916 | 31650 | 0.0014 | - | | 3.8977 | 31700 | 0.0016 | - | | 3.9038 | 31750 | 0.0025 | - | | 3.9100 | 31800 | 0.0034 | - | | 3.9161 | 31850 | 0.002 | - | | 3.9223 | 31900 | 0.0019 | - | | 3.9284 | 31950 | 0.001 | - | | 3.9346 | 32000 | 0.0024 | - | | 
3.9407 | 32050 | 0.0016 | - | | 3.9469 | 32100 | 0.0015 | - | | 3.9530 | 32150 | 0.0011 | - | | 3.9592 | 32200 | 0.0016 | - | | 3.9653 | 32250 | 0.0012 | - | | 3.9715 | 32300 | 0.001 | - | | 3.9776 | 32350 | 0.0011 | - | | 3.9838 | 32400 | 0.0016 | - | | 3.9899 | 32450 | 0.0026 | - | | 3.9961 | 32500 | 0.0031 | - | | 4.0022 | 32550 | 0.0015 | - | | 4.0084 | 32600 | 0.0024 | - | | 4.0145 | 32650 | 0.0021 | - | | 4.0207 | 32700 | 0.0016 | - | | 4.0268 | 32750 | 0.0023 | - | | 4.0330 | 32800 | 0.0013 | - | | 4.0391 | 32850 | 0.0019 | - | | 4.0452 | 32900 | 0.0008 | - | | 4.0514 | 32950 | 0.0019 | - | | 4.0575 | 33000 | 0.0008 | - | | 4.0637 | 33050 | 0.0012 | - | | 4.0698 | 33100 | 0.001 | - | | 4.0760 | 33150 | 0.0015 | - | | 4.0821 | 33200 | 0.0005 | - | | 4.0883 | 33250 | 0.0012 | - | | 4.0944 | 33300 | 0.0012 | - | | 4.1006 | 33350 | 0.0005 | - | | 4.1067 | 33400 | 0.0006 | - | | 4.1129 | 33450 | 0.0007 | - | | 4.1190 | 33500 | 0.0011 | - | | 4.1252 | 33550 | 0.0011 | - | | 4.1313 | 33600 | 0.0015 | - | | 4.1375 | 33650 | 0.0011 | - | | 4.1436 | 33700 | 0.0012 | - | | 4.1498 | 33750 | 0.0007 | - | | 4.1559 | 33800 | 0.0006 | - | | 4.1621 | 33850 | 0.0007 | - | | 4.1682 | 33900 | 0.0007 | - | | 4.1744 | 33950 | 0.0005 | - | | 4.1805 | 34000 | 0.0006 | - | | 4.1866 | 34050 | 0.001 | - | | 4.1928 | 34100 | 0.0009 | - | | 4.1989 | 34150 | 0.0004 | - | | 4.2051 | 34200 | 0.0003 | - | | 4.2112 | 34250 | 0.0008 | - | | 4.2174 | 34300 | 0.0009 | - | | 4.2235 | 34350 | 0.0002 | - | | 4.2297 | 34400 | 0.0014 | - | | 4.2358 | 34450 | 0.0002 | - | | 4.2420 | 34500 | 0.0003 | - | | 4.2481 | 34550 | 0.0005 | - | | 4.2543 | 34600 | 0.0025 | - | | 4.2604 | 34650 | 0.0005 | - | | 4.2666 | 34700 | 0.0004 | - | | 4.2727 | 34750 | 0.0009 | - | | 4.2789 | 34800 | 0.0005 | - | | 4.2850 | 34850 | 0.0005 | - | | 4.2912 | 34900 | 0.0005 | - | | 4.2973 | 34950 | 0.0008 | - | | 4.3035 | 35000 | 0.0012 | - | | 4.3096 | 35050 | 0.0009 | - | | 4.3158 | 35100 | 0.0004 | - | | 4.3219 | 35150 | 
0.0016 | - | | 4.3280 | 35200 | 0.0008 | - | | 4.3342 | 35250 | 0.0009 | - | | 4.3403 | 35300 | 0.0011 | - | | 4.3465 | 35350 | 0.0005 | - | | 4.3526 | 35400 | 0.0004 | - | | 4.3588 | 35450 | 0.004 | - | | 4.3649 | 35500 | 0.0023 | - | | 4.3711 | 35550 | 0.002 | - | | 4.3772 | 35600 | 0.0018 | - | | 4.3834 | 35650 | 0.0022 | - | | 4.3895 | 35700 | 0.0027 | - | | 4.3957 | 35750 | 0.0022 | - | | 4.4018 | 35800 | 0.0027 | - | | 4.4080 | 35850 | 0.003 | - | | 4.4141 | 35900 | 0.0017 | - | | 4.4203 | 35950 | 0.0009 | - | | 4.4264 | 36000 | 0.0027 | - | | 4.4326 | 36050 | 0.0018 | - | | 4.4387 | 36100 | 0.0016 | - | | 4.4449 | 36150 | 0.0013 | - | | 4.4510 | 36200 | 0.0019 | - | | 4.4571 | 36250 | 0.0011 | - | | 4.4633 | 36300 | 0.0015 | - | | 4.4694 | 36350 | 0.0009 | - | | 4.4756 | 36400 | 0.0007 | - | | 4.4817 | 36450 | 0.0011 | - | | 4.4879 | 36500 | 0.0017 | - | | 4.4940 | 36550 | 0.001 | - | | 4.5002 | 36600 | 0.0015 | - | | 4.5063 | 36650 | 0.0015 | - | | 4.5125 | 36700 | 0.0009 | - | | 4.5186 | 36750 | 0.0005 | - | | 4.5248 | 36800 | 0.0021 | - | | 4.5309 | 36850 | 0.0008 | - | | 4.5371 | 36900 | 0.0008 | - | | 4.5432 | 36950 | 0.001 | - | | 4.5494 | 37000 | 0.0006 | - | | 4.5555 | 37050 | 0.0009 | - | | 4.5617 | 37100 | 0.0005 | - | | 4.5678 | 37150 | 0.0004 | - | | 4.5740 | 37200 | 0.001 | - | | 4.5801 | 37250 | 0.0013 | - | | 4.5863 | 37300 | 0.0016 | - | | 4.5924 | 37350 | 0.0017 | - | | 4.5985 | 37400 | 0.0008 | - | | 4.6047 | 37450 | 0.0005 | - | | 4.6108 | 37500 | 0.0009 | - | | 4.6170 | 37550 | 0.0018 | - | | 4.6231 | 37600 | 0.0009 | - | | 4.6293 | 37650 | 0.0006 | - | | 4.6354 | 37700 | 0.0006 | - | | 4.6416 | 37750 | 0.0012 | - | | 4.6477 | 37800 | 0.0003 | - | | 4.6539 | 37850 | 0.0003 | - | | 4.6600 | 37900 | 0.0006 | - | | 4.6662 | 37950 | 0.0007 | - | | 4.6723 | 38000 | 0.0006 | - | | 4.6785 | 38050 | 0.0011 | - | | 4.6846 | 38100 | 0.0014 | - | | 4.6908 | 38150 | 0.0015 | - | | 4.6969 | 38200 | 0.0012 | - | | 4.7031 | 38250 | 0.0021 | - | | 4.7092 
| 38300 | 0.0016 | - | | 4.7154 | 38350 | 0.0016 | - | | 4.7215 | 38400 | 0.0011 | - | | 4.7277 | 38450 | 0.0009 | - | | 4.7338 | 38500 | 0.0014 | - | | 4.7399 | 38550 | 0.0008 | - | | 4.7461 | 38600 | 0.0013 | - | | 4.7522 | 38650 | 0.0007 | - | | 4.7584 | 38700 | 0.0015 | - | | 4.7645 | 38750 | 0.0009 | - | | 4.7707 | 38800 | 0.0006 | - | | 4.7768 | 38850 | 0.0009 | - | | 4.7830 | 38900 | 0.0009 | - | | 4.7891 | 38950 | 0.0009 | - | | 4.7953 | 39000 | 0.0015 | - | | 4.8014 | 39050 | 0.0015 | - | | 4.8076 | 39100 | 0.001 | - | | 4.8137 | 39150 | 0.0011 | - | | 4.8199 | 39200 | 0.0007 | - | | 4.8260 | 39250 | 0.0007 | - | | 4.8322 | 39300 | 0.0005 | - | | 4.8383 | 39350 | 0.0005 | - | | 4.8445 | 39400 | 0.0002 | - | | 4.8506 | 39450 | 0.0009 | - | | 4.8568 | 39500 | 0.0004 | - | | 4.8629 | 39550 | 0.0016 | - | | 4.8691 | 39600 | 0.0014 | - | | 4.8752 | 39650 | 0.0017 | - | | 4.8813 | 39700 | 0.0018 | - | | 4.8875 | 39750 | 0.0003 | - | | 4.8936 | 39800 | 0.0017 | - | | 4.8998 | 39850 | 0.0007 | - | | 4.9059 | 39900 | 0.0009 | - | | 4.9121 | 39950 | 0.0011 | - | | 4.9182 | 40000 | 0.0014 | - | | 4.9244 | 40050 | 0.0019 | - | | 4.9305 | 40100 | 0.001 | - | | 4.9367 | 40150 | 0.0006 | - | | 4.9428 | 40200 | 0.0009 | - | | 4.9490 | 40250 | 0.0011 | - | | 4.9551 | 40300 | 0.0017 | - | | 4.9613 | 40350 | 0.0007 | - | | 4.9674 | 40400 | 0.0008 | - | | 4.9736 | 40450 | 0.0008 | - | | 4.9797 | 40500 | 0.0013 | - | | 4.9859 | 40550 | 0.0014 | - | | 4.9920 | 40600 | 0.0029 | - | | 4.9982 | 40650 | 0.0018 | - | | 5.0043 | 40700 | 0.0017 | - | | 5.0105 | 40750 | 0.0013 | - | | 5.0166 | 40800 | 0.0015 | - | | 5.0227 | 40850 | 0.0011 | - | | 5.0289 | 40900 | 0.001 | - | | 5.0350 | 40950 | 0.0012 | - | | 5.0412 | 41000 | 0.001 | - | | 5.0473 | 41050 | 0.001 | - | | 5.0535 | 41100 | 0.0007 | - | | 5.0596 | 41150 | 0.0009 | - | | 5.0658 | 41200 | 0.0007 | - | | 5.0719 | 41250 | 0.0015 | - | | 5.0781 | 41300 | 0.0004 | - | | 5.0842 | 41350 | 0.0011 | - | | 5.0904 | 41400 | 0.0005 | - 
| | 5.0965 | 41450 | 0.0005 | - | | 5.1027 | 41500 | 0.0008 | - | | 5.1088 | 41550 | 0.0013 | - | | 5.1150 | 41600 | 0.0009 | - | | 5.1211 | 41650 | 0.0003 | - | | 5.1273 | 41700 | 0.0007 | - | | 5.1334 | 41750 | 0.0003 | - | | 5.1396 | 41800 | 0.001 | - | | 5.1457 | 41850 | 0.0009 | - | | 5.1519 | 41900 | 0.0012 | - | | 5.1580 | 41950 | 0.0019 | - | | 5.1641 | 42000 | 0.0014 | - | | 5.1703 | 42050 | 0.0005 | - | | 5.1764 | 42100 | 0.0005 | - | | 5.1826 | 42150 | 0.0008 | - | | 5.1887 | 42200 | 0.0009 | - | | 5.1949 | 42250 | 0.0004 | - | | 5.2010 | 42300 | 0.0005 | - | | 5.2072 | 42350 | 0.0002 | - | | 5.2133 | 42400 | 0.001 | - | | 5.2195 | 42450 | 0.0011 | - | | 5.2256 | 42500 | 0.0009 | - | | 5.2318 | 42550 | 0.0005 | - | | 5.2379 | 42600 | 0.001 | - | | 5.2441 | 42650 | 0.0007 | - | | 5.2502 | 42700 | 0.0002 | - | | 5.2564 | 42750 | 0.0005 | - | | 5.2625 | 42800 | 0.0004 | - | | 5.2687 | 42850 | 0.0005 | - | | 5.2748 | 42900 | 0.0006 | - | | 5.2810 | 42950 | 0.0002 | - | | 5.2871 | 43000 | 0.0 | - | | 5.2932 | 43050 | 0.0009 | - | | 5.2994 | 43100 | 0.0003 | - | | 5.3055 | 43150 | 0.0002 | - | | 5.3117 | 43200 | 0.0014 | - | | 5.3178 | 43250 | 0.0009 | - | | 5.3240 | 43300 | 0.0017 | - | | 5.3301 | 43350 | 0.001 | - | | 5.3363 | 43400 | 0.0011 | - | | 5.3424 | 43450 | 0.0005 | - | | 5.3486 | 43500 | 0.0012 | - | | 5.3547 | 43550 | 0.0005 | - | | 5.3609 | 43600 | 0.0007 | - | | 5.3670 | 43650 | 0.0015 | - | | 5.3732 | 43700 | 0.0012 | - | | 5.3793 | 43750 | 0.0017 | - | | 5.3855 | 43800 | 0.0003 | - | | 5.3916 | 43850 | 0.0005 | - | | 5.3978 | 43900 | 0.0007 | - | | 5.4039 | 43950 | 0.0011 | - | | 5.4101 | 44000 | 0.0019 | - | | 5.4162 | 44050 | 0.001 | - | | 5.4224 | 44100 | 0.0012 | - | | 5.4285 | 44150 | 0.0005 | - | | 5.4346 | 44200 | 0.0004 | - | | 5.4408 | 44250 | 0.0005 | - | | 5.4469 | 44300 | 0.0003 | - | | 5.4531 | 44350 | 0.0007 | - | | 5.4592 | 44400 | 0.0008 | - | | 5.4654 | 44450 | 0.0004 | - | | 5.4715 | 44500 | 0.0005 | - | | 5.4777 | 44550 | 
0.0011 | - | | 5.4838 | 44600 | 0.0009 | - | | 5.4900 | 44650 | 0.0007 | - | | 5.4961 | 44700 | 0.0 | - | | 5.5023 | 44750 | 0.0007 | - | | 5.5084 | 44800 | 0.0004 | - | | 5.5146 | 44850 | 0.0002 | - | | 5.5207 | 44900 | 0.0 | - | | 5.5269 | 44950 | 0.0 | - | | 5.5330 | 45000 | 0.0003 | - | | 5.5392 | 45050 | 0.0006 | - | | 5.5453 | 45100 | 0.0003 | - | | 5.5515 | 45150 | 0.0003 | - | | 5.5576 | 45200 | 0.0012 | - | | 5.5638 | 45250 | 0.0006 | - | | 5.5699 | 45300 | 0.0002 | - | | 5.5760 | 45350 | 0.0006 | - | | 5.5822 | 45400 | 0.0012 | - | | 5.5883 | 45450 | 0.0003 | - | | 5.5945 | 45500 | 0.0008 | - | | 5.6006 | 45550 | 0.0009 | - | | 5.6068 | 45600 | 0.0003 | - | | 5.6129 | 45650 | 0.0014 | - | | 5.6191 | 45700 | 0.0019 | - | | 5.6252 | 45750 | 0.0011 | - | | 5.6314 | 45800 | 0.0011 | - | | 5.6375 | 45850 | 0.0002 | - | | 5.6437 | 45900 | 0.0009 | - | | 5.6498 | 45950 | 0.0012 | - | | 5.6560 | 46000 | 0.0016 | - | | 5.6621 | 46050 | 0.0011 | - | | 5.6683 | 46100 | 0.0019 | - | | 5.6744 | 46150 | 0.0005 | - | | 5.6806 | 46200 | 0.0007 | - | | 5.6867 | 46250 | 0.0004 | - | | 5.6929 | 46300 | 0.0004 | - | | 5.6990 | 46350 | 0.0013 | - | | 5.7052 | 46400 | 0.0007 | - | | 5.7113 | 46450 | 0.0007 | - | | 5.7174 | 46500 | 0.0007 | - | | 5.7236 | 46550 | 0.0007 | - | | 5.7297 | 46600 | 0.0001 | - | | 5.7359 | 46650 | 0.0016 | - | | 5.7420 | 46700 | 0.0023 | - | | 5.7482 | 46750 | 0.0014 | - | | 5.7543 | 46800 | 0.0014 | - | | 5.7605 | 46850 | 0.0003 | - | | 5.7666 | 46900 | 0.0013 | - | | 5.7728 | 46950 | 0.0008 | - | | 5.7789 | 47000 | 0.0004 | - | | 5.7851 | 47050 | 0.0025 | - | | 5.7912 | 47100 | 0.001 | - | | 5.7974 | 47150 | 0.0011 | - | | 5.8035 | 47200 | 0.0014 | - | | 5.8097 | 47250 | 0.0009 | - | | 5.8158 | 47300 | 0.0002 | - | | 5.8220 | 47350 | 0.0011 | - | | 5.8281 | 47400 | 0.0022 | - | | 5.8343 | 47450 | 0.0009 | - | | 5.8404 | 47500 | 0.0005 | - | | 5.8466 | 47550 | 0.0004 | - | | 5.8527 | 47600 | 0.0009 | - | | 5.8588 | 47650 | 0.0007 | - | | 5.8650 | 
47700 | 0.0003 | - | | 5.8711 | 47750 | 0.0006 | - | | 5.8773 | 47800 | 0.0002 | - | | 5.8834 | 47850 | 0.0012 | - | | 5.8896 | 47900 | 0.0009 | - | | 5.8957 | 47950 | 0.0007 | - | | 5.9019 | 48000 | 0.0004 | - | | 5.9080 | 48050 | 0.0015 | - | | 5.9142 | 48100 | 0.0006 | - | | 5.9203 | 48150 | 0.0003 | - | | 5.9265 | 48200 | 0.0 | - | | 5.9326 | 48250 | 0.0007 | - | | 5.9388 | 48300 | 0.0005 | - | | 5.9449 | 48350 | 0.0009 | - | | 5.9511 | 48400 | 0.0006 | - | | 5.9572 | 48450 | 0.0004 | - | | 5.9634 | 48500 | 0.0008 | - | | 5.9695 | 48550 | 0.0003 | - | | 5.9757 | 48600 | 0.0011 | - | | 5.9818 | 48650 | 0.0004 | - | | 5.9880 | 48700 | 0.0014 | - | | 5.9941 | 48750 | 0.0007 | - | | 6.0002 | 48800 | 0.0003 | - | | 6.0064 | 48850 | 0.001 | - | | 6.0125 | 48900 | 0.0008 | - | | 6.0187 | 48950 | 0.0017 | - | | 6.0248 | 49000 | 0.001 | - | | 6.0310 | 49050 | 0.0008 | - | | 6.0371 | 49100 | 0.0008 | - | | 6.0433 | 49150 | 0.0013 | - | | 6.0494 | 49200 | 0.0006 | - | | 6.0556 | 49250 | 0.0005 | - | | 6.0617 | 49300 | 0.001 | - | | 6.0679 | 49350 | 0.0005 | - | | 6.0740 | 49400 | 0.0005 | - | | 6.0802 | 49450 | 0.0003 | - | | 6.0863 | 49500 | 0.0003 | - | | 6.0925 | 49550 | 0.0005 | - | | 6.0986 | 49600 | 0.0003 | - | | 6.1048 | 49650 | 0.0008 | - | | 6.1109 | 49700 | 0.0012 | - | | 6.1171 | 49750 | 0.0013 | - | | 6.1232 | 49800 | 0.0008 | - | | 6.1293 | 49850 | 0.001 | - | | 6.1355 | 49900 | 0.0007 | - | | 6.1416 | 49950 | 0.0012 | - | | 6.1478 | 50000 | 0.0007 | - | | 6.1539 | 50050 | 0.0005 | - | | 6.1601 | 50100 | 0.0008 | - | | 6.1662 | 50150 | 0.0008 | - | | 6.1724 | 50200 | 0.0011 | - | | 6.1785 | 50250 | 0.0011 | - | | 6.1847 | 50300 | 0.0005 | - | | 6.1908 | 50350 | 0.0011 | - | | 6.1970 | 50400 | 0.0011 | - | | 6.2031 | 50450 | 0.0009 | - | | 6.2093 | 50500 | 0.0012 | - | | 6.2154 | 50550 | 0.0011 | - | | 6.2216 | 50600 | 0.0005 | - | | 6.2277 | 50650 | 0.0008 | - | | 6.2339 | 50700 | 0.0006 | - | | 6.2400 | 50750 | 0.0007 | - | | 6.2462 | 50800 | 0.0006 | - | | 
6.2523 | 50850 | 0.0006 | - | | 6.2585 | 50900 | 0.0008 | - | | 6.2646 | 50950 | 0.0013 | - | | 6.2707 | 51000 | 0.0005 | - | | 6.2769 | 51050 | 0.0004 | - | | 6.2830 | 51100 | 0.0009 | - | | 6.2892 | 51150 | 0.0012 | - | | 6.2953 | 51200 | 0.0006 | - | | 6.3015 | 51250 | 0.0002 | - | | 6.3076 | 51300 | 0.0008 | - | | 6.3138 | 51350 | 0.0011 | - | | 6.3199 | 51400 | 0.0003 | - | | 6.3261 | 51450 | 0.0008 | - | | 6.3322 | 51500 | 0.0013 | - | | 6.3384 | 51550 | 0.0001 | - | | 6.3445 | 51600 | 0.0002 | - | | 6.3507 | 51650 | 0.0008 | - | | 6.3568 | 51700 | 0.0006 | - | | 6.3630 | 51750 | 0.0006 | - | | 6.3691 | 51800 | 0.0006 | - | | 6.3753 | 51850 | 0.0009 | - | | 6.3814 | 51900 | 0.0013 | - | | 6.3876 | 51950 | 0.001 | - | | 6.3937 | 52000 | 0.0005 | - | | 6.3999 | 52050 | 0.0006 | - | | 6.4060 | 52100 | 0.0006 | - | | 6.4121 | 52150 | 0.0005 | - | | 6.4183 | 52200 | 0.0002 | - | | 6.4244 | 52250 | 0.0004 | - | | 6.4306 | 52300 | 0.0005 | - | | 6.4367 | 52350 | 0.0006 | - | | 6.4429 | 52400 | 0.0003 | - | | 6.4490 | 52450 | 0.0009 | - | | 6.4552 | 52500 | 0.0003 | - | | 6.4613 | 52550 | 0.0009 | - | | 6.4675 | 52600 | 0.0003 | - | | 6.4736 | 52650 | 0.0007 | - | | 6.4798 | 52700 | 0.0011 | - | | 6.4859 | 52750 | 0.0007 | - | | 6.4921 | 52800 | 0.0 | - | | 6.4982 | 52850 | 0.0001 | - | | 6.5044 | 52900 | 0.0004 | - | | 6.5105 | 52950 | 0.0011 | - | | 6.5167 | 53000 | 0.001 | - | | 6.5228 | 53050 | 0.0008 | - | | 6.5290 | 53100 | 0.0005 | - | | 6.5351 | 53150 | 0.0005 | - | | 6.5413 | 53200 | 0.0007 | - | | 6.5474 | 53250 | 0.0004 | - | | 6.5535 | 53300 | 0.0005 | - | | 6.5597 | 53350 | 0.002 | - | | 6.5658 | 53400 | 0.0016 | - | | 6.5720 | 53450 | 0.0025 | - | | 6.5781 | 53500 | 0.0019 | - | | 6.5843 | 53550 | 0.0011 | - | | 6.5904 | 53600 | 0.0014 | - | | 6.5966 | 53650 | 0.0016 | - | | 6.6027 | 53700 | 0.0021 | - | | 6.6089 | 53750 | 0.0031 | - | | 6.6150 | 53800 | 0.0013 | - | | 6.6212 | 53850 | 0.0005 | - | | 6.6273 | 53900 | 0.0009 | - | | 6.6335 | 53950 | 
0.0018 | - | | 6.6396 | 54000 | 0.0009 | - | | 6.6458 | 54050 | 0.0006 | - | | 6.6519 | 54100 | 0.0011 | - | | 6.6581 | 54150 | 0.0014 | - | | 6.6642 | 54200 | 0.001 | - | | 6.6704 | 54250 | 0.0017 | - | | 6.6765 | 54300 | 0.0007 | - | | 6.6827 | 54350 | 0.0008 | - | | 6.6888 | 54400 | 0.0013 | - | | 6.6949 | 54450 | 0.0008 | - | | 6.7011 | 54500 | 0.0008 | - | | 6.7072 | 54550 | 0.0005 | - | | 6.7134 | 54600 | 0.0003 | - | | 6.7195 | 54650 | 0.0002 | - | | 6.7257 | 54700 | 0.0011 | - | | 6.7318 | 54750 | 0.0013 | - | | 6.7380 | 54800 | 0.0012 | - | | 6.7441 | 54850 | 0.0003 | - | | 6.7503 | 54900 | 0.0001 | - | | 6.7564 | 54950 | 0.0003 | - | | 6.7626 | 55000 | 0.0006 | - | | 6.7687 | 55050 | 0.0003 | - | | 6.7749 | 55100 | 0.0005 | - | | 6.7810 | 55150 | 0.0003 | - | | 6.7872 | 55200 | 0.0006 | - | | 6.7933 | 55250 | 0.0003 | - | | 6.7995 | 55300 | 0.001 | - | | 6.8056 | 55350 | 0.0005 | - | | 6.8118 | 55400 | 0.0006 | - | | 6.8179 | 55450 | 0.0006 | - | | 6.8241 | 55500 | 0.0005 | - | | 6.8302 | 55550 | 0.0004 | - | | 6.8363 | 55600 | 0.0002 | - | | 6.8425 | 55650 | 0.0001 | - | | 6.8486 | 55700 | 0.0002 | - | | 6.8548 | 55750 | 0.0005 | - | | 6.8609 | 55800 | 0.0007 | - | | 6.8671 | 55850 | 0.0002 | - | | 6.8732 | 55900 | 0.0007 | - | | 6.8794 | 55950 | 0.0007 | - | | 6.8855 | 56000 | 0.0003 | - | | 6.8917 | 56050 | 0.0007 | - | | 6.8978 | 56100 | 0.0 | - | | 6.9040 | 56150 | 0.0005 | - | | 6.9101 | 56200 | 0.0005 | - | | 6.9163 | 56250 | 0.0002 | - | | 6.9224 | 56300 | 0.0005 | - | | 6.9286 | 56350 | 0.0002 | - | | 6.9347 | 56400 | 0.0002 | - | | 6.9409 | 56450 | 0.0003 | - | | 6.9470 | 56500 | 0.0002 | - | | 6.9532 | 56550 | 0.0 | - | | 6.9593 | 56600 | 0.0002 | - | | 6.9654 | 56650 | 0.0002 | - | | 6.9716 | 56700 | 0.0006 | - | | 6.9777 | 56750 | 0.0002 | - | | 6.9839 | 56800 | 0.0 | - | | 6.9900 | 56850 | 0.0003 | - | | 6.9962 | 56900 | 0.0005 | - | | 7.0023 | 56950 | 0.0003 | - | | 7.0085 | 57000 | 0.0003 | - | | 7.0146 | 57050 | 0.0002 | - | | 7.0208 | 
57100 | 0.0005 | - | | 7.0269 | 57150 | 0.0006 | - | | 7.0331 | 57200 | 0.0003 | - | | 7.0392 | 57250 | 0.0002 | - | | 7.0454 | 57300 | 0.0 | - | | 7.0515 | 57350 | 0.0 | - | | 7.0577 | 57400 | 0.0005 | - | | 7.0638 | 57450 | 0.0003 | - | | 7.0700 | 57500 | 0.0 | - | | 7.0761 | 57550 | 0.0005 | - | | 7.0823 | 57600 | 0.0007 | - | | 7.0884 | 57650 | 0.0009 | - | | 7.0946 | 57700 | 0.0008 | - | | 7.1007 | 57750 | 0.0014 | - | | 7.1068 | 57800 | 0.0005 | - | | 7.1130 | 57850 | 0.0005 | - | | 7.1191 | 57900 | 0.0002 | - | | 7.1253 | 57950 | 0.0 | - | | 7.1314 | 58000 | 0.0004 | - | | 7.1376 | 58050 | 0.0005 | - | | 7.1437 | 58100 | 0.001 | - | | 7.1499 | 58150 | 0.0006 | - | | 7.1560 | 58200 | 0.001 | - | | 7.1622 | 58250 | 0.0002 | - | | 7.1683 | 58300 | 0.0015 | - | | 7.1745 | 58350 | 0.0005 | - | | 7.1806 | 58400 | 0.0005 | - | | 7.1868 | 58450 | 0.0002 | - | | 7.1929 | 58500 | 0.001 | - | | 7.1991 | 58550 | 0.0007 | - | | 7.2052 | 58600 | 0.0012 | - | | 7.2114 | 58650 | 0.0005 | - | | 7.2175 | 58700 | 0.0004 | - | | 7.2237 | 58750 | 0.0006 | - | | 7.2298 | 58800 | 0.0001 | - | | 7.2360 | 58850 | 0.0003 | - | | 7.2421 | 58900 | 0.0013 | - | | 7.2482 | 58950 | 0.0017 | - | | 7.2544 | 59000 | 0.0017 | - | | 7.2605 | 59050 | 0.0006 | - | | 7.2667 | 59100 | 0.0016 | - | | 7.2728 | 59150 | 0.0049 | - | | 7.2790 | 59200 | 0.0026 | - | | 7.2851 | 59250 | 0.0013 | - | | 7.2913 | 59300 | 0.001 | - | | 7.2974 | 59350 | 0.0008 | - | | 7.3036 | 59400 | 0.0006 | - | | 7.3097 | 59450 | 0.0007 | - | | 7.3159 | 59500 | 0.001 | - | | 7.3220 | 59550 | 0.0014 | - | | 7.3282 | 59600 | 0.0015 | - | | 7.3343 | 59650 | 0.002 | - | | 7.3405 | 59700 | 0.0011 | - | | 7.3466 | 59750 | 0.0006 | - | | 7.3528 | 59800 | 0.0013 | - | | 7.3589 | 59850 | 0.0006 | - | | 7.3651 | 59900 | 0.001 | - | | 7.3712 | 59950 | 0.0013 | - | | 7.3774 | 60000 | 0.0012 | - | | 7.3835 | 60050 | 0.0008 | - | | 7.3896 | 60100 | 0.0005 | - | | 7.3958 | 60150 | 0.0003 | - | | 7.4019 | 60200 | 0.0013 | - | | 7.4081 | 
60250 | 0.0014 | - | | 7.4142 | 60300 | 0.0002 | - | | 7.4204 | 60350 | 0.0004 | - | | 7.4265 | 60400 | 0.0004 | - | | 7.4327 | 60450 | 0.0004 | - | | 7.4388 | 60500 | 0.0001 | - | | 7.4450 | 60550 | 0.0002 | - | | 7.4511 | 60600 | 0.0003 | - | | 7.4573 | 60650 | 0.0008 | - | | 7.4634 | 60700 | 0.0004 | - | | 7.4696 | 60750 | 0.0003 | - | | 7.4757 | 60800 | 0.001 | - | | 7.4819 | 60850 | 0.0002 | - | | 7.4880 | 60900 | 0.0 | - | | 7.4942 | 60950 | 0.0004 | - | | 7.5003 | 61000 | 0.0002 | - | | 7.5065 | 61050 | 0.0003 | - | | 7.5126 | 61100 | 0.0005 | - | | 7.5188 | 61150 | 0.0007 | - | | 7.5249 | 61200 | 0.0023 | - | | 7.5310 | 61250 | 0.001 | - | | 7.5372 | 61300 | 0.0013 | - | | 7.5433 | 61350 | 0.0012 | - | | 7.5495 | 61400 | 0.0007 | - | | 7.5556 | 61450 | 0.0005 | - | | 7.5618 | 61500 | 0.0005 | - | | 7.5679 | 61550 | 0.0004 | - | | 7.5741 | 61600 | 0.0002 | - | | 7.5802 | 61650 | 0.0004 | - | | 7.5864 | 61700 | 0.0008 | - | | 7.5925 | 61750 | 0.0007 | - | | 7.5987 | 61800 | 0.0007 | - | | 7.6048 | 61850 | 0.0009 | - | | 7.6110 | 61900 | 0.0003 | - | | 7.6171 | 61950 | 0.0002 | - | | 7.6233 | 62000 | 0.0001 | - | | 7.6294 | 62050 | 0.0003 | - | | 7.6356 | 62100 | 0.0009 | - | | 7.6417 | 62150 | 0.001 | - | | 7.6479 | 62200 | 0.0003 | - | | 7.6540 | 62250 | 0.0005 | - | | 7.6602 | 62300 | 0.0007 | - | | 7.6663 | 62350 | 0.0005 | - | | 7.6724 | 62400 | 0.0007 | - | | 7.6786 | 62450 | 0.0 | - | | 7.6847 | 62500 | 0.0002 | - | | 7.6909 | 62550 | 0.0 | - | | 7.6970 | 62600 | 0.001 | - | | 7.7032 | 62650 | 0.0011 | - | | 7.7093 | 62700 | 0.0007 | - | | 7.7155 | 62750 | 0.0003 | - | | 7.7216 | 62800 | 0.0003 | - | | 7.7278 | 62850 | 0.0009 | - | | 7.7339 | 62900 | 0.0009 | - | | 7.7401 | 62950 | 0.0012 | - | | 7.7462 | 63000 | 0.0057 | - | | 7.7524 | 63050 | 0.0039 | - | | 7.7585 | 63100 | 0.0024 | - | | 7.7647 | 63150 | 0.0021 | - | | 7.7708 | 63200 | 0.0014 | - | | 7.7770 | 63250 | 0.0014 | - | | 7.7831 | 63300 | 0.0012 | - | | 7.7893 | 63350 | 0.0011 | - | | 
7.7954 | 63400 | 0.0009 | - | | 7.8015 | 63450 | 0.0009 | - | | 7.8077 | 63500 | 0.0018 | - | | 7.8138 | 63550 | 0.0005 | - | | 7.8200 | 63600 | 0.0004 | - | | 7.8261 | 63650 | 0.0002 | - | | 7.8323 | 63700 | 0.0004 | - | | 7.8384 | 63750 | 0.0012 | - | | 7.8446 | 63800 | 0.0019 | - | | 7.8507 | 63850 | 0.0004 | - | | 7.8569 | 63900 | 0.0003 | - | | 7.8630 | 63950 | 0.0003 | - | | 7.8692 | 64000 | 0.0003 | - | | 7.8753 | 64050 | 0.0003 | - | | 7.8815 | 64100 | 0.0 | - | | 7.8876 | 64150 | 0.0 | - | | 7.8938 | 64200 | 0.0 | - | | 7.8999 | 64250 | 0.0003 | - | | 7.9061 | 64300 | 0.0 | - | | 7.9122 | 64350 | 0.0001 | - | | 7.9184 | 64400 | 0.0003 | - | | 7.9245 | 64450 | 0.0004 | - | | 7.9307 | 64500 | 0.0001 | - | | 7.9368 | 64550 | 0.0003 | - | | 7.9429 | 64600 | 0.0002 | - | | 7.9491 | 64650 | 0.0005 | - | | 7.9552 | 64700 | 0.0002 | - | | 7.9614 | 64750 | 0.0 | - | | 7.9675 | 64800 | 0.0 | - | | 7.9737 | 64850 | 0.0002 | - | | 7.9798 | 64900 | 0.0004 | - | | 7.9860 | 64950 | 0.0 | - | | 7.9921 | 65000 | 0.0003 | - | | 7.9983 | 65050 | 0.0 | - | | 8.0044 | 65100 | 0.0001 | - | | 8.0106 | 65150 | 0.0 | - | | 8.0167 | 65200 | 0.0002 | - | | 8.0229 | 65250 | 0.0003 | - | | 8.0290 | 65300 | 0.0001 | - | | 8.0352 | 65350 | 0.0005 | - | | 8.0413 | 65400 | 0.0 | - | | 8.0475 | 65450 | 0.0001 | - | | 8.0536 | 65500 | 0.0004 | - | | 8.0598 | 65550 | 0.0003 | - | | 8.0659 | 65600 | 0.0002 | - | | 8.0721 | 65650 | 0.0006 | - | | 8.0782 | 65700 | 0.0 | - | | 8.0843 | 65750 | 0.0 | - | | 8.0905 | 65800 | 0.0004 | - | | 8.0966 | 65850 | 0.0003 | - | | 8.1028 | 65900 | 0.0007 | - | | 8.1089 | 65950 | 0.0003 | - | | 8.1151 | 66000 | 0.0008 | - | | 8.1212 | 66050 | 0.0009 | - | | 8.1274 | 66100 | 0.0002 | - | | 8.1335 | 66150 | 0.0003 | - | | 8.1397 | 66200 | 0.0002 | - | | 8.1458 | 66250 | 0.0001 | - | | 8.1520 | 66300 | 0.0003 | - | | 8.1581 | 66350 | 0.0002 | - | | 8.1643 | 66400 | 0.0003 | - | | 8.1704 | 66450 | 0.0 | - | | 8.1766 | 66500 | 0.0003 | - | | 8.1827 | 66550 | 
0.0005 | - |

*(Step-level training-loss log truncated for readability: steps 66600–125700, epochs ≈ 8.19–15.46. Training loss remained near zero throughout this range, never exceeding 0.0031, and no validation loss was recorded at these steps.)*

| 15.4617
| 125750 | 0.0002 | - | | 15.4678 | 125800 | 0.0003 | - | | 15.4740 | 125850 | 0.0 | - | | 15.4801 | 125900 | 0.0 | - | | 15.4863 | 125950 | 0.0002 | - | | 15.4924 | 126000 | 0.0001 | - | | 15.4986 | 126050 | 0.0 | - | | 15.5047 | 126100 | 0.0002 | - | | 15.5109 | 126150 | 0.0 | - | | 15.5170 | 126200 | 0.0 | - | | 15.5232 | 126250 | 0.0 | - | | 15.5293 | 126300 | 0.0 | - | | 15.5355 | 126350 | 0.0 | - | | 15.5416 | 126400 | 0.0 | - | | 15.5478 | 126450 | 0.0 | - | | 15.5539 | 126500 | 0.0 | - | | 15.5601 | 126550 | 0.0002 | - | | 15.5662 | 126600 | 0.0 | - | | 15.5724 | 126650 | 0.0 | - | | 15.5785 | 126700 | 0.0 | - | | 15.5847 | 126750 | 0.0 | - | | 15.5908 | 126800 | 0.0 | - | | 15.5970 | 126850 | 0.0 | - | | 15.6031 | 126900 | 0.0 | - | | 15.6092 | 126950 | 0.0 | - | | 15.6154 | 127000 | 0.0 | - | | 15.6215 | 127050 | 0.0 | - | | 15.6277 | 127100 | 0.0 | - | | 15.6338 | 127150 | 0.0 | - | | 15.6400 | 127200 | 0.0 | - | | 15.6461 | 127250 | 0.0 | - | | 15.6523 | 127300 | 0.0 | - | | 15.6584 | 127350 | 0.0 | - | | 15.6646 | 127400 | 0.0 | - | | 15.6707 | 127450 | 0.0 | - | | 15.6769 | 127500 | 0.0002 | - | | 15.6830 | 127550 | 0.0 | - | | 15.6892 | 127600 | 0.0 | - | | 15.6953 | 127650 | 0.0 | - | | 15.7015 | 127700 | 0.0 | - | | 15.7076 | 127750 | 0.0 | - | | 15.7138 | 127800 | 0.0 | - | | 15.7199 | 127850 | 0.0 | - | | 15.7261 | 127900 | 0.0 | - | | 15.7322 | 127950 | 0.0 | - | | 15.7383 | 128000 | 0.0003 | - | | 15.7445 | 128050 | 0.0 | - | | 15.7506 | 128100 | 0.0003 | - | | 15.7568 | 128150 | 0.0 | - | | 15.7629 | 128200 | 0.0 | - | | 15.7691 | 128250 | 0.0001 | - | | 15.7752 | 128300 | 0.0003 | - | | 15.7814 | 128350 | 0.0 | - | | 15.7875 | 128400 | 0.0 | - | | 15.7937 | 128450 | 0.0 | - | | 15.7998 | 128500 | 0.0 | - | | 15.8060 | 128550 | 0.0 | - | | 15.8121 | 128600 | 0.0 | - | | 15.8183 | 128650 | 0.0 | - | | 15.8244 | 128700 | 0.0002 | - | | 15.8306 | 128750 | 0.0 | - | | 15.8367 | 128800 | 0.0 | - | | 15.8429 | 128850 | 0.0 | - | | 15.8490 | 128900 | 
0.0 | - | | 15.8552 | 128950 | 0.0 | - | | 15.8613 | 129000 | 0.0 | - | | 15.8675 | 129050 | 0.0 | - | | 15.8736 | 129100 | 0.0 | - | | 15.8797 | 129150 | 0.0 | - | | 15.8859 | 129200 | 0.0 | - | | 15.8920 | 129250 | 0.0 | - | | 15.8982 | 129300 | 0.0001 | - | | 15.9043 | 129350 | 0.0 | - | | 15.9105 | 129400 | 0.0 | - | | 15.9166 | 129450 | 0.0 | - | | 15.9228 | 129500 | 0.0 | - | | 15.9289 | 129550 | 0.0 | - | | 15.9351 | 129600 | 0.0 | - | | 15.9412 | 129650 | 0.0 | - | | 15.9474 | 129700 | 0.0 | - | | 15.9535 | 129750 | 0.0 | - | | 15.9597 | 129800 | 0.0 | - | | 15.9658 | 129850 | 0.0 | - | | 15.9720 | 129900 | 0.0 | - | | 15.9781 | 129950 | 0.0 | - | | 15.9843 | 130000 | 0.0 | - | | 15.9904 | 130050 | 0.0 | - | | 15.9966 | 130100 | 0.0 | - | | 16.0027 | 130150 | 0.0 | - | | 16.0089 | 130200 | 0.0 | - | | 16.0150 | 130250 | 0.0 | - | | 16.0211 | 130300 | 0.0 | - | | 16.0273 | 130350 | 0.0 | - | | 16.0334 | 130400 | 0.0 | - | | 16.0396 | 130450 | 0.0 | - | | 16.0457 | 130500 | 0.0 | - | | 16.0519 | 130550 | 0.0 | - | | 16.0580 | 130600 | 0.0 | - | | 16.0642 | 130650 | 0.0 | - | | 16.0703 | 130700 | 0.0 | - | | 16.0765 | 130750 | 0.0003 | - | | 16.0826 | 130800 | 0.0002 | - | | 16.0888 | 130850 | 0.0002 | - | | 16.0949 | 130900 | 0.0 | - | | 16.1011 | 130950 | 0.0002 | - | | 16.1072 | 131000 | 0.0002 | - | | 16.1134 | 131050 | 0.0003 | - | | 16.1195 | 131100 | 0.0008 | - | | 16.1257 | 131150 | 0.0005 | - | | 16.1318 | 131200 | 0.0002 | - | | 16.1380 | 131250 | 0.0 | - | | 16.1441 | 131300 | 0.0006 | - | | 16.1503 | 131350 | 0.001 | - | | 16.1564 | 131400 | 0.0002 | - | | 16.1625 | 131450 | 0.0002 | - | | 16.1687 | 131500 | 0.0002 | - | | 16.1748 | 131550 | 0.0 | - | | 16.1810 | 131600 | 0.0002 | - | | 16.1871 | 131650 | 0.0002 | - | | 16.1933 | 131700 | 0.0005 | - | | 16.1994 | 131750 | 0.0002 | - | | 16.2056 | 131800 | 0.0002 | - | | 16.2117 | 131850 | 0.0 | - | | 16.2179 | 131900 | 0.0004 | - | | 16.2240 | 131950 | 0.0 | - | | 16.2302 | 132000 | 0.0 | - | | 
16.2363 | 132050 | 0.0 | - | | 16.2425 | 132100 | 0.0 | - | | 16.2486 | 132150 | 0.0 | - | | 16.2548 | 132200 | 0.0001 | - | | 16.2609 | 132250 | 0.0001 | - | | 16.2671 | 132300 | 0.0 | - | | 16.2732 | 132350 | 0.0 | - | | 16.2794 | 132400 | 0.0001 | - | | 16.2855 | 132450 | 0.0 | - | | 16.2917 | 132500 | 0.0001 | - | | 16.2978 | 132550 | 0.0 | - | | 16.3039 | 132600 | 0.0 | - | | 16.3101 | 132650 | 0.0 | - | | 16.3162 | 132700 | 0.0 | - | | 16.3224 | 132750 | 0.0 | - | | 16.3285 | 132800 | 0.0002 | - | | 16.3347 | 132850 | 0.0 | - | | 16.3408 | 132900 | 0.0 | - | | 16.3470 | 132950 | 0.0002 | - | | 16.3531 | 133000 | 0.0002 | - | | 16.3593 | 133050 | 0.0001 | - | | 16.3654 | 133100 | 0.0 | - | | 16.3716 | 133150 | 0.0 | - | | 16.3777 | 133200 | 0.0002 | - | | 16.3839 | 133250 | 0.0002 | - | | 16.3900 | 133300 | 0.0 | - | | 16.3962 | 133350 | 0.0 | - | | 16.4023 | 133400 | 0.0 | - | | 16.4085 | 133450 | 0.0002 | - | | 16.4146 | 133500 | 0.0 | - | | 16.4208 | 133550 | 0.0 | - | | 16.4269 | 133600 | 0.0001 | - | | 16.4331 | 133650 | 0.0002 | - | | 16.4392 | 133700 | 0.0 | - | | 16.4453 | 133750 | 0.0003 | - | | 16.4515 | 133800 | 0.0 | - | | 16.4576 | 133850 | 0.0 | - | | 16.4638 | 133900 | 0.0 | - | | 16.4699 | 133950 | 0.0 | - | | 16.4761 | 134000 | 0.0 | - | | 16.4822 | 134050 | 0.0 | - | | 16.4884 | 134100 | 0.0 | - | | 16.4945 | 134150 | 0.0002 | - | | 16.5007 | 134200 | 0.0003 | - | | 16.5068 | 134250 | 0.0 | - | | 16.5130 | 134300 | 0.0 | - | | 16.5191 | 134350 | 0.0 | - | | 16.5253 | 134400 | 0.0 | - | | 16.5314 | 134450 | 0.0 | - | | 16.5376 | 134500 | 0.0 | - | | 16.5437 | 134550 | 0.0 | - | | 16.5499 | 134600 | 0.0 | - | | 16.5560 | 134650 | 0.0 | - | | 16.5622 | 134700 | 0.0 | - | | 16.5683 | 134750 | 0.0 | - | | 16.5744 | 134800 | 0.0 | - | | 16.5806 | 134850 | 0.0 | - | | 16.5867 | 134900 | 0.0 | - | | 16.5929 | 134950 | 0.0 | - | | 16.5990 | 135000 | 0.0003 | - | | 16.6052 | 135050 | 0.0 | - | | 16.6113 | 135100 | 0.0 | - | | 16.6175 | 135150 | 0.0 | - 
| | 16.6236 | 135200 | 0.0002 | - | | 16.6298 | 135250 | 0.0001 | - | | 16.6359 | 135300 | 0.0002 | - | | 16.6421 | 135350 | 0.0 | - | | 16.6482 | 135400 | 0.0 | - | | 16.6544 | 135450 | 0.0002 | - | | 16.6605 | 135500 | 0.0 | - | | 16.6667 | 135550 | 0.0002 | - | | 16.6728 | 135600 | 0.0001 | - | | 16.6790 | 135650 | 0.0 | - | | 16.6851 | 135700 | 0.0 | - | | 16.6913 | 135750 | 0.0003 | - | | 16.6974 | 135800 | 0.0 | - | | 16.7036 | 135850 | 0.0 | - | | 16.7097 | 135900 | 0.0 | - | | 16.7158 | 135950 | 0.0 | - | | 16.7220 | 136000 | 0.0 | - | | 16.7281 | 136050 | 0.0 | - | | 16.7343 | 136100 | 0.0 | - | | 16.7404 | 136150 | 0.0 | - | | 16.7466 | 136200 | 0.0 | - | | 16.7527 | 136250 | 0.0 | - | | 16.7589 | 136300 | 0.0001 | - | | 16.7650 | 136350 | 0.0002 | - | | 16.7712 | 136400 | 0.0005 | - | | 16.7773 | 136450 | 0.0 | - | | 16.7835 | 136500 | 0.0002 | - | | 16.7896 | 136550 | 0.0 | - | | 16.7958 | 136600 | 0.0001 | - | | 16.8019 | 136650 | 0.0 | - | | 16.8081 | 136700 | 0.0 | - | | 16.8142 | 136750 | 0.0 | - | | 16.8204 | 136800 | 0.0 | - | | 16.8265 | 136850 | 0.0 | - | | 16.8327 | 136900 | 0.0 | - | | 16.8388 | 136950 | 0.0 | - | | 16.8450 | 137000 | 0.0 | - | | 16.8511 | 137050 | 0.0 | - | | 16.8572 | 137100 | 0.0001 | - | | 16.8634 | 137150 | 0.0001 | - | | 16.8695 | 137200 | 0.0003 | - | | 16.8757 | 137250 | 0.0 | - | | 16.8818 | 137300 | 0.0002 | - | | 16.8880 | 137350 | 0.0001 | - | | 16.8941 | 137400 | 0.0 | - | | 16.9003 | 137450 | 0.0 | - | | 16.9064 | 137500 | 0.0 | - | | 16.9126 | 137550 | 0.0002 | - | | 16.9187 | 137600 | 0.0 | - | | 16.9249 | 137650 | 0.0 | - | | 16.9310 | 137700 | 0.0 | - | | 16.9372 | 137750 | 0.0 | - | | 16.9433 | 137800 | 0.0 | - | | 16.9495 | 137850 | 0.0 | - | | 16.9556 | 137900 | 0.0 | - | | 16.9618 | 137950 | 0.0002 | - | | 16.9679 | 138000 | 0.0 | - | | 16.9741 | 138050 | 0.0 | - | | 16.9802 | 138100 | 0.0 | - | | 16.9864 | 138150 | 0.0 | - | | 16.9925 | 138200 | 0.0 | - | | 16.9986 | 138250 | 0.0 | - | | 17.0048 | 138300 
| 0.0 | - | | 17.0109 | 138350 | 0.0 | - | | 17.0171 | 138400 | 0.0 | - | | 17.0232 | 138450 | 0.0 | - | | 17.0294 | 138500 | 0.0 | - | | 17.0355 | 138550 | 0.0 | - | | 17.0417 | 138600 | 0.0 | - | | 17.0478 | 138650 | 0.0 | - | | 17.0540 | 138700 | 0.0003 | - | | 17.0601 | 138750 | 0.0003 | - | | 17.0663 | 138800 | 0.0002 | - | | 17.0724 | 138850 | 0.0 | - | | 17.0786 | 138900 | 0.0002 | - | | 17.0847 | 138950 | 0.0 | - | | 17.0909 | 139000 | 0.0 | - | | 17.0970 | 139050 | 0.0001 | - | | 17.1032 | 139100 | 0.0 | - | | 17.1093 | 139150 | 0.0 | - | | 17.1155 | 139200 | 0.0 | - | | 17.1216 | 139250 | 0.0001 | - | | 17.1278 | 139300 | 0.0 | - | | 17.1339 | 139350 | 0.0001 | - | | 17.1400 | 139400 | 0.0 | - | | 17.1462 | 139450 | 0.0 | - | | 17.1523 | 139500 | 0.0002 | - | | 17.1585 | 139550 | 0.0 | - | | 17.1646 | 139600 | 0.0002 | - | | 17.1708 | 139650 | 0.0 | - | | 17.1769 | 139700 | 0.0 | - | | 17.1831 | 139750 | 0.0 | - | | 17.1892 | 139800 | 0.0 | - | | 17.1954 | 139850 | 0.0002 | - | | 17.2015 | 139900 | 0.0 | - | | 17.2077 | 139950 | 0.0002 | - | | 17.2138 | 140000 | 0.0002 | - | | 17.2200 | 140050 | 0.0 | - | | 17.2261 | 140100 | 0.0 | - | | 17.2323 | 140150 | 0.0002 | - | | 17.2384 | 140200 | 0.0006 | - | | 17.2446 | 140250 | 0.0002 | - | | 17.2507 | 140300 | 0.0003 | - | | 17.2569 | 140350 | 0.0 | - | | 17.2630 | 140400 | 0.0002 | - | | 17.2692 | 140450 | 0.0002 | - | | 17.2753 | 140500 | 0.0 | - | | 17.2814 | 140550 | 0.0 | - | | 17.2876 | 140600 | 0.0002 | - | | 17.2937 | 140650 | 0.0 | - | | 17.2999 | 140700 | 0.0 | - | | 17.3060 | 140750 | 0.0 | - | | 17.3122 | 140800 | 0.0 | - | | 17.3183 | 140850 | 0.0 | - | | 17.3245 | 140900 | 0.0 | - | | 17.3306 | 140950 | 0.0 | - | | 17.3368 | 141000 | 0.0 | - | | 17.3429 | 141050 | 0.0002 | - | | 17.3491 | 141100 | 0.0 | - | | 17.3552 | 141150 | 0.0 | - | | 17.3614 | 141200 | 0.0 | - | | 17.3675 | 141250 | 0.0 | - | | 17.3737 | 141300 | 0.0002 | - | | 17.3798 | 141350 | 0.0 | - | | 17.3860 | 141400 | 0.0 | - | | 
17.3921 | 141450 | 0.0 | - | | 17.3983 | 141500 | 0.0 | - | | 17.4044 | 141550 | 0.0 | - | | 17.4105 | 141600 | 0.0 | - | | 17.4167 | 141650 | 0.0002 | - | | 17.4228 | 141700 | 0.0 | - | | 17.4290 | 141750 | 0.0 | - | | 17.4351 | 141800 | 0.0002 | - | | 17.4413 | 141850 | 0.0 | - | | 17.4474 | 141900 | 0.0 | - | | 17.4536 | 141950 | 0.0 | - | | 17.4597 | 142000 | 0.0 | - | | 17.4659 | 142050 | 0.0 | - | | 17.4720 | 142100 | 0.0 | - | | 17.4782 | 142150 | 0.0 | - | | 17.4843 | 142200 | 0.0 | - | | 17.4905 | 142250 | 0.0 | - | | 17.4966 | 142300 | 0.0 | - | | 17.5028 | 142350 | 0.0 | - | | 17.5089 | 142400 | 0.0 | - | | 17.5151 | 142450 | 0.0 | - | | 17.5212 | 142500 | 0.0 | - | | 17.5274 | 142550 | 0.0 | - | | 17.5335 | 142600 | 0.0 | - | | 17.5397 | 142650 | 0.0002 | - | | 17.5458 | 142700 | 0.0 | - | | 17.5519 | 142750 | 0.0002 | - | | 17.5581 | 142800 | 0.0 | - | | 17.5642 | 142850 | 0.0 | - | | 17.5704 | 142900 | 0.0 | - | | 17.5765 | 142950 | 0.0 | - | | 17.5827 | 143000 | 0.0 | - | | 17.5888 | 143050 | 0.0 | - | | 17.5950 | 143100 | 0.0002 | - | | 17.6011 | 143150 | 0.0 | - | | 17.6073 | 143200 | 0.0001 | - | | 17.6134 | 143250 | 0.0002 | - | | 17.6196 | 143300 | 0.0 | - | | 17.6257 | 143350 | 0.0002 | - | | 17.6319 | 143400 | 0.0002 | - | | 17.6380 | 143450 | 0.0 | - | | 17.6442 | 143500 | 0.0 | - | | 17.6503 | 143550 | 0.0 | - | | 17.6565 | 143600 | 0.0 | - | | 17.6626 | 143650 | 0.0 | - | | 17.6688 | 143700 | 0.0 | - | | 17.6749 | 143750 | 0.0 | - | | 17.6811 | 143800 | 0.0 | - | | 17.6872 | 143850 | 0.0 | - | | 17.6933 | 143900 | 0.0 | - | | 17.6995 | 143950 | 0.0 | - | | 17.7056 | 144000 | 0.0 | - | | 17.7118 | 144050 | 0.0 | - | | 17.7179 | 144100 | 0.0 | - | | 17.7241 | 144150 | 0.0001 | - | | 17.7302 | 144200 | 0.0004 | - | | 17.7364 | 144250 | 0.0 | - | | 17.7425 | 144300 | 0.0 | - | | 17.7487 | 144350 | 0.0 | - | | 17.7548 | 144400 | 0.0 | - | | 17.7610 | 144450 | 0.0 | - | | 17.7671 | 144500 | 0.0 | - | | 17.7733 | 144550 | 0.0 | - | | 17.7794 | 
144600 | 0.0 | - | | 17.7856 | 144650 | 0.0 | - | | 17.7917 | 144700 | 0.0 | - | | 17.7979 | 144750 | 0.0002 | - | | 17.8040 | 144800 | 0.0 | - | | 17.8102 | 144850 | 0.0 | - | | 17.8163 | 144900 | 0.0 | - | | 17.8225 | 144950 | 0.0002 | - | | 17.8286 | 145000 | 0.0 | - | | 17.8347 | 145050 | 0.0 | - | | 17.8409 | 145100 | 0.0 | - | | 17.8470 | 145150 | 0.0 | - | | 17.8532 | 145200 | 0.0 | - | | 17.8593 | 145250 | 0.0 | - | | 17.8655 | 145300 | 0.0 | - | | 17.8716 | 145350 | 0.0 | - | | 17.8778 | 145400 | 0.0 | - | | 17.8839 | 145450 | 0.0 | - | | 17.8901 | 145500 | 0.0 | - | | 17.8962 | 145550 | 0.0 | - | | 17.9024 | 145600 | 0.0007 | - | | 17.9085 | 145650 | 0.0029 | - | | 17.9147 | 145700 | 0.0025 | - | | 17.9208 | 145750 | 0.0023 | - | | 17.9270 | 145800 | 0.0021 | - | | 17.9331 | 145850 | 0.003 | - | | 17.9393 | 145900 | 0.0014 | - | | 17.9454 | 145950 | 0.0019 | - | | 17.9516 | 146000 | 0.0022 | - | | 17.9577 | 146050 | 0.0019 | - | | 17.9639 | 146100 | 0.002 | - | | 17.9700 | 146150 | 0.0022 | - | | 17.9761 | 146200 | 0.0023 | - | | 17.9823 | 146250 | 0.0019 | - | | 17.9884 | 146300 | 0.002 | - | | 17.9946 | 146350 | 0.002 | - | | 18.0007 | 146400 | 0.0022 | - | | 18.0069 | 146450 | 0.0017 | - | | 18.0130 | 146500 | 0.0004 | - | | 18.0192 | 146550 | 0.0003 | - | | 18.0253 | 146600 | 0.0005 | - | | 18.0315 | 146650 | 0.0002 | - | | 18.0376 | 146700 | 0.0002 | - | | 18.0438 | 146750 | 0.0006 | - | | 18.0499 | 146800 | 0.0008 | - | | 18.0561 | 146850 | 0.0001 | - | | 18.0622 | 146900 | 0.0006 | - | | 18.0684 | 146950 | 0.0006 | - | | 18.0745 | 147000 | 0.0006 | - | | 18.0807 | 147050 | 0.0002 | - | | 18.0868 | 147100 | 0.0 | - | | 18.0930 | 147150 | 0.0 | - | | 18.0991 | 147200 | 0.0 | - | | 18.1053 | 147250 | 0.0003 | - | | 18.1114 | 147300 | 0.0 | - | | 18.1175 | 147350 | 0.0 | - | | 18.1237 | 147400 | 0.0 | - | | 18.1298 | 147450 | 0.0 | - | | 18.1360 | 147500 | 0.0 | - | | 18.1421 | 147550 | 0.0 | - | | 18.1483 | 147600 | 0.0 | - | | 18.1544 | 147650 | 0.0 
| - | | 18.1606 | 147700 | 0.0 | - | | 18.1667 | 147750 | 0.0 | - | | 18.1729 | 147800 | 0.0001 | - | | 18.1790 | 147850 | 0.0002 | - | | 18.1852 | 147900 | 0.0 | - | | 18.1913 | 147950 | 0.0 | - | | 18.1975 | 148000 | 0.0 | - | | 18.2036 | 148050 | 0.0 | - | | 18.2098 | 148100 | 0.0 | - | | 18.2159 | 148150 | 0.0 | - | | 18.2221 | 148200 | 0.0 | - | | 18.2282 | 148250 | 0.0 | - | | 18.2344 | 148300 | 0.0 | - | | 18.2405 | 148350 | 0.0 | - | | 18.2466 | 148400 | 0.0 | - | | 18.2528 | 148450 | 0.0 | - | | 18.2589 | 148500 | 0.0 | - | | 18.2651 | 148550 | 0.0 | - | | 18.2712 | 148600 | 0.0 | - | | 18.2774 | 148650 | 0.0002 | - | | 18.2835 | 148700 | 0.0 | - | | 18.2897 | 148750 | 0.0 | - | | 18.2958 | 148800 | 0.0 | - | | 18.3020 | 148850 | 0.0 | - | | 18.3081 | 148900 | 0.0 | - | | 18.3143 | 148950 | 0.0001 | - | | 18.3204 | 149000 | 0.0001 | - | | 18.3266 | 149050 | 0.0003 | - | | 18.3327 | 149100 | 0.0002 | - | | 18.3389 | 149150 | 0.0003 | - | | 18.3450 | 149200 | 0.0002 | - | | 18.3512 | 149250 | 0.0 | - | | 18.3573 | 149300 | 0.0 | - | | 18.3635 | 149350 | 0.0002 | - | | 18.3696 | 149400 | 0.0002 | - | | 18.3758 | 149450 | 0.0 | - | | 18.3819 | 149500 | 0.0002 | - | | 18.3880 | 149550 | 0.0 | - | | 18.3942 | 149600 | 0.0 | - | | 18.4003 | 149650 | 0.0 | - | | 18.4065 | 149700 | 0.0002 | - | | 18.4126 | 149750 | 0.0 | - | | 18.4188 | 149800 | 0.0 | - | | 18.4249 | 149850 | 0.0 | - | | 18.4311 | 149900 | 0.0 | - | | 18.4372 | 149950 | 0.0003 | - | | 18.4434 | 150000 | 0.0 | - | | 18.4495 | 150050 | 0.0003 | - | | 18.4557 | 150100 | 0.0002 | - | | 18.4618 | 150150 | 0.0001 | - | | 18.4680 | 150200 | 0.0005 | - | | 18.4741 | 150250 | 0.0002 | - | | 18.4803 | 150300 | 0.0002 | - | | 18.4864 | 150350 | 0.0003 | - | | 18.4926 | 150400 | 0.0 | - | | 18.4987 | 150450 | 0.0001 | - | | 18.5049 | 150500 | 0.0002 | - | | 18.5110 | 150550 | 0.0003 | - | | 18.5172 | 150600 | 0.0 | - | | 18.5233 | 150650 | 0.0 | - | | 18.5294 | 150700 | 0.0 | - | | 18.5356 | 150750 | 0.0 | - | 
| 18.5417 | 150800 | 0.0 | - | | 18.5479 | 150850 | 0.0 | - | | 18.5540 | 150900 | 0.0 | - | | 18.5602 | 150950 | 0.0003 | - | | 18.5663 | 151000 | 0.0 | - | | 18.5725 | 151050 | 0.0 | - | | 18.5786 | 151100 | 0.0 | - | | 18.5848 | 151150 | 0.0 | - | | 18.5909 | 151200 | 0.0 | - | | 18.5971 | 151250 | 0.0 | - | | 18.6032 | 151300 | 0.0001 | - | | 18.6094 | 151350 | 0.0 | - | | 18.6155 | 151400 | 0.0 | - | | 18.6217 | 151450 | 0.0 | - | | 18.6278 | 151500 | 0.0 | - | | 18.6340 | 151550 | 0.0 | - | | 18.6401 | 151600 | 0.0 | - | | 18.6463 | 151650 | 0.0 | - | | 18.6524 | 151700 | 0.0 | - | | 18.6586 | 151750 | 0.0 | - | | 18.6647 | 151800 | 0.0 | - | | 18.6708 | 151850 | 0.0 | - | | 18.6770 | 151900 | 0.0 | - | | 18.6831 | 151950 | 0.0 | - | | 18.6893 | 152000 | 0.0 | - | | 18.6954 | 152050 | 0.0 | - | | 18.7016 | 152100 | 0.0 | - | | 18.7077 | 152150 | 0.0 | - | | 18.7139 | 152200 | 0.0 | - | | 18.7200 | 152250 | 0.0 | - | | 18.7262 | 152300 | 0.0 | - | | 18.7323 | 152350 | 0.0 | - | | 18.7385 | 152400 | 0.0001 | - | | 18.7446 | 152450 | 0.0001 | - | | 18.7508 | 152500 | 0.0 | - | | 18.7569 | 152550 | 0.0 | - | | 18.7631 | 152600 | 0.0 | - | | 18.7692 | 152650 | 0.0 | - | | 18.7754 | 152700 | 0.0 | - | | 18.7815 | 152750 | 0.0 | - | | 18.7877 | 152800 | 0.0 | - | | 18.7938 | 152850 | 0.0 | - | | 18.8000 | 152900 | 0.0 | - | | 18.8061 | 152950 | 0.0 | - | | 18.8122 | 153000 | 0.0 | - | | 18.8184 | 153050 | 0.0 | - | | 18.8245 | 153100 | 0.0 | - | | 18.8307 | 153150 | 0.0 | - | | 18.8368 | 153200 | 0.0 | - | | 18.8430 | 153250 | 0.0 | - | | 18.8491 | 153300 | 0.0 | - | | 18.8553 | 153350 | 0.0 | - | | 18.8614 | 153400 | 0.0 | - | | 18.8676 | 153450 | 0.0001 | - | | 18.8737 | 153500 | 0.0002 | - | | 18.8799 | 153550 | 0.0 | - | | 18.8860 | 153600 | 0.0 | - | | 18.8922 | 153650 | 0.0002 | - | | 18.8983 | 153700 | 0.0002 | - | | 18.9045 | 153750 | 0.0005 | - | | 18.9106 | 153800 | 0.0001 | - | | 18.9168 | 153850 | 0.0 | - | | 18.9229 | 153900 | 0.0 | - | | 18.9291 | 
153950 | 0.0002 | - | | 18.9352 | 154000 | 0.0005 | - | | 18.9414 | 154050 | 0.0003 | - | | 18.9475 | 154100 | 0.0001 | - | | 18.9536 | 154150 | 0.0 | - | | 18.9598 | 154200 | 0.0 | - | | 18.9659 | 154250 | 0.0001 | - | | 18.9721 | 154300 | 0.0001 | - | | 18.9782 | 154350 | 0.0008 | - | | 18.9844 | 154400 | 0.0002 | - | | 18.9905 | 154450 | 0.0003 | - | | 18.9967 | 154500 | 0.001 | - | | 19.0028 | 154550 | 0.0009 | - | | 19.0090 | 154600 | 0.0002 | - | | 19.0151 | 154650 | 0.0002 | - | | 19.0213 | 154700 | 0.0003 | - | | 19.0274 | 154750 | 0.0003 | - | | 19.0336 | 154800 | 0.0003 | - | | 19.0397 | 154850 | 0.0 | - | | 19.0459 | 154900 | 0.0008 | - | | 19.0520 | 154950 | 0.0 | - | | 19.0582 | 155000 | 0.0002 | - | | 19.0643 | 155050 | 0.0003 | - | | 19.0705 | 155100 | 0.0014 | - | | 19.0766 | 155150 | 0.0014 | - | | 19.0827 | 155200 | 0.0002 | - | | 19.0889 | 155250 | 0.0001 | - | | 19.0950 | 155300 | 0.0 | - | | 19.1012 | 155350 | 0.0001 | - | | 19.1073 | 155400 | 0.0002 | - | | 19.1135 | 155450 | 0.0003 | - | | 19.1196 | 155500 | 0.0 | - | | 19.1258 | 155550 | 0.0001 | - | | 19.1319 | 155600 | 0.0 | - | | 19.1381 | 155650 | 0.0 | - | | 19.1442 | 155700 | 0.0005 | - | | 19.1504 | 155750 | 0.0003 | - | | 19.1565 | 155800 | 0.0003 | - | | 19.1627 | 155850 | 0.0002 | - | | 19.1688 | 155900 | 0.0003 | - | | 19.1750 | 155950 | 0.0 | - | | 19.1811 | 156000 | 0.0002 | - | | 19.1873 | 156050 | 0.0002 | - | | 19.1934 | 156100 | 0.0003 | - | | 19.1996 | 156150 | 0.0 | - | | 19.2057 | 156200 | 0.0 | - | | 19.2119 | 156250 | 0.0002 | - | | 19.2180 | 156300 | 0.0 | - | | 19.2241 | 156350 | 0.0 | - | | 19.2303 | 156400 | 0.0 | - | | 19.2364 | 156450 | 0.0 | - | | 19.2426 | 156500 | 0.0 | - | | 19.2487 | 156550 | 0.0 | - | | 19.2549 | 156600 | 0.0 | - | | 19.2610 | 156650 | 0.0 | - | | 19.2672 | 156700 | 0.0 | - | | 19.2733 | 156750 | 0.0 | - | | 19.2795 | 156800 | 0.0 | - | | 19.2856 | 156850 | 0.0 | - | | 19.2918 | 156900 | 0.0 | - | | 19.2979 | 156950 | 0.0 | - | | 19.3041 | 
157000 | 0.0 | - | | 19.3102 | 157050 | 0.0 | - | | 19.3164 | 157100 | 0.0 | - | | 19.3225 | 157150 | 0.0 | - | | 19.3287 | 157200 | 0.0 | - | | 19.3348 | 157250 | 0.0 | - | | 19.3410 | 157300 | 0.0 | - | | 19.3471 | 157350 | 0.0 | - | | 19.3533 | 157400 | 0.0001 | - | | 19.3594 | 157450 | 0.0002 | - | | 19.3655 | 157500 | 0.0 | - | | 19.3717 | 157550 | 0.0 | - | | 19.3778 | 157600 | 0.0 | - | | 19.3840 | 157650 | 0.0 | - | | 19.3901 | 157700 | 0.0 | - | | 19.3963 | 157750 | 0.0 | - | | 19.4024 | 157800 | 0.0 | - | | 19.4086 | 157850 | 0.0 | - | | 19.4147 | 157900 | 0.0 | - | | 19.4209 | 157950 | 0.0 | - | | 19.4270 | 158000 | 0.0 | - | | 19.4332 | 158050 | 0.0 | - | | 19.4393 | 158100 | 0.0 | - | | 19.4455 | 158150 | 0.0 | - | | 19.4516 | 158200 | 0.0005 | - | | 19.4578 | 158250 | 0.0 | - | | 19.4639 | 158300 | 0.0 | - | | 19.4701 | 158350 | 0.0002 | - | | 19.4762 | 158400 | 0.0 | - | | 19.4824 | 158450 | 0.0 | - | | 19.4885 | 158500 | 0.0 | - | | 19.4947 | 158550 | 0.0 | - | | 19.5008 | 158600 | 0.0 | - | | 19.5069 | 158650 | 0.0 | - | | 19.5131 | 158700 | 0.0 | - | | 19.5192 | 158750 | 0.0 | - | | 19.5254 | 158800 | 0.0 | - | | 19.5315 | 158850 | 0.0 | - | | 19.5377 | 158900 | 0.0 | - | | 19.5438 | 158950 | 0.0 | - | | 19.5500 | 159000 | 0.0 | - | | 19.5561 | 159050 | 0.0 | - | | 19.5623 | 159100 | 0.0 | - | | 19.5684 | 159150 | 0.0 | - | | 19.5746 | 159200 | 0.0 | - | | 19.5807 | 159250 | 0.0 | - | | 19.5869 | 159300 | 0.0 | - | | 19.5930 | 159350 | 0.0 | - | | 19.5992 | 159400 | 0.0 | - | | 19.6053 | 159450 | 0.0 | - | | 19.6115 | 159500 | 0.0 | - | | 19.6176 | 159550 | 0.0 | - | | 19.6238 | 159600 | 0.0 | - | | 19.6299 | 159650 | 0.0001 | - | | 19.6361 | 159700 | 0.0005 | - | | 19.6422 | 159750 | 0.0002 | - | | 19.6483 | 159800 | 0.0002 | - | | 19.6545 | 159850 | 0.0003 | - | | 19.6606 | 159900 | 0.0002 | - | | 19.6668 | 159950 | 0.0 | - | | 19.6729 | 160000 | 0.0008 | - | | 19.6791 | 160050 | 0.0 | - | | 19.6852 | 160100 | 0.0003 | - | | 19.6914 | 160150 | 
0.0008 | - | | 19.6975 | 160200 | 0.0002 | - | | 19.7037 | 160250 | 0.0003 | - | | 19.7098 | 160300 | 0.0002 | - | | 19.7160 | 160350 | 0.0011 | - | | 19.7221 | 160400 | 0.0 | - | | 19.7283 | 160450 | 0.0003 | - | | 19.7344 | 160500 | 0.0002 | - | | 19.7406 | 160550 | 0.0 | - | | 19.7467 | 160600 | 0.0002 | - | | 19.7529 | 160650 | 0.0002 | - | | 19.7590 | 160700 | 0.0002 | - | | 19.7652 | 160750 | 0.0008 | - | | 19.7713 | 160800 | 0.0005 | - | | 19.7774 | 160850 | 0.0002 | - | | 19.7836 | 160900 | 0.0003 | - | | 19.7897 | 160950 | 0.0002 | - | | 19.7959 | 161000 | 0.0005 | - | | 19.8020 | 161050 | 0.0005 | - | | 19.8082 | 161100 | 0.0 | - | | 19.8143 | 161150 | 0.0003 | - | | 19.8205 | 161200 | 0.0005 | - | | 19.8266 | 161250 | 0.0002 | - | | 19.8328 | 161300 | 0.0002 | - | | 19.8389 | 161350 | 0.0006 | - | | 19.8451 | 161400 | 0.0003 | - | | 19.8512 | 161450 | 0.0005 | - | | 19.8574 | 161500 | 0.0002 | - | | 19.8635 | 161550 | 0.0 | - | | 19.8697 | 161600 | 0.0 | - | | 19.8758 | 161650 | 0.0 | - | | 19.8820 | 161700 | 0.0002 | - | | 19.8881 | 161750 | 0.0005 | - | | 19.8943 | 161800 | 0.0 | - | | 19.9004 | 161850 | 0.0003 | - | | 19.9066 | 161900 | 0.0 | - | | 19.9127 | 161950 | 0.0 | - | | 19.9188 | 162000 | 0.0002 | - | | 19.9250 | 162050 | 0.0002 | - | | 19.9311 | 162100 | 0.0006 | - | | 19.9373 | 162150 | 0.0003 | - | | 19.9434 | 162200 | 0.0005 | - | | 19.9496 | 162250 | 0.0002 | - | | 19.9557 | 162300 | 0.0 | - | | 19.9619 | 162350 | 0.0005 | - | | 19.9680 | 162400 | 0.0 | - | | 19.9742 | 162450 | 0.0003 | - | | 19.9803 | 162500 | 0.0003 | - | | 19.9865 | 162550 | 0.0002 | - | | 19.9926 | 162600 | 0.0002 | - | | 19.9988 | 162650 | 0.0007 | - | | 20.0049 | 162700 | 0.0005 | - | | 20.0111 | 162750 | 0.0 | - | | 20.0172 | 162800 | 0.0003 | - | | 20.0234 | 162850 | 0.0003 | - | | 20.0295 | 162900 | 0.0 | - | | 20.0357 | 162950 | 0.0002 | - | | 20.0418 | 163000 | 0.0 | - | | 20.0480 | 163050 | 0.0003 | - | | 20.0541 | 163100 | 0.0002 | - | | 20.0602 | 163150 | 
0.0003 | - | | 20.0664 | 163200 | 0.0002 | - | | 20.0725 | 163250 | 0.0 | - | | 20.0787 | 163300 | 0.0 | - | | 20.0848 | 163350 | 0.0003 | - | | 20.0910 | 163400 | 0.0 | - | | 20.0971 | 163450 | 0.0003 | - | | 20.1033 | 163500 | 0.0003 | - | | 20.1094 | 163550 | 0.0006 | - | | 20.1156 | 163600 | 0.0003 | - | | 20.1217 | 163650 | 0.0 | - | | 20.1279 | 163700 | 0.0 | - | | 20.1340 | 163750 | 0.0003 | - | | 20.1402 | 163800 | 0.0 | - | | 20.1463 | 163850 | 0.0005 | - | | 20.1525 | 163900 | 0.0005 | - | | 20.1586 | 163950 | 0.0003 | - | | 20.1648 | 164000 | 0.0005 | - | | 20.1709 | 164050 | 0.0006 | - | | 20.1771 | 164100 | 0.0 | - | | 20.1832 | 164150 | 0.0002 | - | | 20.1894 | 164200 | 0.0002 | - | | 20.1955 | 164250 | 0.0009 | - | | 20.2016 | 164300 | 0.0 | - | | 20.2078 | 164350 | 0.0 | - | | 20.2139 | 164400 | 0.0 | - | | 20.2201 | 164450 | 0.0 | - | | 20.2262 | 164500 | 0.0002 | - | | 20.2324 | 164550 | 0.0 | - | | 20.2385 | 164600 | 0.0001 | - | | 20.2447 | 164650 | 0.0 | - | | 20.2508 | 164700 | 0.0003 | - | | 20.2570 | 164750 | 0.0 | - | | 20.2631 | 164800 | 0.0 | - | | 20.2693 | 164850 | 0.0 | - | | 20.2754 | 164900 | 0.0 | - | | 20.2816 | 164950 | 0.0002 | - | | 20.2877 | 165000 | 0.0 | - | | 20.2939 | 165050 | 0.0 | - | | 20.3000 | 165100 | 0.0002 | - | | 20.3062 | 165150 | 0.0 | - | | 20.3123 | 165200 | 0.0 | - | | 20.3185 | 165250 | 0.0 | - | | 20.3246 | 165300 | 0.0 | - | | 20.3308 | 165350 | 0.0 | - | | 20.3369 | 165400 | 0.0001 | - | | 20.3430 | 165450 | 0.0 | - | | 20.3492 | 165500 | 0.0 | - | | 20.3553 | 165550 | 0.0 | - | | 20.3615 | 165600 | 0.0 | - | | 20.3676 | 165650 | 0.0 | - | | 20.3738 | 165700 | 0.0001 | - | | 20.3799 | 165750 | 0.0 | - | | 20.3861 | 165800 | 0.0 | - | | 20.3922 | 165850 | 0.0002 | - | | 20.3984 | 165900 | 0.0002 | - | | 20.4045 | 165950 | 0.0 | - | | 20.4107 | 166000 | 0.0 | - | | 20.4168 | 166050 | 0.0 | - | | 20.4230 | 166100 | 0.0 | - | | 20.4291 | 166150 | 0.0 | - | | 20.4353 | 166200 | 0.0001 | - | | 20.4414 | 166250 | 
0.0 | - | | 20.4476 | 166300 | 0.0 | - | | 20.4537 | 166350 | 0.0002 | - | | 20.4599 | 166400 | 0.0 | - | | 20.4660 | 166450 | 0.0 | - | | 20.4722 | 166500 | 0.0002 | - | | 20.4783 | 166550 | 0.0001 | - | | 20.4844 | 166600 | 0.0 | - | | 20.4906 | 166650 | 0.0 | - | | 20.4967 | 166700 | 0.0 | - | | 20.5029 | 166750 | 0.0 | - | | 20.5090 | 166800 | 0.0 | - | | 20.5152 | 166850 | 0.0002 | - | | 20.5213 | 166900 | 0.0 | - | | 20.5275 | 166950 | 0.0 | - | | 20.5336 | 167000 | 0.0 | - | | 20.5398 | 167050 | 0.0 | - | | 20.5459 | 167100 | 0.0005 | - | | 20.5521 | 167150 | 0.0002 | - | | 20.5582 | 167200 | 0.0002 | - | | 20.5644 | 167250 | 0.0 | - | | 20.5705 | 167300 | 0.0001 | - | | 20.5767 | 167350 | 0.0 | - | | 20.5828 | 167400 | 0.0003 | - | | 20.5890 | 167450 | 0.0 | - | | 20.5951 | 167500 | 0.0 | - | | 20.6013 | 167550 | 0.0001 | - | | 20.6074 | 167600 | 0.0003 | - | | 20.6135 | 167650 | 0.0007 | - | | 20.6197 | 167700 | 0.0005 | - | | 20.6258 | 167750 | 0.0001 | - | | 20.6320 | 167800 | 0.0 | - | | 20.6381 | 167850 | 0.0001 | - | | 20.6443 | 167900 | 0.0 | - | | 20.6504 | 167950 | 0.0 | - | | 20.6566 | 168000 | 0.0 | - | | 20.6627 | 168050 | 0.0 | - | | 20.6689 | 168100 | 0.0002 | - | | 20.6750 | 168150 | 0.0 | - | | 20.6812 | 168200 | 0.0 | - | | 20.6873 | 168250 | 0.0001 | - | | 20.6935 | 168300 | 0.0 | - | | 20.6996 | 168350 | 0.0 | - | | 20.7058 | 168400 | 0.0 | - | | 20.7119 | 168450 | 0.0 | - | | 20.7181 | 168500 | 0.0 | - | | 20.7242 | 168550 | 0.0 | - | | 20.7304 | 168600 | 0.0 | - | | 20.7365 | 168650 | 0.0 | - | | 20.7427 | 168700 | 0.0 | - | | 20.7488 | 168750 | 0.0 | - | | 20.7549 | 168800 | 0.0 | - | | 20.7611 | 168850 | 0.0 | - | | 20.7672 | 168900 | 0.0 | - | | 20.7734 | 168950 | 0.0 | - | | 20.7795 | 169000 | 0.0 | - | | 20.7857 | 169050 | 0.0 | - | | 20.7918 | 169100 | 0.0 | - | | 20.7980 | 169150 | 0.0 | - | | 20.8041 | 169200 | 0.0 | - | | 20.8103 | 169250 | 0.0 | - | | 20.8164 | 169300 | 0.0 | - | | 20.8226 | 169350 | 0.0 | - | | 20.8287 | 
169400 | 0.0 | - | | 20.8349 | 169450 | 0.0 | - | | 20.8410 | 169500 | 0.0001 | - | | 20.8472 | 169550 | 0.0002 | - | | 20.8533 | 169600 | 0.0 | - | | 20.8595 | 169650 | 0.0 | - | | 20.8656 | 169700 | 0.0 | - | | 20.8718 | 169750 | 0.0001 | - | | 20.8779 | 169800 | 0.0 | - | | 20.8841 | 169850 | 0.0 | - | | 20.8902 | 169900 | 0.0 | - | | 20.8963 | 169950 | 0.0 | - | | 20.9025 | 170000 | 0.0 | - | | 20.9086 | 170050 | 0.0001 | - | | 20.9148 | 170100 | 0.0 | - | | 20.9209 | 170150 | 0.0 | - | | 20.9271 | 170200 | 0.0 | - | | 20.9332 | 170250 | 0.0 | - | | 20.9394 | 170300 | 0.0 | - | | 20.9455 | 170350 | 0.0002 | - | | 20.9517 | 170400 | 0.0 | - | | 20.9578 | 170450 | 0.0 | - | | 20.9640 | 170500 | 0.0 | - | | 20.9701 | 170550 | 0.0 | - | | 20.9763 | 170600 | 0.0 | - | | 20.9824 | 170650 | 0.0 | - | | 20.9886 | 170700 | 0.0 | - | | 20.9947 | 170750 | 0.0 | - | | 21.0009 | 170800 | 0.0 | - | | 21.0070 | 170850 | 0.0 | - | | 21.0132 | 170900 | 0.0 | - | | 21.0193 | 170950 | 0.0 | - | | 21.0255 | 171000 | 0.0 | - | | 21.0316 | 171050 | 0.0 | - | | 21.0377 | 171100 | 0.0 | - | | 21.0439 | 171150 | 0.0 | - | | 21.0500 | 171200 | 0.0 | - | | 21.0562 | 171250 | 0.0 | - | | 21.0623 | 171300 | 0.0 | - | | 21.0685 | 171350 | 0.0 | - | | 21.0746 | 171400 | 0.0 | - | | 21.0808 | 171450 | 0.0 | - | | 21.0869 | 171500 | 0.0 | - | | 21.0931 | 171550 | 0.0 | - | | 21.0992 | 171600 | 0.0 | - | | 21.1054 | 171650 | 0.0 | - | | 21.1115 | 171700 | 0.0 | - | | 21.1177 | 171750 | 0.0 | - | | 21.1238 | 171800 | 0.0 | - | | 21.1300 | 171850 | 0.0 | - | | 21.1361 | 171900 | 0.0 | - | | 21.1423 | 171950 | 0.0 | - | | 21.1484 | 172000 | 0.0 | - | | 21.1546 | 172050 | 0.0 | - | | 21.1607 | 172100 | 0.0 | - | | 21.1669 | 172150 | 0.0 | - | | 21.1730 | 172200 | 0.0 | - | | 21.1791 | 172250 | 0.0 | - | | 21.1853 | 172300 | 0.0 | - | | 21.1914 | 172350 | 0.0 | - | | 21.1976 | 172400 | 0.0 | - | | 21.2037 | 172450 | 0.0 | - | | 21.2099 | 172500 | 0.0 | - | | 21.2160 | 172550 | 0.0 | - | | 21.2222 | 
172600 | 0.0 | - | | 21.2283 | 172650 | 0.0 | - | | 21.2345 | 172700 | 0.0 | - | | 21.2406 | 172750 | 0.0 | - | | 21.2468 | 172800 | 0.0 | - | | 21.2529 | 172850 | 0.0 | - | | 21.2591 | 172900 | 0.0 | - | | 21.2652 | 172950 | 0.0 | - | | 21.2714 | 173000 | 0.0 | - | | 21.2775 | 173050 | 0.0 | - | | 21.2837 | 173100 | 0.0 | - | | 21.2898 | 173150 | 0.0 | - | | 21.2960 | 173200 | 0.0 | - | | 21.3021 | 173250 | 0.0 | - | | 21.3083 | 173300 | 0.0 | - | | 21.3144 | 173350 | 0.0 | - | | 21.3205 | 173400 | 0.0 | - | | 21.3267 | 173450 | 0.0 | - | | 21.3328 | 173500 | 0.0 | - | | 21.3390 | 173550 | 0.0 | - | | 21.3451 | 173600 | 0.0 | - | | 21.3513 | 173650 | 0.0 | - | | 21.3574 | 173700 | 0.0 | - | | 21.3636 | 173750 | 0.0 | - | | 21.3697 | 173800 | 0.0 | - | | 21.3759 | 173850 | 0.0 | - | | 21.3820 | 173900 | 0.0 | - | | 21.3882 | 173950 | 0.0 | - | | 21.3943 | 174000 | 0.0 | - | | 21.4005 | 174050 | 0.0 | - | | 21.4066 | 174100 | 0.0 | - | | 21.4128 | 174150 | 0.0 | - | | 21.4189 | 174200 | 0.0 | - | | 21.4251 | 174250 | 0.0 | - | | 21.4312 | 174300 | 0.0 | - | | 21.4374 | 174350 | 0.0 | - | | 21.4435 | 174400 | 0.0002 | - | | 21.4496 | 174450 | 0.0 | - | | 21.4558 | 174500 | 0.0 | - | | 21.4619 | 174550 | 0.0 | - | | 21.4681 | 174600 | 0.0 | - | | 21.4742 | 174650 | 0.0 | - | | 21.4804 | 174700 | 0.0 | - | | 21.4865 | 174750 | 0.0 | - | | 21.4927 | 174800 | 0.0 | - | | 21.4988 | 174850 | 0.0 | - | | 21.5050 | 174900 | 0.0 | - | | 21.5111 | 174950 | 0.0 | - | | 21.5173 | 175000 | 0.0 | - | | 21.5234 | 175050 | 0.0 | - | | 21.5296 | 175100 | 0.0 | - | | 21.5357 | 175150 | 0.0 | - | | 21.5419 | 175200 | 0.0 | - | | 21.5480 | 175250 | 0.0 | - | | 21.5542 | 175300 | 0.0 | - | | 21.5603 | 175350 | 0.0 | - | | 21.5665 | 175400 | 0.0 | - | | 21.5726 | 175450 | 0.0 | - | | 21.5788 | 175500 | 0.0 | - | | 21.5849 | 175550 | 0.0 | - | | 21.5910 | 175600 | 0.0 | - | | 21.5972 | 175650 | 0.0 | - | | 21.6033 | 175700 | 0.0 | - | | 21.6095 | 175750 | 0.0 | - | | 21.6156 | 175800 | 0.0 
| - | | 21.6218 | 175850 | 0.0 | - | | 21.6279 | 175900 | 0.0 | - | | 21.6341 | 175950 | 0.0 | - | | 21.6402 | 176000 | 0.0 | - | | 21.6464 | 176050 | 0.0 | - | | 21.6525 | 176100 | 0.0 | - | | 21.6587 | 176150 | 0.0 | - | | 21.6648 | 176200 | 0.0 | - | | 21.6710 | 176250 | 0.0 | - | | 21.6771 | 176300 | 0.0 | - | | 21.6833 | 176350 | 0.0 | - | | 21.6894 | 176400 | 0.0 | - | | 21.6956 | 176450 | 0.0 | - | | 21.7017 | 176500 | 0.0 | - | | 21.7079 | 176550 | 0.0 | - | | 21.7140 | 176600 | 0.0 | - | | 21.7202 | 176650 | 0.0 | - | | 21.7263 | 176700 | 0.0 | - | | 21.7324 | 176750 | 0.0 | - | | 21.7386 | 176800 | 0.0 | - | | 21.7447 | 176850 | 0.0 | - | | 21.7509 | 176900 | 0.0 | - | | 21.7570 | 176950 | 0.0 | - | | 21.7632 | 177000 | 0.0 | - | | 21.7693 | 177050 | 0.0 | - | | 21.7755 | 177100 | 0.0 | - | | 21.7816 | 177150 | 0.0 | - | | 21.7878 | 177200 | 0.0 | - | | 21.7939 | 177250 | 0.0 | - | | 21.8001 | 177300 | 0.0 | - | | 21.8062 | 177350 | 0.0 | - | | 21.8124 | 177400 | 0.0 | - | | 21.8185 | 177450 | 0.0 | - | | 21.8247 | 177500 | 0.0 | - | | 21.8308 | 177550 | 0.0 | - | | 21.8370 | 177600 | 0.0 | - | | 21.8431 | 177650 | 0.0 | - | | 21.8493 | 177700 | 0.0 | - | | 21.8554 | 177750 | 0.0 | - | | 21.8616 | 177800 | 0.0 | - | | 21.8677 | 177850 | 0.0 | - | | 21.8738 | 177900 | 0.0 | - | | 21.8800 | 177950 | 0.0 | - | | 21.8861 | 178000 | 0.0 | - | | 21.8923 | 178050 | 0.0 | - | | 21.8984 | 178100 | 0.0 | - | | 21.9046 | 178150 | 0.0 | - | | 21.9107 | 178200 | 0.0 | - | | 21.9169 | 178250 | 0.0 | - | | 21.9230 | 178300 | 0.0 | - | | 21.9292 | 178350 | 0.0 | - | | 21.9353 | 178400 | 0.0 | - | | 21.9415 | 178450 | 0.0 | - | | 21.9476 | 178500 | 0.0 | - | | 21.9538 | 178550 | 0.0 | - | | 21.9599 | 178600 | 0.0 | - | | 21.9661 | 178650 | 0.0 | - | | 21.9722 | 178700 | 0.0 | - | | 21.9784 | 178750 | 0.0 | - | | 21.9845 | 178800 | 0.0 | - | | 21.9907 | 178850 | 0.0 | - | | 21.9968 | 178900 | 0.0 | - | | 22.0030 | 178950 | 0.0 | - | | 22.0091 | 179000 | 0.0 | - | | 22.0152 
| 179050 | 0.0 | - | | 22.0214 | 179100 | 0.0 | - | | 22.0275 | 179150 | 0.0 | - | | 22.0337 | 179200 | 0.0 | - | | 22.0398 | 179250 | 0.0 | - | | 22.0460 | 179300 | 0.0 | - | | 22.0521 | 179350 | 0.0 | - | | 22.0583 | 179400 | 0.0 | - | | 22.0644 | 179450 | 0.0 | - | | 22.0706 | 179500 | 0.0 | - | | 22.0767 | 179550 | 0.0 | - | | 22.0829 | 179600 | 0.0 | - | | 22.0890 | 179650 | 0.0 | - | | 22.0952 | 179700 | 0.0 | - | | 22.1013 | 179750 | 0.0 | - | | 22.1075 | 179800 | 0.0 | - | | 22.1136 | 179850 | 0.0 | - | | 22.1198 | 179900 | 0.0 | - | | 22.1259 | 179950 | 0.0 | - | | 22.1321 | 180000 | 0.0 | - | | 22.1382 | 180050 | 0.0 | - | | 22.1444 | 180100 | 0.0 | - | | 22.1505 | 180150 | 0.0 | - | | 22.1566 | 180200 | 0.0 | - | | 22.1628 | 180250 | 0.0 | - | | 22.1689 | 180300 | 0.0 | - | | 22.1751 | 180350 | 0.0 | - | | 22.1812 | 180400 | 0.0 | - | | 22.1874 | 180450 | 0.0 | - | | 22.1935 | 180500 | 0.0 | - | | 22.1997 | 180550 | 0.0 | - | | 22.2058 | 180600 | 0.0 | - | | 22.2120 | 180650 | 0.0 | - | | 22.2181 | 180700 | 0.0 | - | | 22.2243 | 180750 | 0.0 | - | | 22.2304 | 180800 | 0.0 | - | | 22.2366 | 180850 | 0.0 | - | | 22.2427 | 180900 | 0.0 | - | | 22.2489 | 180950 | 0.0 | - | | 22.2550 | 181000 | 0.0 | - | | 22.2612 | 181050 | 0.0 | - | | 22.2673 | 181100 | 0.0 | - | | 22.2735 | 181150 | 0.0 | - | | 22.2796 | 181200 | 0.0 | - | | 22.2857 | 181250 | 0.0 | - | | 22.2919 | 181300 | 0.0 | - | | 22.2980 | 181350 | 0.0 | - | | 22.3042 | 181400 | 0.0 | - | | 22.3103 | 181450 | 0.0 | - | | 22.3165 | 181500 | 0.0 | - | | 22.3226 | 181550 | 0.0 | - | | 22.3288 | 181600 | 0.0 | - | | 22.3349 | 181650 | 0.0 | - | | 22.3411 | 181700 | 0.0 | - | | 22.3472 | 181750 | 0.0 | - | | 22.3534 | 181800 | 0.0 | - | | 22.3595 | 181850 | 0.0 | - | | 22.3657 | 181900 | 0.0 | - | | 22.3718 | 181950 | 0.0 | - | | 22.3780 | 182000 | 0.0 | - | | 22.3841 | 182050 | 0.0 | - | | 22.3903 | 182100 | 0.0 | - | | 22.3964 | 182150 | 0.0 | - | | 22.4026 | 182200 | 0.0 | - | | 22.4087 | 182250 | 0.0 
| - | | 22.4149 | 182300 | 0.0 | - | | 22.4210 | 182350 | 0.0 | - | | 22.4271 | 182400 | 0.0 | - | | 22.4333 | 182450 | 0.0 | - | | 22.4394 | 182500 | 0.0 | - | | 22.4456 | 182550 | 0.0 | - | | 22.4517 | 182600 | 0.0 | - | | 22.4579 | 182650 | 0.0 | - | | 22.4640 | 182700 | 0.0 | - | | 22.4702 | 182750 | 0.0 | - | | 22.4763 | 182800 | 0.0 | - | | 22.4825 | 182850 | 0.0 | - | | 22.4886 | 182900 | 0.0 | - | | 22.4948 | 182950 | 0.0002 | - | | 22.5009 | 183000 | 0.0 | - | | 22.5071 | 183050 | 0.0 | - | | 22.5132 | 183100 | 0.0 | - | | 22.5194 | 183150 | 0.0 | - | | 22.5255 | 183200 | 0.0 | - | | 22.5317 | 183250 | 0.0 | - | | 22.5378 | 183300 | 0.0 | - | | 22.5440 | 183350 | 0.0 | - | | 22.5501 | 183400 | 0.0 | - | | 22.5563 | 183450 | 0.0 | - | | 22.5624 | 183500 | 0.0 | - | | 22.5685 | 183550 | 0.0 | - | | 22.5747 | 183600 | 0.0 | - | | 22.5808 | 183650 | 0.0 | - | | 22.5870 | 183700 | 0.0 | - | | 22.5931 | 183750 | 0.0 | - | | 22.5993 | 183800 | 0.0 | - | | 22.6054 | 183850 | 0.0 | - | | 22.6116 | 183900 | 0.0 | - | | 22.6177 | 183950 | 0.0 | - | | 22.6239 | 184000 | 0.0 | - | | 22.6300 | 184050 | 0.0 | - | | 22.6362 | 184100 | 0.0 | - | | 22.6423 | 184150 | 0.0 | - | | 22.6485 | 184200 | 0.0 | - | | 22.6546 | 184250 | 0.0 | - | | 22.6608 | 184300 | 0.0 | - | | 22.6669 | 184350 | 0.0 | - | | 22.6731 | 184400 | 0.0 | - | | 22.6792 | 184450 | 0.0 | - | | 22.6854 | 184500 | 0.0 | - | | 22.6915 | 184550 | 0.0 | - | | 22.6977 | 184600 | 0.0 | - | | 22.7038 | 184650 | 0.0 | - | | 22.7099 | 184700 | 0.0 | - | | 22.7161 | 184750 | 0.0 | - | | 22.7222 | 184800 | 0.0 | - | | 22.7284 | 184850 | 0.0 | - | | 22.7345 | 184900 | 0.0 | - | | 22.7407 | 184950 | 0.0 | - | | 22.7468 | 185000 | 0.0 | - | | 22.7530 | 185050 | 0.0001 | - | | 22.7591 | 185100 | 0.0002 | - | | 22.7653 | 185150 | 0.0 | - | | 22.7714 | 185200 | 0.0 | - | | 22.7776 | 185250 | 0.0 | - | | 22.7837 | 185300 | 0.0001 | - | | 22.7899 | 185350 | 0.0 | - | | 22.7960 | 185400 | 0.0 | - | | 22.8022 | 185450 | 0.0002 
| - | | 22.8083 | 185500 | 0.0003 | - | | 22.8145 | 185550 | 0.0 | - | | 22.8206 | 185600 | 0.0 | - | | 22.8268 | 185650 | 0.0001 | - | | 22.8329 | 185700 | 0.0 | - | | 22.8391 | 185750 | 0.0 | - | | 22.8452 | 185800 | 0.0 | - | | 22.8513 | 185850 | 0.0 | - | | 22.8575 | 185900 | 0.0 | - | | 22.8636 | 185950 | 0.0 | - | | 22.8698 | 186000 | 0.0 | - | | 22.8759 | 186050 | 0.0 | - | | 22.8821 | 186100 | 0.0 | - | | 22.8882 | 186150 | 0.0 | - | | 22.8944 | 186200 | 0.0 | - | | 22.9005 | 186250 | 0.0 | - | | 22.9067 | 186300 | 0.0 | - | | 22.9128 | 186350 | 0.0001 | - | | 22.9190 | 186400 | 0.0 | - | | 22.9251 | 186450 | 0.0 | - | | 22.9313 | 186500 | 0.0 | - | | 22.9374 | 186550 | 0.0 | - | | 22.9436 | 186600 | 0.0 | - | | 22.9497 | 186650 | 0.0 | - | | 22.9559 | 186700 | 0.0 | - | | 22.9620 | 186750 | 0.0 | - | | 22.9682 | 186800 | 0.0 | - | | 22.9743 | 186850 | 0.0 | - | | 22.9805 | 186900 | 0.0 | - | | 22.9866 | 186950 | 0.0 | - | | 22.9927 | 187000 | 0.0 | - | | 22.9989 | 187050 | 0.0 | - | | 23.0050 | 187100 | 0.0 | - | | 23.0112 | 187150 | 0.0 | - | | 23.0173 | 187200 | 0.0 | - | | 23.0235 | 187250 | 0.0 | - | | 23.0296 | 187300 | 0.0 | - | | 23.0358 | 187350 | 0.0 | - | | 23.0419 | 187400 | 0.0 | - | | 23.0481 | 187450 | 0.0 | - | | 23.0542 | 187500 | 0.0 | - | | 23.0604 | 187550 | 0.0 | - | | 23.0665 | 187600 | 0.0 | - | | 23.0727 | 187650 | 0.0 | - | | 23.0788 | 187700 | 0.0 | - | | 23.0850 | 187750 | 0.0 | - | | 23.0911 | 187800 | 0.0 | - | | 23.0973 | 187850 | 0.0 | - | | 23.1034 | 187900 | 0.0 | - | | 23.1096 | 187950 | 0.0002 | - | | 23.1157 | 188000 | 0.0 | - | | 23.1218 | 188050 | 0.0 | - | | 23.1280 | 188100 | 0.0 | - | | 23.1341 | 188150 | 0.0 | - | | 23.1403 | 188200 | 0.0 | - | | 23.1464 | 188250 | 0.0 | - | | 23.1526 | 188300 | 0.0 | - | | 23.1587 | 188350 | 0.0002 | - | | 23.1649 | 188400 | 0.0 | - | | 23.1710 | 188450 | 0.0 | - | | 23.1772 | 188500 | 0.0 | - | | 23.1833 | 188550 | 0.0 | - | | 23.1895 | 188600 | 0.0 | - | | 23.1956 | 188650 | 0.0 
| - | | 23.2018 | 188700 | 0.0 | - | | 23.2079 | 188750 | 0.0 | - | | 23.2141 | 188800 | 0.0 | - | | 23.2202 | 188850 | 0.0 | - | | 23.2264 | 188900 | 0.0 | - | | 23.2325 | 188950 | 0.0 | - | | 23.2387 | 189000 | 0.0 | - | | 23.2448 | 189050 | 0.0 | - | | 23.2510 | 189100 | 0.0 | - | | 23.2571 | 189150 | 0.0 | - | | 23.2632 | 189200 | 0.0 | - | | 23.2694 | 189250 | 0.0 | - | | 23.2755 | 189300 | 0.0 | - | | 23.2817 | 189350 | 0.0 | - | | 23.2878 | 189400 | 0.0 | - | | 23.2940 | 189450 | 0.0 | - | | 23.3001 | 189500 | 0.0 | - | | 23.3063 | 189550 | 0.0 | - | | 23.3124 | 189600 | 0.0 | - | | 23.3186 | 189650 | 0.0 | - | | 23.3247 | 189700 | 0.0 | - | | 23.3309 | 189750 | 0.0 | - | | 23.3370 | 189800 | 0.0 | - | | 23.3432 | 189850 | 0.0 | - | | 23.3493 | 189900 | 0.0 | - | | 23.3555 | 189950 | 0.0 | - | | 23.3616 | 190000 | 0.0 | - | | 23.3678 | 190050 | 0.0 | - | | 23.3739 | 190100 | 0.0 | - | | 23.3801 | 190150 | 0.0 | - | | 23.3862 | 190200 | 0.0002 | - | | 23.3924 | 190250 | 0.0 | - | | 23.3985 | 190300 | 0.0 | - | | 23.4046 | 190350 | 0.0 | - | | 23.4108 | 190400 | 0.0 | - | | 23.4169 | 190450 | 0.0 | - | | 23.4231 | 190500 | 0.0 | - | | 23.4292 | 190550 | 0.0 | - | | 23.4354 | 190600 | 0.0 | - | | 23.4415 | 190650 | 0.0 | - | | 23.4477 | 190700 | 0.0 | - | | 23.4538 | 190750 | 0.0 | - | | 23.4600 | 190800 | 0.0 | - | | 23.4661 | 190850 | 0.0 | - | | 23.4723 | 190900 | 0.0 | - | | 23.4784 | 190950 | 0.0 | - | | 23.4846 | 191000 | 0.0 | - | | 23.4907 | 191050 | 0.0 | - | | 23.4969 | 191100 | 0.0 | - | | 23.5030 | 191150 | 0.0 | - | | 23.5092 | 191200 | 0.0 | - | | 23.5153 | 191250 | 0.0 | - | | 23.5215 | 191300 | 0.0 | - | | 23.5276 | 191350 | 0.0 | - | | 23.5338 | 191400 | 0.0 | - | | 23.5399 | 191450 | 0.0 | - | | 23.5460 | 191500 | 0.0 | - | | 23.5522 | 191550 | 0.0 | - | | 23.5583 | 191600 | 0.0 | - | | 23.5645 | 191650 | 0.0 | - | | 23.5706 | 191700 | 0.0 | - | | 23.5768 | 191750 | 0.0 | - | | 23.5829 | 191800 | 0.0 | - | | 23.5891 | 191850 | 0.0 | - | | 
23.5952 | 191900 | 0.0 | - | | 23.6014 | 191950 | 0.0 | - | | 23.6075 | 192000 | 0.0 | - | | 23.6137 | 192050 | 0.0 | - | | 23.6198 | 192100 | 0.0 | - | | 23.6260 | 192150 | 0.0 | - | | 23.6321 | 192200 | 0.0 | - | | 23.6383 | 192250 | 0.0002 | - | | 23.6444 | 192300 | 0.0001 | - | | 23.6506 | 192350 | 0.0 | - | | 23.6567 | 192400 | 0.0 | - | | 23.6629 | 192450 | 0.0 | - | | 23.6690 | 192500 | 0.0 | - | | 23.6752 | 192550 | 0.0 | - | | 23.6813 | 192600 | 0.0 | - | | 23.6874 | 192650 | 0.0 | - | | 23.6936 | 192700 | 0.0 | - | | 23.6997 | 192750 | 0.0 | - | | 23.7059 | 192800 | 0.0 | - | | 23.7120 | 192850 | 0.0 | - | | 23.7182 | 192900 | 0.0 | - | | 23.7243 | 192950 | 0.0 | - | | 23.7305 | 193000 | 0.0 | - | | 23.7366 | 193050 | 0.0 | - | | 23.7428 | 193100 | 0.0 | - | | 23.7489 | 193150 | 0.0 | - | | 23.7551 | 193200 | 0.0 | - | | 23.7612 | 193250 | 0.0 | - | | 23.7674 | 193300 | 0.0 | - | | 23.7735 | 193350 | 0.0 | - | | 23.7797 | 193400 | 0.0 | - | | 23.7858 | 193450 | 0.0001 | - | | 23.7920 | 193500 | 0.0 | - | | 23.7981 | 193550 | 0.0 | - | | 23.8043 | 193600 | 0.0 | - | | 23.8104 | 193650 | 0.0 | - | | 23.8165 | 193700 | 0.0 | - | | 23.8227 | 193750 | 0.0 | - | | 23.8288 | 193800 | 0.0 | - | | 23.8350 | 193850 | 0.0 | - | | 23.8411 | 193900 | 0.0 | - | | 23.8473 | 193950 | 0.0 | - | | 23.8534 | 194000 | 0.0 | - | | 23.8596 | 194050 | 0.0 | - | | 23.8657 | 194100 | 0.0 | - | | 23.8719 | 194150 | 0.0 | - | | 23.8780 | 194200 | 0.0 | - | | 23.8842 | 194250 | 0.0 | - | | 23.8903 | 194300 | 0.0 | - | | 23.8965 | 194350 | 0.0 | - | | 23.9026 | 194400 | 0.0 | - | | 23.9088 | 194450 | 0.0 | - | | 23.9149 | 194500 | 0.0 | - | | 23.9211 | 194550 | 0.0 | - | | 23.9272 | 194600 | 0.0 | - | | 23.9334 | 194650 | 0.0 | - | | 23.9395 | 194700 | 0.0 | - | | 23.9457 | 194750 | 0.0 | - | | 23.9518 | 194800 | 0.0 | - | | 23.9579 | 194850 | 0.0 | - | | 23.9641 | 194900 | 0.0 | - | | 23.9702 | 194950 | 0.0 | - | | 23.9764 | 195000 | 0.0 | - | | 23.9825 | 195050 | 0.0 | - | | 
23.9887 | 195100 | 0.0 | - | | 23.9948 | 195150 | 0.0 | - | | 24.0010 | 195200 | 0.0 | - | | 24.0071 | 195250 | 0.0 | - | | 24.0133 | 195300 | 0.0 | - | | 24.0194 | 195350 | 0.0 | - | | 24.0256 | 195400 | 0.0 | - | | 24.0317 | 195450 | 0.0 | - | | 24.0379 | 195500 | 0.0 | - | | 24.0440 | 195550 | 0.0 | - | | 24.0502 | 195600 | 0.0001 | - | | 24.0563 | 195650 | 0.0 | - | | 24.0625 | 195700 | 0.0 | - | | 24.0686 | 195750 | 0.0 | - | | 24.0748 | 195800 | 0.0 | - | | 24.0809 | 195850 | 0.0 | - | | 24.0871 | 195900 | 0.0 | - | | 24.0932 | 195950 | 0.0 | - | | 24.0993 | 196000 | 0.0 | - | | 24.1055 | 196050 | 0.0 | - | | 24.1116 | 196100 | 0.0 | - | | 24.1178 | 196150 | 0.0 | - | | 24.1239 | 196200 | 0.0 | - | | 24.1301 | 196250 | 0.0 | - | | 24.1362 | 196300 | 0.0 | - | | 24.1424 | 196350 | 0.0 | - | | 24.1485 | 196400 | 0.0 | - | | 24.1547 | 196450 | 0.0 | - | | 24.1608 | 196500 | 0.0 | - | | 24.1670 | 196550 | 0.0 | - | | 24.1731 | 196600 | 0.0 | - | | 24.1793 | 196650 | 0.0 | - | | 24.1854 | 196700 | 0.0 | - | | 24.1916 | 196750 | 0.0 | - | | 24.1977 | 196800 | 0.0 | - | | 24.2039 | 196850 | 0.0 | - | | 24.2100 | 196900 | 0.0 | - | | 24.2162 | 196950 | 0.0 | - | | 24.2223 | 197000 | 0.0 | - | | 24.2285 | 197050 | 0.0 | - | | 24.2346 | 197100 | 0.0 | - | | 24.2407 | 197150 | 0.0 | - | | 24.2469 | 197200 | 0.0 | - | | 24.2530 | 197250 | 0.0 | - | | 24.2592 | 197300 | 0.0 | - | | 24.2653 | 197350 | 0.0 | - | | 24.2715 | 197400 | 0.0 | - | | 24.2776 | 197450 | 0.0 | - | | 24.2838 | 197500 | 0.0 | - | | 24.2899 | 197550 | 0.0001 | - | | 24.2961 | 197600 | 0.0 | - | | 24.3022 | 197650 | 0.0 | - | | 24.3084 | 197700 | 0.0 | - | | 24.3145 | 197750 | 0.0 | - | | 24.3207 | 197800 | 0.0 | - | | 24.3268 | 197850 | 0.0 | - | | 24.3330 | 197900 | 0.0 | - | | 24.3391 | 197950 | 0.0 | - | | 24.3453 | 198000 | 0.0 | - | | 24.3514 | 198050 | 0.0 | - | | 24.3576 | 198100 | 0.0 | - | | 24.3637 | 198150 | 0.0 | - | | 24.3699 | 198200 | 0.0 | - | | 24.3760 | 198250 | 0.0 | - | | 24.3821 | 
198300 | 0.0 | - | | 24.3883 | 198350 | 0.0 | - | | 24.3944 | 198400 | 0.0002 | - | | 24.4006 | 198450 | 0.0 | - | | 24.4067 | 198500 | 0.0 | - | | 24.4129 | 198550 | 0.0 | - | | 24.4190 | 198600 | 0.0 | - | | 24.4252 | 198650 | 0.0 | - | | 24.4313 | 198700 | 0.0 | - | | 24.4375 | 198750 | 0.0 | - | | 24.4436 | 198800 | 0.0 | - | | 24.4498 | 198850 | 0.0 | - | | 24.4559 | 198900 | 0.0 | - | | 24.4621 | 198950 | 0.0 | - | | 24.4682 | 199000 | 0.0 | - | | 24.4744 | 199050 | 0.0 | - | | 24.4805 | 199100 | 0.0 | - | | 24.4867 | 199150 | 0.0 | - | | 24.4928 | 199200 | 0.0 | - | | 24.4990 | 199250 | 0.0 | - | | 24.5051 | 199300 | 0.0 | - | | 24.5113 | 199350 | 0.0 | - | | 24.5174 | 199400 | 0.0 | - | | 24.5235 | 199450 | 0.0 | - | | 24.5297 | 199500 | 0.0 | - | | 24.5358 | 199550 | 0.0 | - | | 24.5420 | 199600 | 0.0 | - | | 24.5481 | 199650 | 0.0 | - | | 24.5543 | 199700 | 0.0 | - | | 24.5604 | 199750 | 0.0 | - | | 24.5666 | 199800 | 0.0 | - | | 24.5727 | 199850 | 0.0 | - | | 24.5789 | 199900 | 0.0 | - | | 24.5850 | 199950 | 0.0 | - | | 24.5912 | 200000 | 0.0 | - | | 24.5973 | 200050 | 0.0 | - | | 24.6035 | 200100 | 0.0 | - | | 24.6096 | 200150 | 0.0 | - | | 24.6158 | 200200 | 0.0 | - | | 24.6219 | 200250 | 0.0 | - | | 24.6281 | 200300 | 0.0 | - | | 24.6342 | 200350 | 0.0 | - | | 24.6404 | 200400 | 0.0 | - | | 24.6465 | 200450 | 0.0 | - | | 24.6526 | 200500 | 0.0 | - | | 24.6588 | 200550 | 0.0 | - | | 24.6649 | 200600 | 0.0002 | - | | 24.6711 | 200650 | 0.0 | - | | 24.6772 | 200700 | 0.0 | - | | 24.6834 | 200750 | 0.0 | - | | 24.6895 | 200800 | 0.0 | - | | 24.6957 | 200850 | 0.0 | - | | 24.7018 | 200900 | 0.0 | - | | 24.7080 | 200950 | 0.0 | - | | 24.7141 | 201000 | 0.0 | - | | 24.7203 | 201050 | 0.0 | - | | 24.7264 | 201100 | 0.0 | - | | 24.7326 | 201150 | 0.0 | - | | 24.7387 | 201200 | 0.0 | - | | 24.7449 | 201250 | 0.0 | - | | 24.7510 | 201300 | 0.0 | - | | 24.7572 | 201350 | 0.0 | - | | 24.7633 | 201400 | 0.0 | - | | 24.7695 | 201450 | 0.0 | - | | 24.7756 | 201500 | 
0.0 | - | | 24.7818 | 201550 | 0.0 | - | | 24.7879 | 201600 | 0.0 | - | | 24.7940 | 201650 | 0.0 | - | | 24.8002 | 201700 | 0.0 | - | | 24.8063 | 201750 | 0.0 | - | | 24.8125 | 201800 | 0.0 | - | | 24.8186 | 201850 | 0.0 | - | | 24.8248 | 201900 | 0.0 | - | | 24.8309 | 201950 | 0.0 | - | | 24.8371 | 202000 | 0.0 | - | | 24.8432 | 202050 | 0.0 | - | | 24.8494 | 202100 | 0.0 | - | | 24.8555 | 202150 | 0.0 | - | | 24.8617 | 202200 | 0.0 | - | | 24.8678 | 202250 | 0.0 | - | | 24.8740 | 202300 | 0.0 | - | | 24.8801 | 202350 | 0.0 | - | | 24.8863 | 202400 | 0.0 | - | | 24.8924 | 202450 | 0.0 | - | | 24.8986 | 202500 | 0.0 | - | | 24.9047 | 202550 | 0.0 | - | | 24.9109 | 202600 | 0.0 | - | | 24.9170 | 202650 | 0.0 | - | | 24.9232 | 202700 | 0.0 | - | | 24.9293 | 202750 | 0.0 | - | | 24.9354 | 202800 | 0.0 | - | | 24.9416 | 202850 | 0.0 | - | | 24.9477 | 202900 | 0.0 | - | | 24.9539 | 202950 | 0.0 | - | | 24.9600 | 203000 | 0.0 | - | | 24.9662 | 203050 | 0.0 | - | | 24.9723 | 203100 | 0.0 | - | | 24.9785 | 203150 | 0.0 | - | | 24.9846 | 203200 | 0.0 | - | | 24.9908 | 203250 | 0.0 | - | | 24.9969 | 203300 | 0.0 | - | | 25.0031 | 203350 | 0.0 | - | | 25.0092 | 203400 | 0.0 | - | | 25.0154 | 203450 | 0.0 | - | | 25.0215 | 203500 | 0.0 | - | | 25.0277 | 203550 | 0.0 | - | | 25.0338 | 203600 | 0.0 | - | | 25.0400 | 203650 | 0.0 | - | | 25.0461 | 203700 | 0.0 | - | | 25.0523 | 203750 | 0.0 | - | | 25.0584 | 203800 | 0.0 | - | | 25.0646 | 203850 | 0.0 | - | | 25.0707 | 203900 | 0.0 | - | | 25.0768 | 203950 | 0.0 | - | | 25.0830 | 204000 | 0.0 | - | | 25.0891 | 204050 | 0.0 | - | | 25.0953 | 204100 | 0.0 | - | | 25.1014 | 204150 | 0.0 | - | | 25.1076 | 204200 | 0.0 | - | | 25.1137 | 204250 | 0.0 | - | | 25.1199 | 204300 | 0.0 | - | | 25.1260 | 204350 | 0.0 | - | | 25.1322 | 204400 | 0.0 | - | | 25.1383 | 204450 | 0.0 | - | | 25.1445 | 204500 | 0.0 | - | | 25.1506 | 204550 | 0.0 | - | | 25.1568 | 204600 | 0.0 | - | | 25.1629 | 204650 | 0.0 | - | | 25.1691 | 204700 | 0.0 | - | | 
25.1752 | 204750 | 0.0 | - | | 25.1814 | 204800 | 0.0 | - | | 25.1875 | 204850 | 0.0 | - | | 25.1937 | 204900 | 0.0 | - | | 25.1998 | 204950 | 0.0 | - | | 25.2060 | 205000 | 0.0 | - | | 25.2121 | 205050 | 0.0 | - | | 25.2182 | 205100 | 0.0 | - | | 25.2244 | 205150 | 0.0 | - | | 25.2305 | 205200 | 0.0 | - | | 25.2367 | 205250 | 0.0 | - | | 25.2428 | 205300 | 0.0 | - | | 25.2490 | 205350 | 0.0 | - | | 25.2551 | 205400 | 0.0 | - | | 25.2613 | 205450 | 0.0 | - | | 25.2674 | 205500 | 0.0 | - | | 25.2736 | 205550 | 0.0 | - | | 25.2797 | 205600 | 0.0 | - | | 25.2859 | 205650 | 0.0 | - | | 25.2920 | 205700 | 0.0 | - | | 25.2982 | 205750 | 0.0 | - | | 25.3043 | 205800 | 0.0 | - | | 25.3105 | 205850 | 0.0 | - | | 25.3166 | 205900 | 0.0 | - | | 25.3228 | 205950 | 0.0 | - | | 25.3289 | 206000 | 0.0 | - | | 25.3351 | 206050 | 0.0 | - | | 25.3412 | 206100 | 0.0 | - | | 25.3474 | 206150 | 0.0 | - | | 25.3535 | 206200 | 0.0 | - | | 25.3596 | 206250 | 0.0 | - | | 25.3658 | 206300 | 0.0 | - | | 25.3719 | 206350 | 0.0 | - | | 25.3781 | 206400 | 0.0 | - | | 25.3842 | 206450 | 0.0 | - | | 25.3904 | 206500 | 0.0 | - | | 25.3965 | 206550 | 0.0 | - | | 25.4027 | 206600 | 0.0 | - | | 25.4088 | 206650 | 0.0 | - | | 25.4150 | 206700 | 0.0 | - | | 25.4211 | 206750 | 0.0 | - | | 25.4273 | 206800 | 0.0 | - | | 25.4334 | 206850 | 0.0 | - | | 25.4396 | 206900 | 0.0 | - | | 25.4457 | 206950 | 0.0 | - | | 25.4519 | 207000 | 0.0 | - | | 25.4580 | 207050 | 0.0 | - | | 25.4642 | 207100 | 0.0 | - | | 25.4703 | 207150 | 0.0 | - | | 25.4765 | 207200 | 0.0 | - | | 25.4826 | 207250 | 0.0 | - | | 25.4887 | 207300 | 0.0 | - | | 25.4949 | 207350 | 0.0 | - | | 25.5010 | 207400 | 0.0 | - | | 25.5072 | 207450 | 0.0 | - | | 25.5133 | 207500 | 0.0 | - | | 25.5195 | 207550 | 0.0 | - | | 25.5256 | 207600 | 0.0 | - | | 25.5318 | 207650 | 0.0 | - | | 25.5379 | 207700 | 0.0 | - | | 25.5441 | 207750 | 0.0 | - | | 25.5502 | 207800 | 0.0 | - | | 25.5564 | 207850 | 0.0 | - | | 25.5625 | 207900 | 0.0002 | - | | 25.5687 | 
207950 | 0.0006 | - | | 25.5748 | 208000 | 0.0009 | - | | 25.5810 | 208050 | 0.0 | - | | 25.5871 | 208100 | 0.0 | - | | 25.5933 | 208150 | 0.0 | - | | 25.5994 | 208200 | 0.0 | - | | 25.6056 | 208250 | 0.0 | - | | 25.6117 | 208300 | 0.0002 | - | | 25.6179 | 208350 | 0.0 | - | | 25.6240 | 208400 | 0.0 | - | | 25.6301 | 208450 | 0.0 | - | | 25.6363 | 208500 | 0.0 | - | | 25.6424 | 208550 | 0.0 | - | | 25.6486 | 208600 | 0.0002 | - | | 25.6547 | 208650 | 0.0 | - | | 25.6609 | 208700 | 0.0 | - | | 25.6670 | 208750 | 0.0 | - | | 25.6732 | 208800 | 0.0 | - | | 25.6793 | 208850 | 0.0 | - | | 25.6855 | 208900 | 0.0 | - | | 25.6916 | 208950 | 0.0 | - | | 25.6978 | 209000 | 0.0 | - | | 25.7039 | 209050 | 0.0001 | - | | 25.7101 | 209100 | 0.0 | - | | 25.7162 | 209150 | 0.0 | - | | 25.7224 | 209200 | 0.0 | - | | 25.7285 | 209250 | 0.0 | - | | 25.7347 | 209300 | 0.0 | - | | 25.7408 | 209350 | 0.0 | - | | 25.7470 | 209400 | 0.0 | - | | 25.7531 | 209450 | 0.0 | - | | 25.7593 | 209500 | 0.0 | - | | 25.7654 | 209550 | 0.0 | - | | 25.7715 | 209600 | 0.0 | - | | 25.7777 | 209650 | 0.0 | - | | 25.7838 | 209700 | 0.0 | - | | 25.7900 | 209750 | 0.0 | - | | 25.7961 | 209800 | 0.0 | - | | 25.8023 | 209850 | 0.0 | - | | 25.8084 | 209900 | 0.0 | - | | 25.8146 | 209950 | 0.0 | - | | 25.8207 | 210000 | 0.0 | - | | 25.8269 | 210050 | 0.0 | - | | 25.8330 | 210100 | 0.0 | - | | 25.8392 | 210150 | 0.0 | - | | 25.8453 | 210200 | 0.0 | - | | 25.8515 | 210250 | 0.0 | - | | 25.8576 | 210300 | 0.0 | - | | 25.8638 | 210350 | 0.0 | - | | 25.8699 | 210400 | 0.0 | - | | 25.8761 | 210450 | 0.0 | - | | 25.8822 | 210500 | 0.0 | - | | 25.8884 | 210550 | 0.0 | - | | 25.8945 | 210600 | 0.0 | - | | 25.9007 | 210650 | 0.0 | - | | 25.9068 | 210700 | 0.0 | - | | 25.9129 | 210750 | 0.0 | - | | 25.9191 | 210800 | 0.0 | - | | 25.9252 | 210850 | 0.0 | - | | 25.9314 | 210900 | 0.0 | - | | 25.9375 | 210950 | 0.0 | - | | 25.9437 | 211000 | 0.0 | - | | 25.9498 | 211050 | 0.0 | - | | 25.9560 | 211100 | 0.0 | - | | 25.9621 | 
211150 | 0.0 | - | | 25.9683 | 211200 | 0.0 | - | | 25.9744 | 211250 | 0.0 | - | | 25.9806 | 211300 | 0.0 | - | | 25.9867 | 211350 | 0.0 | - | | 25.9929 | 211400 | 0.0 | - | | 25.9990 | 211450 | 0.0 | - | | 26.0052 | 211500 | 0.0 | - | | 26.0113 | 211550 | 0.0 | - | | 26.0175 | 211600 | 0.0 | - | | 26.0236 | 211650 | 0.0 | - | | 26.0298 | 211700 | 0.0 | - | | 26.0359 | 211750 | 0.0 | - | | 26.0421 | 211800 | 0.0 | - | | 26.0482 | 211850 | 0.0 | - | | 26.0543 | 211900 | 0.0 | - | | 26.0605 | 211950 | 0.0 | - | | 26.0666 | 212000 | 0.0 | - | | 26.0728 | 212050 | 0.0 | - | | 26.0789 | 212100 | 0.0 | - | | 26.0851 | 212150 | 0.0 | - | | 26.0912 | 212200 | 0.0 | - | | 26.0974 | 212250 | 0.0 | - | | 26.1035 | 212300 | 0.0 | - | | 26.1097 | 212350 | 0.0 | - | | 26.1158 | 212400 | 0.0 | - | | 26.1220 | 212450 | 0.0 | - | | 26.1281 | 212500 | 0.0 | - | | 26.1343 | 212550 | 0.0 | - | | 26.1404 | 212600 | 0.0 | - | | 26.1466 | 212650 | 0.0 | - | | 26.1527 | 212700 | 0.0 | - | | 26.1589 | 212750 | 0.0 | - | | 26.1650 | 212800 | 0.0 | - | | 26.1712 | 212850 | 0.0 | - | | 26.1773 | 212900 | 0.0 | - | | 26.1835 | 212950 | 0.0 | - | | 26.1896 | 213000 | 0.0 | - | | 26.1957 | 213050 | 0.0 | - | | 26.2019 | 213100 | 0.0 | - | | 26.2080 | 213150 | 0.0 | - | | 26.2142 | 213200 | 0.0 | - | | 26.2203 | 213250 | 0.0 | - | | 26.2265 | 213300 | 0.0 | - | | 26.2326 | 213350 | 0.0 | - | | 26.2388 | 213400 | 0.0 | - | | 26.2449 | 213450 | 0.0 | - | | 26.2511 | 213500 | 0.0 | - | | 26.2572 | 213550 | 0.0 | - | | 26.2634 | 213600 | 0.0 | - | | 26.2695 | 213650 | 0.0 | - | | 26.2757 | 213700 | 0.0 | - | | 26.2818 | 213750 | 0.0 | - | | 26.2880 | 213800 | 0.0 | - | | 26.2941 | 213850 | 0.0 | - | | 26.3003 | 213900 | 0.0 | - | | 26.3064 | 213950 | 0.0 | - | | 26.3126 | 214000 | 0.0 | - | | 26.3187 | 214050 | 0.0 | - | | 26.3248 | 214100 | 0.0 | - | | 26.3310 | 214150 | 0.0 | - | | 26.3371 | 214200 | 0.0 | - | | 26.3433 | 214250 | 0.0 | - | | 26.3494 | 214300 | 0.0 | - | | 26.3556 | 214350 | 0.0 | 
- | | 26.3617 | 214400 | 0.0 | - | | 26.3679 | 214450 | 0.0 | - | | 26.3740 | 214500 | 0.0 | - | | 26.3802 | 214550 | 0.0 | - | | 26.3863 | 214600 | 0.0 | - | | 26.3925 | 214650 | 0.0 | - | | 26.3986 | 214700 | 0.0 | - | | 26.4048 | 214750 | 0.0 | - | | 26.4109 | 214800 | 0.0 | - | | 26.4171 | 214850 | 0.0 | - | | 26.4232 | 214900 | 0.0 | - | | 26.4294 | 214950 | 0.0 | - | | 26.4355 | 215000 | 0.0 | - | | 26.4417 | 215050 | 0.0 | - | | 26.4478 | 215100 | 0.0 | - | | 26.4540 | 215150 | 0.0 | - | | 26.4601 | 215200 | 0.0 | - | | 26.4662 | 215250 | 0.0 | - | | 26.4724 | 215300 | 0.0 | - | | 26.4785 | 215350 | 0.0 | - | | 26.4847 | 215400 | 0.0 | - | | 26.4908 | 215450 | 0.0 | - | | 26.4970 | 215500 | 0.0 | - | | 26.5031 | 215550 | 0.0 | - | | 26.5093 | 215600 | 0.0 | - | | 26.5154 | 215650 | 0.0 | - | | 26.5216 | 215700 | 0.0 | - | | 26.5277 | 215750 | 0.0 | - | | 26.5339 | 215800 | 0.0 | - | | 26.5400 | 215850 | 0.0 | - | | 26.5462 | 215900 | 0.0 | - | | 26.5523 | 215950 | 0.0 | - | | 26.5585 | 216000 | 0.0 | - | | 26.5646 | 216050 | 0.0 | - | | 26.5708 | 216100 | 0.0 | - | | 26.5769 | 216150 | 0.0 | - | | 26.5831 | 216200 | 0.0 | - | | 26.5892 | 216250 | 0.0 | - | | 26.5954 | 216300 | 0.0 | - | | 26.6015 | 216350 | 0.0 | - | | 26.6076 | 216400 | 0.0 | - | | 26.6138 | 216450 | 0.0 | - | | 26.6199 | 216500 | 0.0 | - | | 26.6261 | 216550 | 0.0 | - | | 26.6322 | 216600 | 0.0 | - | | 26.6384 | 216650 | 0.0 | - | | 26.6445 | 216700 | 0.0 | - | | 26.6507 | 216750 | 0.0 | - | | 26.6568 | 216800 | 0.0 | - | | 26.6630 | 216850 | 0.0 | - | | 26.6691 | 216900 | 0.0 | - | | 26.6753 | 216950 | 0.0 | - | | 26.6814 | 217000 | 0.0 | - | | 26.6876 | 217050 | 0.0 | - | | 26.6937 | 217100 | 0.0 | - | | 26.6999 | 217150 | 0.0 | - | | 26.7060 | 217200 | 0.0 | - | | 26.7122 | 217250 | 0.0 | - | | 26.7183 | 217300 | 0.0 | - | | 26.7245 | 217350 | 0.0 | - | | 26.7306 | 217400 | 0.0 | - | | 26.7368 | 217450 | 0.0 | - | | 26.7429 | 217500 | 0.0 | - | | 26.7490 | 217550 | 0.0 | - | | 26.7552 | 
217600 | 0.0 | - | | 26.7613 | 217650 | 0.0 | - | | 26.7675 | 217700 | 0.0 | - | | 26.7736 | 217750 | 0.0 | - | | 26.7798 | 217800 | 0.0 | - | | 26.7859 | 217850 | 0.0 | - | | 26.7921 | 217900 | 0.0 | - | | 26.7982 | 217950 | 0.0 | - | | 26.8044 | 218000 | 0.0 | - | | 26.8105 | 218050 | 0.0 | - | | 26.8167 | 218100 | 0.0 | - | | 26.8228 | 218150 | 0.0 | - | | 26.8290 | 218200 | 0.0 | - | | 26.8351 | 218250 | 0.0 | - | | 26.8413 | 218300 | 0.0 | - | | 26.8474 | 218350 | 0.0 | - | | 26.8536 | 218400 | 0.0 | - | | 26.8597 | 218450 | 0.0 | - | | 26.8659 | 218500 | 0.0 | - | | 26.8720 | 218550 | 0.0 | - | | 26.8782 | 218600 | 0.0 | - | | 26.8843 | 218650 | 0.0 | - | | 26.8904 | 218700 | 0.0 | - | | 26.8966 | 218750 | 0.0 | - | | 26.9027 | 218800 | 0.0 | - | | 26.9089 | 218850 | 0.0 | - | | 26.9150 | 218900 | 0.0 | - | | 26.9212 | 218950 | 0.0 | - | | 26.9273 | 219000 | 0.0 | - | | 26.9335 | 219050 | 0.0 | - | | 26.9396 | 219100 | 0.0 | - | | 26.9458 | 219150 | 0.0 | - | | 26.9519 | 219200 | 0.0 | - | | 26.9581 | 219250 | 0.0 | - | | 26.9642 | 219300 | 0.0 | - | | 26.9704 | 219350 | 0.0 | - | | 26.9765 | 219400 | 0.0 | - | | 26.9827 | 219450 | 0.0 | - | | 26.9888 | 219500 | 0.0 | - | | 26.9950 | 219550 | 0.0 | - | | 27.0011 | 219600 | 0.0 | - | | 27.0073 | 219650 | 0.0 | - | | 27.0134 | 219700 | 0.0 | - | | 27.0195 | 219750 | 0.0 | - | | 27.0257 | 219800 | 0.0 | - | | 27.0318 | 219850 | 0.0 | - | | 27.0380 | 219900 | 0.0 | - | | 27.0441 | 219950 | 0.0 | - | | 27.0503 | 220000 | 0.0 | - | | 27.0564 | 220050 | 0.0 | - | | 27.0626 | 220100 | 0.0 | - | | 27.0687 | 220150 | 0.0 | - | | 27.0749 | 220200 | 0.0 | - | | 27.0810 | 220250 | 0.0 | - | | 27.0872 | 220300 | 0.0 | - | | 27.0933 | 220350 | 0.0 | - | | 27.0995 | 220400 | 0.0 | - | | 27.1056 | 220450 | 0.0 | - | | 27.1118 | 220500 | 0.0 | - | | 27.1179 | 220550 | 0.0 | - | | 27.1241 | 220600 | 0.0 | - | | 27.1302 | 220650 | 0.0 | - | | 27.1364 | 220700 | 0.0 | - | | 27.1425 | 220750 | 0.0 | - | | 27.1487 | 220800 | 0.0 | 
- | | 27.1548 | 220850 | 0.0 | - | | 27.1609 | 220900 | 0.0 | - | | 27.1671 | 220950 | 0.0 | - | | 27.1732 | 221000 | 0.0 | - | | 27.1794 | 221050 | 0.0 | - | | 27.1855 | 221100 | 0.0 | - | | 27.1917 | 221150 | 0.0 | - | | 27.1978 | 221200 | 0.0 | - | | 27.2040 | 221250 | 0.0 | - | | 27.2101 | 221300 | 0.0 | - | | 27.2163 | 221350 | 0.0 | - | | 27.2224 | 221400 | 0.0 | - | | 27.2286 | 221450 | 0.0 | - | | 27.2347 | 221500 | 0.0 | - | | 27.2409 | 221550 | 0.0 | - | | 27.2470 | 221600 | 0.0 | - | | 27.2532 | 221650 | 0.0 | - | | 27.2593 | 221700 | 0.0 | - | | 27.2655 | 221750 | 0.0 | - | | 27.2716 | 221800 | 0.0 | - | | 27.2778 | 221850 | 0.0 | - | | 27.2839 | 221900 | 0.0 | - | | 27.2901 | 221950 | 0.0 | - | | 27.2962 | 222000 | 0.0 | - | | 27.3023 | 222050 | 0.0 | - | | 27.3085 | 222100 | 0.0 | - | | 27.3146 | 222150 | 0.0 | - | | 27.3208 | 222200 | 0.0 | - | | 27.3269 | 222250 | 0.0 | - | | 27.3331 | 222300 | 0.0 | - | | 27.3392 | 222350 | 0.0 | - | | 27.3454 | 222400 | 0.0 | - | | 27.3515 | 222450 | 0.0 | - | | 27.3577 | 222500 | 0.0 | - | | 27.3638 | 222550 | 0.0 | - | | 27.3700 | 222600 | 0.0 | - | | 27.3761 | 222650 | 0.0 | - | | 27.3823 | 222700 | 0.0 | - | | 27.3884 | 222750 | 0.0 | - | | 27.3946 | 222800 | 0.0 | - | | 27.4007 | 222850 | 0.0 | - | | 27.4069 | 222900 | 0.0 | - | | 27.4130 | 222950 | 0.0 | - | | 27.4192 | 223000 | 0.0 | - | | 27.4253 | 223050 | 0.0 | - | | 27.4315 | 223100 | 0.0 | - | | 27.4376 | 223150 | 0.0 | - | | 27.4437 | 223200 | 0.0 | - | | 27.4499 | 223250 | 0.0 | - | | 27.4560 | 223300 | 0.0 | - | | 27.4622 | 223350 | 0.0 | - | | 27.4683 | 223400 | 0.0 | - | | 27.4745 | 223450 | 0.0 | - | | 27.4806 | 223500 | 0.0 | - | | 27.4868 | 223550 | 0.0 | - | | 27.4929 | 223600 | 0.0 | - | | 27.4991 | 223650 | 0.0 | - | | 27.5052 | 223700 | 0.0 | - | | 27.5114 | 223750 | 0.0 | - | | 27.5175 | 223800 | 0.0 | - | | 27.5237 | 223850 | 0.0 | - | | 27.5298 | 223900 | 0.0 | - | | 27.5360 | 223950 | 0.0 | - | | 27.5421 | 224000 | 0.0 | - | | 27.5483 | 
224050 | 0.0 | - | | 27.5544 | 224100 | 0.0 | - | | 27.5606 | 224150 | 0.0 | - | | 27.5667 | 224200 | 0.0 | - | | 27.5729 | 224250 | 0.0 | - | | 27.5790 | 224300 | 0.0 | - | | 27.5851 | 224350 | 0.0 | - | | 27.5913 | 224400 | 0.0 | - | | 27.5974 | 224450 | 0.0 | - | | 27.6036 | 224500 | 0.0 | - | | 27.6097 | 224550 | 0.0 | - | | 27.6159 | 224600 | 0.0 | - | | 27.6220 | 224650 | 0.0 | - | | 27.6282 | 224700 | 0.0 | - | | 27.6343 | 224750 | 0.0 | - | | 27.6405 | 224800 | 0.0 | - | | 27.6466 | 224850 | 0.0 | - | | 27.6528 | 224900 | 0.0 | - | | 27.6589 | 224950 | 0.0 | - | | 27.6651 | 225000 | 0.0 | - | | 27.6712 | 225050 | 0.0 | - | | 27.6774 | 225100 | 0.0 | - | | 27.6835 | 225150 | 0.0 | - | | 27.6897 | 225200 | 0.0 | - | | 27.6958 | 225250 | 0.0 | - | | 27.7020 | 225300 | 0.0 | - | | 27.7081 | 225350 | 0.0 | - | | 27.7143 | 225400 | 0.0 | - | | 27.7204 | 225450 | 0.0 | - | | 27.7265 | 225500 | 0.0 | - | | 27.7327 | 225550 | 0.0 | - | | 27.7388 | 225600 | 0.0 | - | | 27.7450 | 225650 | 0.0 | - | | 27.7511 | 225700 | 0.0 | - | | 27.7573 | 225750 | 0.0 | - | | 27.7634 | 225800 | 0.0 | - | | 27.7696 | 225850 | 0.0 | - | | 27.7757 | 225900 | 0.0 | - | | 27.7819 | 225950 | 0.0 | - | | 27.7880 | 226000 | 0.0 | - | | 27.7942 | 226050 | 0.0 | - | | 27.8003 | 226100 | 0.0 | - | | 27.8065 | 226150 | 0.0 | - | | 27.8126 | 226200 | 0.0 | - | | 27.8188 | 226250 | 0.0 | - | | 27.8249 | 226300 | 0.0 | - | | 27.8311 | 226350 | 0.0 | - | | 27.8372 | 226400 | 0.0 | - | | 27.8434 | 226450 | 0.0 | - | | 27.8495 | 226500 | 0.0 | - | | 27.8556 | 226550 | 0.0 | - | | 27.8618 | 226600 | 0.0 | - | | 27.8679 | 226650 | 0.0 | - | | 27.8741 | 226700 | 0.0 | - | | 27.8802 | 226750 | 0.0 | - | | 27.8864 | 226800 | 0.0 | - | | 27.8925 | 226850 | 0.0 | - | | 27.8987 | 226900 | 0.0 | - | | 27.9048 | 226950 | 0.0 | - | | 27.9110 | 227000 | 0.0 | - | | 27.9171 | 227050 | 0.0 | - | | 27.9233 | 227100 | 0.0 | - | | 27.9294 | 227150 | 0.0 | - | | 27.9356 | 227200 | 0.0 | - | | 27.9417 | 227250 | 0.0 | 
- | | 27.9479 | 227300 | 0.0 | - | | 27.9540 | 227350 | 0.0 | - | | 27.9602 | 227400 | 0.0 | - | | 27.9663 | 227450 | 0.0 | - | | 27.9725 | 227500 | 0.0 | - | | 27.9786 | 227550 | 0.0 | - | | 27.9848 | 227600 | 0.0 | - | | 27.9909 | 227650 | 0.0 | - | | 27.9970 | 227700 | 0.0 | - | | 28.0032 | 227750 | 0.0 | - | | 28.0093 | 227800 | 0.0 | - | | 28.0155 | 227850 | 0.0 | - | | 28.0216 | 227900 | 0.0 | - | | 28.0278 | 227950 | 0.0 | - | | 28.0339 | 228000 | 0.0 | - | | 28.0401 | 228050 | 0.0 | - | | 28.0462 | 228100 | 0.0 | - | | 28.0524 | 228150 | 0.0 | - | | 28.0585 | 228200 | 0.0 | - | | 28.0647 | 228250 | 0.0 | - | | 28.0708 | 228300 | 0.0 | - | | 28.0770 | 228350 | 0.0 | - | | 28.0831 | 228400 | 0.0 | - | | 28.0893 | 228450 | 0.0 | - | | 28.0954 | 228500 | 0.0 | - | | 28.1016 | 228550 | 0.0 | - | | 28.1077 | 228600 | 0.0 | - | | 28.1139 | 228650 | 0.0 | - | | 28.1200 | 228700 | 0.0 | - | | 28.1262 | 228750 | 0.0 | - | | 28.1323 | 228800 | 0.0 | - | | 28.1384 | 228850 | 0.0 | - | | 28.1446 | 228900 | 0.0 | - | | 28.1507 | 228950 | 0.0 | - | | 28.1569 | 229000 | 0.0 | - | | 28.1630 | 229050 | 0.0 | - | | 28.1692 | 229100 | 0.0 | - | | 28.1753 | 229150 | 0.0 | - | | 28.1815 | 229200 | 0.0 | - | | 28.1876 | 229250 | 0.0 | - | | 28.1938 | 229300 | 0.0 | - | | 28.1999 | 229350 | 0.0 | - | | 28.2061 | 229400 | 0.0 | - | | 28.2122 | 229450 | 0.0 | - | | 28.2184 | 229500 | 0.0 | - | | 28.2245 | 229550 | 0.0 | - | | 28.2307 | 229600 | 0.0 | - | | 28.2368 | 229650 | 0.0 | - | | 28.2430 | 229700 | 0.0 | - | | 28.2491 | 229750 | 0.0 | - | | 28.2553 | 229800 | 0.0 | - | | 28.2614 | 229850 | 0.0 | - | | 28.2676 | 229900 | 0.0 | - | | 28.2737 | 229950 | 0.0 | - | | 28.2798 | 230000 | 0.0 | - | | 28.2860 | 230050 | 0.0 | - | | 28.2921 | 230100 | 0.0 | - | | 28.2983 | 230150 | 0.0 | - | | 28.3044 | 230200 | 0.0 | - | | 28.3106 | 230250 | 0.0 | - | | 28.3167 | 230300 | 0.0 | - | | 28.3229 | 230350 | 0.0 | - | | 28.3290 | 230400 | 0.0 | - | | 28.3352 | 230450 | 0.0 | - | | 28.3413 | 
230500 | 0.0 | - | | 28.3475 | 230550 | 0.0 | - | | 28.3536 | 230600 | 0.0 | - | | 28.3598 | 230650 | 0.0 | - | | 28.3659 | 230700 | 0.0 | - | | 28.3721 | 230750 | 0.0 | - | | 28.3782 | 230800 | 0.0 | - | | 28.3844 | 230850 | 0.0 | - | | 28.3905 | 230900 | 0.0 | - | | 28.3967 | 230950 | 0.0 | - | | 28.4028 | 231000 | 0.0 | - | | 28.4090 | 231050 | 0.0 | - | | 28.4151 | 231100 | 0.0 | - | | 28.4212 | 231150 | 0.0 | - | | 28.4274 | 231200 | 0.0 | - | | 28.4335 | 231250 | 0.0 | - | | 28.4397 | 231300 | 0.0 | - | | 28.4458 | 231350 | 0.0 | - | | 28.4520 | 231400 | 0.0 | - | | 28.4581 | 231450 | 0.0 | - | | 28.4643 | 231500 | 0.0 | - | | 28.4704 | 231550 | 0.0 | - | | 28.4766 | 231600 | 0.0 | - | | 28.4827 | 231650 | 0.0 | - | | 28.4889 | 231700 | 0.0 | - | | 28.4950 | 231750 | 0.0 | - | | 28.5012 | 231800 | 0.0 | - | | 28.5073 | 231850 | 0.0 | - | | 28.5135 | 231900 | 0.0 | - | | 28.5196 | 231950 | 0.0 | - | | 28.5258 | 232000 | 0.0 | - | | 28.5319 | 232050 | 0.0 | - | | 28.5381 | 232100 | 0.0 | - | | 28.5442 | 232150 | 0.0 | - | | 28.5504 | 232200 | 0.0 | - | | 28.5565 | 232250 | 0.0 | - | | 28.5626 | 232300 | 0.0 | - | | 28.5688 | 232350 | 0.0 | - | | 28.5749 | 232400 | 0.0 | - | | 28.5811 | 232450 | 0.0 | - | | 28.5872 | 232500 | 0.0 | - | | 28.5934 | 232550 | 0.0 | - | | 28.5995 | 232600 | 0.0 | - | | 28.6057 | 232650 | 0.0 | - | | 28.6118 | 232700 | 0.0 | - | | 28.6180 | 232750 | 0.0 | - | | 28.6241 | 232800 | 0.0 | - | | 28.6303 | 232850 | 0.0 | - | | 28.6364 | 232900 | 0.0 | - | | 28.6426 | 232950 | 0.0 | - | | 28.6487 | 233000 | 0.0 | - | | 28.6549 | 233050 | 0.0 | - | | 28.6610 | 233100 | 0.0 | - | | 28.6672 | 233150 | 0.0 | - | | 28.6733 | 233200 | 0.0 | - | | 28.6795 | 233250 | 0.0 | - | | 28.6856 | 233300 | 0.0 | - | | 28.6917 | 233350 | 0.0 | - | | 28.6979 | 233400 | 0.0 | - | | 28.7040 | 233450 | 0.0 | - | | 28.7102 | 233500 | 0.0 | - | | 28.7163 | 233550 | 0.0 | - | | 28.7225 | 233600 | 0.0 | - | | 28.7286 | 233650 | 0.0 | - | | 28.7348 | 233700 | 0.0 | 
- | | 28.7409 | 233750 | 0.0 | - | | 28.7471 | 233800 | 0.0 | - | | 28.7532 | 233850 | 0.0 | - | | 28.7594 | 233900 | 0.0 | - | | 28.7655 | 233950 | 0.0 | - | | 28.7717 | 234000 | 0.0 | - | | 28.7778 | 234050 | 0.0 | - | | 28.7840 | 234100 | 0.0 | - | | 28.7901 | 234150 | 0.0 | - | | 28.7963 | 234200 | 0.0 | - | | 28.8024 | 234250 | 0.0 | - | | 28.8086 | 234300 | 0.0 | - | | 28.8147 | 234350 | 0.0 | - | | 28.8209 | 234400 | 0.0 | - | | 28.8270 | 234450 | 0.0 | - | | 28.8331 | 234500 | 0.0 | - | | 28.8393 | 234550 | 0.0 | - | | 28.8454 | 234600 | 0.0 | - | | 28.8516 | 234650 | 0.0 | - | | 28.8577 | 234700 | 0.0 | - | | 28.8639 | 234750 | 0.0 | - | | 28.8700 | 234800 | 0.0 | - | | 28.8762 | 234850 | 0.0 | - | | 28.8823 | 234900 | 0.0 | - | | 28.8885 | 234950 | 0.0 | - | | 28.8946 | 235000 | 0.0 | - | | 28.9008 | 235050 | 0.0 | - | | 28.9069 | 235100 | 0.0 | - | | 28.9131 | 235150 | 0.0 | - | | 28.9192 | 235200 | 0.0 | - | | 28.9254 | 235250 | 0.0 | - | | 28.9315 | 235300 | 0.0 | - | | 28.9377 | 235350 | 0.0 | - | | 28.9438 | 235400 | 0.0 | - | | 28.9500 | 235450 | 0.0 | - | | 28.9561 | 235500 | 0.0 | - | | 28.9623 | 235550 | 0.0 | - | | 28.9684 | 235600 | 0.0 | - | | 28.9745 | 235650 | 0.0 | - | | 28.9807 | 235700 | 0.0 | - | | 28.9868 | 235750 | 0.0 | - | | 28.9930 | 235800 | 0.0 | - | | 28.9991 | 235850 | 0.0 | - | | 29.0053 | 235900 | 0.0 | - | | 29.0114 | 235950 | 0.0 | - | | 29.0176 | 236000 | 0.0 | - | | 29.0237 | 236050 | 0.0 | - | | 29.0299 | 236100 | 0.0 | - | | 29.0360 | 236150 | 0.0 | - | | 29.0422 | 236200 | 0.0 | - | | 29.0483 | 236250 | 0.0 | - | | 29.0545 | 236300 | 0.0 | - | | 29.0606 | 236350 | 0.0 | - | | 29.0668 | 236400 | 0.0 | - | | 29.0729 | 236450 | 0.0 | - | | 29.0791 | 236500 | 0.0 | - | | 29.0852 | 236550 | 0.0 | - | | 29.0914 | 236600 | 0.0 | - | | 29.0975 | 236650 | 0.0 | - | | 29.1037 | 236700 | 0.0 | - | | 29.1098 | 236750 | 0.0 | - | | 29.1159 | 236800 | 0.0 | - | | 29.1221 | 236850 | 0.0 | - | | 29.1282 | 236900 | 0.0 | - | | 29.1344 | 
236950 | 0.0 | - | | 29.1405 | 237000 | 0.0 | - | | 29.1467 | 237050 | 0.0 | - | | 29.1528 | 237100 | 0.0 | - | | 29.1590 | 237150 | 0.0 | - | | 29.1651 | 237200 | 0.0 | - | | 29.1713 | 237250 | 0.0 | - | | 29.1774 | 237300 | 0.0 | - | | 29.1836 | 237350 | 0.0 | - | | 29.1897 | 237400 | 0.0 | - | | 29.1959 | 237450 | 0.0 | - | | 29.2020 | 237500 | 0.0 | - | | 29.2082 | 237550 | 0.0 | - | | 29.2143 | 237600 | 0.0 | - | | 29.2205 | 237650 | 0.0 | - | | 29.2266 | 237700 | 0.0 | - | | 29.2328 | 237750 | 0.0 | - | | 29.2389 | 237800 | 0.0 | - | | 29.2451 | 237850 | 0.0 | - | | 29.2512 | 237900 | 0.0 | - | | 29.2573 | 237950 | 0.0 | - | | 29.2635 | 238000 | 0.0 | - | | 29.2696 | 238050 | 0.0 | - | | 29.2758 | 238100 | 0.0 | - | | 29.2819 | 238150 | 0.0 | - | | 29.2881 | 238200 | 0.0 | - | | 29.2942 | 238250 | 0.0 | - | | 29.3004 | 238300 | 0.0 | - | | 29.3065 | 238350 | 0.0 | - | | 29.3127 | 238400 | 0.0 | - | | 29.3188 | 238450 | 0.0 | - | | 29.3250 | 238500 | 0.0 | - | | 29.3311 | 238550 | 0.0 | - | | 29.3373 | 238600 | 0.0 | - | | 29.3434 | 238650 | 0.0 | - | | 29.3496 | 238700 | 0.0 | - | | 29.3557 | 238750 | 0.0 | - | | 29.3619 | 238800 | 0.0 | - | | 29.3680 | 238850 | 0.0 | - | | 29.3742 | 238900 | 0.0 | - | | 29.3803 | 238950 | 0.0 | - | | 29.3865 | 239000 | 0.0 | - | | 29.3926 | 239050 | 0.0 | - | | 29.3987 | 239100 | 0.0 | - | | 29.4049 | 239150 | 0.0 | - | | 29.4110 | 239200 | 0.0 | - | | 29.4172 | 239250 | 0.0 | - | | 29.4233 | 239300 | 0.0 | - | | 29.4295 | 239350 | 0.0 | - | | 29.4356 | 239400 | 0.0 | - | | 29.4418 | 239450 | 0.0 | - | | 29.4479 | 239500 | 0.0 | - | | 29.4541 | 239550 | 0.0 | - | | 29.4602 | 239600 | 0.0 | - | | 29.4664 | 239650 | 0.0 | - | | 29.4725 | 239700 | 0.0 | - | | 29.4787 | 239750 | 0.0 | - | | 29.4848 | 239800 | 0.0 | - | | 29.4910 | 239850 | 0.0 | - | | 29.4971 | 239900 | 0.0 | - | | 29.5033 | 239950 | 0.0 | - | | 29.5094 | 240000 | 0.0 | - | | 29.5156 | 240050 | 0.0 | - | | 29.5217 | 240100 | 0.0 | - | | 29.5278 | 240150 | 0.0 | 
- | | 29.5340 | 240200 | 0.0 | - | | 29.5401 | 240250 | 0.0 | - | | 29.5463 | 240300 | 0.0 | - | | 29.5524 | 240350 | 0.0 | - | | 29.5586 | 240400 | 0.0 | - | | 29.5647 | 240450 | 0.0 | - | | 29.5709 | 240500 | 0.0 | - | | 29.5770 | 240550 | 0.0 | - | | 29.5832 | 240600 | 0.0 | - | | 29.5893 | 240650 | 0.0 | - | | 29.5955 | 240700 | 0.0 | - | | 29.6016 | 240750 | 0.0 | - | | 29.6078 | 240800 | 0.0 | - | | 29.6139 | 240850 | 0.0 | - | | 29.6201 | 240900 | 0.0 | - | | 29.6262 | 240950 | 0.0 | - | | 29.6324 | 241000 | 0.0 | - | | 29.6385 | 241050 | 0.0 | - | | 29.6447 | 241100 | 0.0 | - | | 29.6508 | 241150 | 0.0 | - | | 29.6570 | 241200 | 0.0 | - | | 29.6631 | 241250 | 0.0 | - | | 29.6692 | 241300 | 0.0 | - | | 29.6754 | 241350 | 0.0 | - | | 29.6815 | 241400 | 0.0 | - | | 29.6877 | 241450 | 0.0 | - | | 29.6938 | 241500 | 0.0 | - | | 29.7000 | 241550 | 0.0 | - | | 29.7061 | 241600 | 0.0 | - | | 29.7123 | 241650 | 0.0 | - | | 29.7184 | 241700 | 0.0 | - | | 29.7246 | 241750 | 0.0 | - | | 29.7307 | 241800 | 0.0 | - | | 29.7369 | 241850 | 0.0 | - | | 29.7430 | 241900 | 0.0 | - | | 29.7492 | 241950 | 0.0 | - | | 29.7553 | 242000 | 0.0 | - | | 29.7615 | 242050 | 0.0 | - | | 29.7676 | 242100 | 0.0 | - | | 29.7738 | 242150 | 0.0 | - | | 29.7799 | 242200 | 0.0 | - | | 29.7861 | 242250 | 0.0 | - | | 29.7922 | 242300 | 0.0 | - | | 29.7984 | 242350 | 0.0 | - | | 29.8045 | 242400 | 0.0 | - | | 29.8106 | 242450 | 0.0 | - | | 29.8168 | 242500 | 0.0 | - | | 29.8229 | 242550 | 0.0 | - | | 29.8291 | 242600 | 0.0 | - | | 29.8352 | 242650 | 0.0 | - | | 29.8414 | 242700 | 0.0 | - | | 29.8475 | 242750 | 0.0 | - | | 29.8537 | 242800 | 0.0 | - | | 29.8598 | 242850 | 0.0 | - | | 29.8660 | 242900 | 0.0 | - | | 29.8721 | 242950 | 0.0 | - | | 29.8783 | 243000 | 0.0 | - | | 29.8844 | 243050 | 0.0 | - | | 29.8906 | 243100 | 0.0 | - | | 29.8967 | 243150 | 0.0 | - | | 29.9029 | 243200 | 0.0 | - | | 29.9090 | 243250 | 0.0 | - | | 29.9152 | 243300 | 0.0 | - | | 29.9213 | 243350 | 0.0 | - | | 29.9275 | 
243400 | 0.0 | - | | 29.9336 | 243450 | 0.0 | - | | 29.9398 | 243500 | 0.0 | - | | 29.9459 | 243550 | 0.0 | - | | 29.9520 | 243600 | 0.0 | - | | 29.9582 | 243650 | 0.0 | - | | 29.9643 | 243700 | 0.0 | - | | 29.9705 | 243750 | 0.0 | - | | 29.9766 | 243800 | 0.0 | - | | 29.9828 | 243850 | 0.0 | - | | 29.9889 | 243900 | 0.0 | - | | 29.9951 | 243950 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
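This card's header lies outside this excerpt, so the repository id below is a placeholder rather than the card's real id; with that caveat, a hedged sketch of inference with the SetFit stack pinned in the framework versions above (the `setfit` package is assumed installed, and the example input is taken from the card's widget metadata):

```python
def load_setfit(repo_id="YOUR-FINETUNED-SETFIT-REPO"):  # placeholder id, not this card's
    # Deferred import so the sketch stays importable without the setfit package.
    from setfit import SetFitModel
    return SetFitModel.from_pretrained(repo_id)

# usage (downloads the checkpoint on first call):
#   model = load_setfit("...")
#   print(model.predict(["이니스프리 노세범 미네랄 파우더 2개"]))
```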
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "이니스프리 노세범 미네랄 파우더 2개 (#M)쿠팡 홈>뷰티>메이크업>베이스 메이크업>파우더/파우더팩트 Coupang > 뷰티 > 로드샵 > 메이크업 > 베이스 메이크업 > 파우더/파우더팩트"}, {"text": "화장품 명품 차스킨 멀티밤 9g 미백 주름개선 2중기 LotteOn > 뷰티 > 스킨케어 > 마스크/팩 > 마사지크림/젤 LotteOn > 뷰티 > 스킨케어 > 마스크/팩 > 마사지크림/젤"}, {"text": "헤라 메이크업 픽서 (110ml) LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트"}, {"text": "포먼트 젤네일 P.4 페탈로지 × 1개 LotteOn > 뷰티 > 네일 > 네일관리기기 > 젤네일램프 LotteOn > 뷰티 > 네일 > 네일관리기기 > 젤네일램프"}, {"text": "실크테라피 3D 볼류마이징 에센스 매니아구성(150ml 6개 + 60ml 1개) MinSellAmount (#M)바디/헤어>헤어케어>헤어에센스 Gmarket > 뷰티 > 바디/헤어 > 헤어케어 > 헤어에센스"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9741772986555503, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,182
JustFrederik/jparacrawl-v3-small-ct2
JustFrederik
translation
[ "translation", "ja", "en", "license:unknown", "region:us" ]
2023-05-10T09:01:42Z
2023-05-10T09:11:24+00:00
0
0
--- language: - ja - en license: unknown pipeline_tag: translation --- https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/ <br /> ``` ct2-fairseq-converter --model_path ./small/small.pretrain.pt --data_dir ./small --source_lang ja --target_lang en --output_dir ../converted/jparacrawl-v3-small-ct2/ja-en ``` ``` ct2-fairseq-converter --model_path small.pretrain.pt --data_dir . --source_lang en --target_lang ja --output_dir ../converted/jparacrawl-v3-small-ct2/en-ja ```
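The two converter calls above differ only in language direction and paths; a small stdlib helper that reproduces them, plus a hedged sketch of loading the converted model with CTranslate2 (the `ctranslate2` package and pre-tokenized SentencePiece pieces are assumptions — neither is covered by this card):

```python
def build_convert_cmd(model_path, data_dir, src, tgt, out_dir):
    # Mirrors the ct2-fairseq-converter invocations shown above.
    return (
        f"ct2-fairseq-converter --model_path {model_path} --data_dir {data_dir} "
        f"--source_lang {src} --target_lang {tgt} --output_dir {out_dir}"
    )

def translate_pieces(pieces, model_dir="../converted/jparacrawl-v3-small-ct2/ja-en"):
    # Deferred import so the helper above stays usable without ctranslate2.
    # `pieces` must already be SentencePiece subword tokens from the JParaCrawl spm model.
    import ctranslate2
    translator = ctranslate2.Translator(model_dir)
    return translator.translate_batch([list(pieces)])[0].hypotheses[0]

if __name__ == "__main__":
    print(build_convert_cmd("./small/small.pretrain.pt", "./small",
                            "ja", "en", "../converted/jparacrawl-v3-small-ct2/ja-en"))
```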
null
Non_BioNLP
https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/ <br /> ``` ct2-fairseq-converter --model_path ./small/small.pretrain.pt --data_dir ./small --source_lang ja --target_lang en --output_dir ../converted/jparacrawl-v3-small-ct2/ja-en ``` ``` ct2-fairseq-converter --model_path small.pretrain.pt --data_dir . --source_lang en --target_lang ja --output_dir ../converted/jparacrawl-v3-small-ct2/en-ja ```
{"language": ["ja", "en"], "license": "unknown", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
46,183
henryscheible/crowspairs_trainer_roberta-large_finetuned
henryscheible
text-classification
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:crows_pairs", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-16T16:59:41Z
2023-01-16T17:20:21+00:00
112
0
--- datasets: - crows_pairs license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: crowspairs_trainer_roberta-large_finetuned results: - task: type: text-classification name: Text Classification dataset: name: crows_pairs type: crows_pairs config: crows_pairs split: test args: crows_pairs metrics: - type: accuracy value: 0.4966887417218543 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # crowspairs_trainer_roberta-large_finetuned This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the crows_pairs dataset. It achieves the following results on the evaluation set: - Loss: 0.6933 - Accuracy: 0.4967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.53 | 20 | 0.6942 | 0.5033 | | No log | 1.05 | 40 | 0.6943 | 0.4967 | | No log | 1.58 | 60 | 0.7100 | 0.4967 | | No log | 2.11 | 80 | 0.6937 | 0.4967 | | No log | 2.63 | 100 | 0.6937 | 0.4967 | | No log | 3.16 | 120 | 0.6936 | 0.4967 | | No log | 3.68 | 140 | 0.6931 | 0.5033 | | No log | 4.21 | 160 | 0.6938 | 0.4967 | | No log | 4.74 | 180 | 0.6933 | 0.4967 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
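The card's usage sections are unfilled; a hedged sketch of querying the fine-tuned checkpoint through the standard `transformers` text-classification pipeline (availability of the package and checkpoint, and the meaning of the emitted labels, are assumptions — the card does not document them):

```python
def load_classifier(model_id="henryscheible/crowspairs_trainer_roberta-large_finetuned"):
    # Deferred import: only needed at inference time.
    from transformers import pipeline
    return pipeline("text-classification", model=model_id)

# usage (downloads the checkpoint on first call):
#   clf = load_classifier()
#   print(clf("Sentence to score."))
```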
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # crowspairs_trainer_roberta-large_finetuned This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the crows_pairs dataset. It achieves the following results on the evaluation set: - Loss: 0.6933 - Accuracy: 0.4967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.53 | 20 | 0.6942 | 0.5033 | | No log | 1.05 | 40 | 0.6943 | 0.4967 | | No log | 1.58 | 60 | 0.7100 | 0.4967 | | No log | 2.11 | 80 | 0.6937 | 0.4967 | | No log | 2.63 | 100 | 0.6937 | 0.4967 | | No log | 3.16 | 120 | 0.6936 | 0.4967 | | No log | 3.68 | 140 | 0.6931 | 0.5033 | | No log | 4.21 | 160 | 0.6938 | 0.4967 | | No log | 4.74 | 180 | 0.6933 | 0.4967 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
{"datasets": ["crows_pairs"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "crowspairs_trainer_roberta-large_finetuned", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "crows_pairs", "type": "crows_pairs", "config": "crows_pairs", "split": "test", "args": "crows_pairs"}, "metrics": [{"type": "accuracy", "value": 0.4966887417218543, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,184
a-mannion/bioroberta-es-umls-kgi
a-mannion
fill-mask
[ "transformers", "pytorch", "roberta", "fill-mask", "medical", "es", "arxiv:2307.11170", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-13T16:39:38Z
2025-04-07T12:15:21+00:00
13
0
--- language: - es license: apache-2.0 tags: - medical --- # BioRoBERTa-ES-UMLS-KGI <!-- Provide a quick summary of what the model is/does. --> This is a BERT encoder trained on the Spanish-language section of the European Clinical Case corpus as well as the UMLS metathesaurus knowledge graph, starting from the `BioRoBERTa-ES` checkpoint, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/). The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the metathesaurus (see our [Github repo](https://github.com/ap-mannion/bertify-umls) for more details). ## Model Details This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives. The idea behind this framework was to try to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining in the cross-entropy-based learning paradigm. - **Developed by:** Aidan Mannion - **Funded by:** GENCI-IDRIS grant AD011013535R1 - **Model type:** RoBERTa - **Language(s) (NLP):** Spanish For further details on the model architecture, training objectives, hardware \& software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [ArXiv paper](https://arxiv.org/abs/2307.11170). 
### UMLS-KGI Models | **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** | |:--------------------------:|:--------------------------------------------------------------------------:|:----------------:|:---------------------:|:---------------------------------------------------------------------------------------------:|:----------------------------:| | UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 | | UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 | | UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 | | UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 | | DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 | | PubMedBERT-UMLS-KGI | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi) | 174MB | BERT | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract | 19,008 | | BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 | ### Direct/Downstream Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is intended for use in experimental clinical/biomedical NLP work, either as a part of a larger system requiring text encoding or fine-tuned on a specific downstream task requiring clinical language modelling. It has **not** been sufficiently tested for accuracy, robustness and bias to be used in production settings. 
### Out-of-Scope Use Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suitable for use on out-of-domain NLP tasks, and we recommend that it only be used for processing clinical text. ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> - [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618) - [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html) #### Training Hyperparameters - sequence length: 256 - learning rate 7.5e-5 - linear learning rate schedule with 10,770 warmup steps - effective batch size 1500 (15 sequences per batch x 100 gradient accumulation steps) - MLM masking probability 0.15 **Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] ## Citation [BibTeX] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> ``` @inproceedings{mannion-etal-2023-umls, title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition", author = "Mannion, Aidan and Schwab, Didier and Goeuriot, Lorraine", booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.clinicalnlp-1.35", pages = "312--322", abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. 
All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.", } ``` ``` @misc{mannion2023umlskgibert, title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition}, author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Geouriot}, year={2023}, eprint={2307.11170}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
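The hyperparameters reported above (peak learning rate 7.5e-5, 10,770 warmup steps, effective batch 15 × 100) fully determine the warmup side of the schedule; a stdlib-only sketch of the linear warmup-then-decay curve, with the total step count an illustrative assumption since the card does not state it:

```python
PEAK_LR = 7.5e-5
WARMUP_STEPS = 10_770
EFFECTIVE_BATCH = 15 * 100  # 15 sequences/batch x 100 gradient-accumulation steps

def linear_lr(step: int, total_steps: int = 107_700) -> float:
    """Linear warmup to PEAK_LR, then linear decay to zero.

    total_steps is illustrative only -- the card does not report it.
    """
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return max(0.0, PEAK_LR * (total_steps - step) / (total_steps - WARMUP_STEPS))

if __name__ == "__main__":
    for s in (0, 5_385, 10_770, 59_235, 107_700):
        print(f"step {s:>7}: lr = {linear_lr(s):.3e}")
```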
null
BioNLP
# BioRoBERTa-ES-UMLS-KGI <!-- Provide a quick summary of what the model is/does. --> This is a BERT encoder trained on the Spanish-language section of the European Clinical Case corpus as well as the UMLS metathesaurus knowledge graph, starting from the `BioRoBERTa-ES` checkpoint, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/). The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the metathesaurus (see our [Github repo](https://github.com/ap-mannion/bertify-umls) for more details). ## Model Details This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives. The idea behind this framework was to try to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining in the cross-entropy-based learning paradigm. - **Developed by:** Aidan Mannion - **Funded by:** GENCI-IDRIS grant AD011013535R1 - **Model type:** RoBERTa - **Language(s) (NLP):** Spanish For further details on the model architecture, training objectives, hardware \& software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [ArXiv paper](https://arxiv.org/abs/2307.11170). 
### UMLS-KGI Models | **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** | |:--------------------------:|:--------------------------------------------------------------------------:|:----------------:|:---------------------:|:---------------------------------------------------------------------------------------------:|:----------------------------:| | UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 | | UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 | | UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 | | UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 | | DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 | | PubMedBERT-UMLS-KGI | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi) | 174MB | BERT | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract | 19,008 | | BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 | ### Direct/Downstream Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is intended for use in experimental clinical/biomedical NLP work, either as a part of a larger system requiring text encoding or fine-tuned on a specific downstream task requiring clinical language modelling. It has **not** been sufficiently tested for accuracy, robustness and bias to be used in production settings. 
### Out-of-Scope Use Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suitable for use on out-of-domain NLP tasks, and we recommend that it only be used for processing clinical text. ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> - [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618) - [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html) #### Training Hyperparameters - sequence length: 256 - learning rate 7.5e-5 - linear learning rate schedule with 10,770 warmup steps - effective batch size 1500 (15 sequences per batch x 100 gradient accumulation steps) - MLM masking probability 0.15 **Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] ## Citation [BibTeX] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> ``` @inproceedings{mannion-etal-2023-umls, title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition", author = "Mannion, Aidan and Schwab, Didier and Goeuriot, Lorraine", booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.clinicalnlp-1.35", pages = "312--322", abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. 
All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.", } ``` ``` @misc{mannion2023umlskgibert, title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition}, author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Geouriot}, year={2023}, eprint={2307.11170}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": ["es"], "license": "apache-2.0", "tags": ["medical"]}
task
[ "NAMED_ENTITY_RECOGNITION", "QUESTION_ANSWERING" ]
46,185
mrapacz/interlinear-pl-mt5-large-emb-auto-diacritics-ob
mrapacz
text2text-generation
[ "transformers", "pytorch", "morph-t5-auto", "text2text-generation", "pl", "dataset:mrapacz/greek-interlinear-translations", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-08T12:27:57Z
2025-02-21T21:30:48+00:00
14
0
--- base_model: - mT5-large datasets: - mrapacz/greek-interlinear-translations language: - pl library_name: transformers license: cc-by-sa-4.0 metrics: - bleu --- # Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5AutoForConditionalGeneration - **Base Model:** mT5-large - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** OB (Oblubienica) - **Text Preprocessing:** Diacritics - **Morphological Encoding:** emb-auto ### Model Performance - **BLEU Score:** 58.44 - **SemScore:** 0.93 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. 
To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5AutoForConditionalGeneration, MorphT5Tokenizer >>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει'] >>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-auto-diacritics-ob") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5AutoForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-auto-diacritics-ob") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = 
"Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
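The decoded output above interleaves target words block by block, rejoined with " | " separators. As a minimal, framework-free sketch of what this word-level alignment looks like when rendered as interlinear text, the snippet below stacks the Greek tokens from the usage example over their Polish glosses; the padding helper is purely illustrative and not part of the morpht5 package:

```python
# Illustrative only: render a word-aligned (interlinear) translation by
# padding each source/target pair to a common column width.
source = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε']
target = ['Mówi', 'mu', '-', 'Jezus', 'wstawaj']

def interlinear(src, tgt):
    """Stack source tokens over their glosses in aligned columns."""
    widths = [max(len(s), len(t)) for s, t in zip(src, tgt)]
    top = '  '.join(s.ljust(w) for s, w in zip(src, widths))
    bottom = '  '.join(t.ljust(w) for t, w in zip(tgt, widths))
    return top + '\n' + bottom

print(interlinear(source, target))
```

Each Polish word lands directly beneath its Greek counterpart, which is exactly the alignment property the model is trained to preserve.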
null
Non_BioNLP
# Model Card for Ancient Greek to Polish Interlinear Translation Model This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MorphT5AutoForConditionalGeneration - **Base Model:** mT5-large - **Tokenizer:** mT5 - **Language(s):** Ancient Greek (source) → Polish (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** OB (Oblubienica) - **Text Preprocessing:** Diacritics - **Morphological Encoding:** emb-auto ### Model Performance - **BLEU Score:** 58.44 - **SemScore:** 0.93 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example > **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. 
To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package: > ```bash > pip install morpht5 > ``` ```python >>> from morpht5 import MorphT5AutoForConditionalGeneration, MorphT5Tokenizer >>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει'] >>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg'] >>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-auto-diacritics-ob") >>> inputs = tokenizer( text=text, morph_tags=tags, return_tensors="pt" ) >>> model = MorphT5AutoForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-large-emb-auto-diacritics-ob") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True) >>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ") >>> decoded 'Mówi | mu | - | Jezus | wstawaj | weź | - | matę | swoją | i | chodź' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = 
"Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
{"base_model": ["mT5-large"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["pl"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
task
[ "TRANSLATION" ]
46,186
Chhabi/distilbert-base-uncased-finetuned-emotion
Chhabi
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-26T06:01:37Z
2024-06-01T07:37:48+00:00
19
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.94 name: Accuracy - type: f1 value: 0.9401790526544406 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1364 - Accuracy: 0.94 - F1: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7715 | 1.0 | 250 | 0.2577 | 0.918 | 0.9183 | | 0.2028 | 2.0 | 500 | 0.1631 | 0.932 | 0.9317 | | 0.1359 | 3.0 | 750 | 0.1482 | 0.9345 | 0.9350 | | 0.1076 | 4.0 | 1000 | 0.1352 | 0.94 | 0.9400 | | 0.091 | 5.0 | 1250 | 0.1364 | 0.94 | 0.9402 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
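The hyperparameters above select `lr_scheduler_type: linear`, and the training log shows 250 optimizer steps per epoch over 5 epochs (1250 steps total). A minimal sketch of that schedule, assuming zero warmup steps (the card lists none), looks like this:

```python
# Sketch of the linear learning-rate schedule implied by the hyperparameters
# above: decay from the base LR (2e-5) to 0 over the 1250 total steps.
BASE_LR = 2e-5
TOTAL_STEPS = 1250  # 250 steps/epoch x 5 epochs, from the training log

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

print(linear_lr(0))      # 2e-05 at the start of training
print(linear_lr(625))    # 1e-05 halfway through
print(linear_lr(1250))   # 0.0 at the final step
```

This is why the later epochs in the results table show progressively smaller changes in training loss: the step size shrinks toward zero.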
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1364 - Accuracy: 0.94 - F1: 0.9402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7715 | 1.0 | 250 | 0.2577 | 0.918 | 0.9183 | | 0.2028 | 2.0 | 500 | 0.1631 | 0.932 | 0.9317 | | 0.1359 | 3.0 | 750 | 0.1482 | 0.9345 | 0.9350 | | 0.1076 | 4.0 | 1000 | 0.1352 | 0.94 | 0.9400 | | 0.091 | 5.0 | 1250 | 0.1364 | 0.94 | 0.9402 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.94, "name": "Accuracy"}, {"type": "f1", "value": 0.9401790526544406, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,187
mrapacz/interlinear-en-greta-baseline-normalized-unused
mrapacz
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:mrapacz/greek-interlinear-translations", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-02-08T12:29:27Z
2025-02-21T21:30:34+00:00
10
0
--- base_model: - GreTa datasets: - mrapacz/greek-interlinear-translations language: - en library_name: transformers license: cc-by-sa-4.0 metrics: - bleu --- # Model Card for Ancient Greek to English Interlinear Translation Model This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MT5ForConditionalGeneration - **Base Model:** GreTa - **Tokenizer:** GreTa - **Language(s):** Ancient Greek (source) → English (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** Unused - **Text Preprocessing:** Normalized - **Morphological Encoding:** baseline (text only, no morphological tags) ### Model Performance - **BLEU Score:** 16.77 - **SemScore:** 0.56 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example ```python >>> from transformers import T5ForConditionalGeneration, T5TokenizerFast >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> text = " <extra_id_0>".join(text) >>> tokenizer = T5TokenizerFast.from_pretrained("mrapacz/interlinear-en-greta-baseline-normalized-unused") >>> inputs = tokenizer( text=text, return_tensors="pt" ) >>> model = T5ForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-greta-baseline-normalized-unused") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'he says to him - jesus shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall the ' 
``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. 
PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
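The BLEU score reported above (16.77) combines modified n-gram precisions with a brevity penalty. The sketch below is an unsmoothed, sentence-level illustration of the metric only — the project's actual evaluation setup may well use a smoothed, corpus-level implementation, and the example sentences are invented:

```python
# Illustrative, unsmoothed sentence-level BLEU against a single reference.
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0  # without smoothing, any empty n-gram overlap zeroes BLEU
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

hyp = "he says to him - jesus arise take the mat of you and walk".split()
ref = "he says to him - jesus arise take up the mat of you and walk".split()
print(round(sentence_bleu(hyp, ref), 4))
```

The degenerate repetition in the sample output above ("shall shall shall …") is exactly the kind of candidate this metric punishes hard: repeated tokens exhaust the clipped reference counts, so higher-order precisions collapse.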
null
Non_BioNLP
# Model Card for Ancient Greek to English Interlinear Translation Model This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts. You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation). ## Model Details ### Model Description - **Developed By:** Maciej Rapacz, AGH University of Kraków - **Model Type:** MT5ForConditionalGeneration - **Base Model:** GreTa - **Tokenizer:** GreTa - **Language(s):** Ancient Greek (source) → English (target) - **License:** CC BY-NC-SA 4.0 - **Tag Set:** Unused - **Text Preprocessing:** Normalized - **Morphological Encoding:** baseline (text only, no morphological tags) ### Model Performance - **BLEU Score:** 16.77 - **SemScore:** 0.56 ### Model Sources - **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation - **Paper:** https://aclanthology.org/2025.loreslm-1.11/ ## Usage Example ```python >>> from transformers import T5ForConditionalGeneration, T5TokenizerFast >>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει'] >>> text = " <extra_id_0>".join(text) >>> tokenizer = T5TokenizerFast.from_pretrained("mrapacz/interlinear-en-greta-baseline-normalized-unused") >>> inputs = tokenizer( text=text, return_tensors="pt" ) >>> model = T5ForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-greta-baseline-normalized-unused") >>> outputs = model.generate( **inputs, max_new_tokens=100, early_stopping=True, ) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'he says to him - jesus shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall shall the ' ``` ## Citation If you use this model, please cite the following paper: ``` @inproceedings{rapacz-smywinski-pohl-2025-low, title = "Low-Resource Interlinear 
Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek", author = "Rapacz, Maciej and Smywi{\'n}ski-Pohl, Aleksander", editor = "Hettiarachchi, Hansi and Ranasinghe, Tharindu and Rayson, Paul and Mitkov, Ruslan and Gaber, Mohamed and Premasiri, Damith and Tan, Fiona Anting and Uyangodage, Lasitha", booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages", month = jan, year = "2025", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.loreslm-1.11/", pages = "145--165", abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text's syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. 
Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios." } ```
{"base_model": ["GreTa"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
task
[ "TRANSLATION" ]
46,188
calibration-tuning/Llama-2-13b-chat-hf-ct-oe
calibration-tuning
null
[ "transformers", "safetensors", "peft", "arxiv:2406.08391", "dataset:calibration-tuning/Llama-2-13b-chat-hf-20k-oe", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:finetune:meta-llama/Llama-2-13b-chat-hf", "license:llama2", "endpoints_compatible", "region:us" ]
2024-03-19T04:32:29Z
2024-06-13T15:13:01+00:00
0
0
--- base_model: meta-llama/Llama-2-13b-chat-hf datasets: - calibration-tuning/Llama-2-13b-chat-hf-20k-oe library_name: transformers license: llama2 tags: - transformers - peft - arxiv:2406.08391 --- # Model Card **Llama 2 13B Chat CT-OE** is a fine-tuned [Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model that provides well-calibrated confidence estimates for open-ended question answering. The model is fine-tuned (calibration-tuned) using a [dataset](https://huggingface.co/datasets/calibration-tuning/Llama-2-13b-chat-hf-20k-oe) of *open-ended* generations from `meta-llama/Llama-2-13b-chat-hf`, labeled for correctness. At test/inference time, the probability of correctness defines the confidence of the model in its answer. For full details, please see our [paper](https://arxiv.org/abs/2406.08391) and supporting [code](https://github.com/activatedgeek/calibration-tuning). **Other Models**: We also release a broader collection of [Open-Ended CT Models](https://huggingface.co/collections/calibration-tuning/open-ended-ct-models-66043b12c7902115c826a20e). ## Usage This adapter model is meant to be used on top of `meta-llama/Llama-2-13b-chat-hf` model generations. The confidence estimation pipeline follows these steps: 1. Load base model and PEFT adapter. 2. Disable adapter and generate answer. 3. Enable adapter and generate confidence. All standard guidelines for the base model's generation apply. For a complete example, see [play.py](https://github.com/activatedgeek/calibration-tuning/blob/main/experiments/play.py) at the supporting code repository. **NOTE**: Using the adapter for generations may hurt downstream task accuracy and confidence estimates. We recommend using the adapter to estimate *only* confidence. ## License The model is released under the original model's Llama 2 Community License Agreement.
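Step 3 above boils down to converting the adapter's scores over a correctness judgment into a scalar confidence. The snippet below is a conceptual, framework-free sketch of that final conversion only: the logit values are invented, and the real pipeline (see `play.py` in the supporting repository) obtains them from the adapter-enabled forward pass over the generated answer.

```python
# Conceptual sketch only: softmax over hypothetical "correct"/"incorrect"
# logits yields the probability of correctness, i.e. the model's confidence.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

correct_logit, incorrect_logit = 2.2, -0.4   # hypothetical adapter outputs
confidence = softmax([correct_logit, incorrect_logit])[0]
print(f"confidence in the generated answer: {confidence:.3f}")
```

A well-calibrated model is one where, among answers assigned confidence around 0.9, roughly 90% are actually correct — which is the property the calibration-tuning procedure targets.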
null
Non_BioNLP
# Model Card **Llama 2 13B Chat CT-OE** is a fine-tuned [Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model that provides well-calibrated confidence estimates for open-ended question answering. The model is fine-tuned (calibration-tuned) using a [dataset](https://huggingface.co/datasets/calibration-tuning/Llama-2-13b-chat-hf-20k-oe) of *open-ended* generations from `meta-llama/Llama-2-13b-chat-hf`, labeled for correctness. At test/inference time, the probability of correctness defines the confidence of the model in its answer. For full details, please see our [paper](https://arxiv.org/abs/2406.08391) and supporting [code](https://github.com/activatedgeek/calibration-tuning). **Other Models**: We also release a broader collection of [Open-Ended CT Models](https://huggingface.co/collections/calibration-tuning/open-ended-ct-models-66043b12c7902115c826a20e). ## Usage This adapter model is meant to be used on top of `meta-llama/Llama-2-13b-chat-hf` model generations. The confidence estimation pipeline follows these steps: 1. Load base model and PEFT adapter. 2. Disable adapter and generate answer. 3. Enable adapter and generate confidence. All standard guidelines for the base model's generation apply. For a complete example, see [play.py](https://github.com/activatedgeek/calibration-tuning/blob/main/experiments/play.py) at the supporting code repository. **NOTE**: Using the adapter for generations may hurt downstream task accuracy and confidence estimates. We recommend using the adapter to estimate *only* confidence. ## License The model is released under the original model's Llama 2 Community License Agreement.
{"base_model": "meta-llama/Llama-2-13b-chat-hf", "datasets": ["calibration-tuning/Llama-2-13b-chat-hf-20k-oe"], "library_name": "transformers", "license": "llama2", "tags": ["transformers", "peft", "arxiv:2406.08391"]}
task
[ "QUESTION_ANSWERING" ]
46,189
google/paligemma2-28b-pt-224
google
image-text-to-text
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:2407.07726", "arxiv:2408.00118", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2412.03555", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:1908.04913", "arxiv:1906.02467", "arxiv:2203.10244", "arxiv:2205.12522", "arxiv:2104.12756", "arxiv:1608.00272", "arxiv:1511.02283", "arxiv:1905.13648", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:1810.12440", "arxiv:1904.03493", "arxiv:2010.04295", "arxiv:1511.09207", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-21T22:42:42Z
2024-12-05T12:43:06+00:00
37
7
--- library_name: transformers license: gemma pipeline_tag: image-text-to-text extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma 2 model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 2 28B weights, pre-trained with 224*224 input images and 128 token input/output text sequences. The model is available in the `bfloat16` format for fine-tuning. **Resources and technical documentation:** * [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) **Terms of Use:** [Terms](https://ai.google.dev/gemma/terms) **Authors:** Google ## Model information ### Model summary PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726) vision-language model (VLM) which incorporates the capabilities of the [Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma 2 is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929). 
The text decoder is initialized from [Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). Similar to the original PaliGemma model, PaliGemma 2 is trained following the [PaLI-3](https://arxiv.org/abs/2310.09199) recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. #### Citation ```none @article{ title={PaliGemma 2: A Family of Versatile VLMs for Transfer}, author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai}, year={2024}, journal={arXiv preprint arXiv:2412.03555} } ``` ### Model data #### Pre-train datasets PaliGemma 2 is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, and multilinguality. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. 
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the [Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2). #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using the [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. 
* **Additional methods:** Filtering based on content quality and safety in line with our policies and practices.

[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference

## Use in Transformers

The following snippet uses model `google/paligemma2-28b-pt-224` for reference purposes. It is a base model, and it is recommended to fine-tune it on a downstream task before use. Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb) that showcases fine-tuning PaliGemma 2.

```python
from transformers import (
    PaliGemmaProcessor,
    PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch

model_id = "google/paligemma2-28b-pt-224"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)

model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

# Leaving the prompt blank for pre-trained models
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```

## Implementation information

### Hardware

PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).
### Software Training was completed using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma 2 fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. 
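To read the table that follows, it can help to look at the 224→448 resolution deltas directly. The sketch below is illustrative only: the scores are hand-transcribed from the 3B columns of the results table for four representative tasks.

```python
# Illustrative only: quantify which tasks gain from higher input resolution,
# using 3B-model scores hand-transcribed from the results table below.
scores_224_3b = {"AI2D": 74.7, "DocVQA (val)": 39.9, "TextVQA (val)": 59.6, "GQA": 66.2}
scores_448_3b = {"AI2D": 76.0, "DocVQA (val)": 73.6, "TextVQA (val)": 75.2, "GQA": 68.1}

deltas = {task: round(scores_448_3b[task] - scores_224_3b[task], 1)
          for task in scores_224_3b}

# Text-heavy tasks (DocVQA, TextVQA) benefit far more from 448px input
# than diagram or scene-understanding tasks (AI2D, GQA).
for task, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
    print(f"{task}: +{delta}")
```

As these deltas show, document and scene-text tasks are the main beneficiaries of the higher-resolution checkpoints.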
#### PaliGemma 2 results by model resolution and size | Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B | |-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:| | [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 | | [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 | | [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 | | [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - | | [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - | | [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 | | [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 | | [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 | | [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 | | [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 | | [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 | | [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 | | [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 | | [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 | | [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 | | [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - | | [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - | | [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - | | [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 | | [NoCaps][nocaps] | 123.1 | 126.3 | 127.1 | 123.5 | 126.9 | 127.0 | | [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 | | [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 | | [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 | | [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 | | [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 | | [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 | | [RefCOCO][refcoco] 
(testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 |
| [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 |
| [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 |
| [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 |
| [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 |
| [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 |
| [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 |
| [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 |
| [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 |
| [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 |
| [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 |
| [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 |
| [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 |
| [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 |
| [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 |
| [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - |
| [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 |
| [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 |
| [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 |
| [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 |
| [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 |
| [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 60.4 | 62.6 | 62.1 |

#### Additional Benchmarks

**[ICDAR 2015 Incidental][icdar2015-inc]**

| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 81.88 | 70.73 | 75.9 |

**[Total-Text][total-text]**

| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 73.8 | 74.54 | 74.17 |

**[FinTabNet][fintabnet]**

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 |

**[PubTabNet][pubtabnet]**

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 |

**[GrandStaff][grandstaff]**

| Model | CER | LER | SER |
|-----------------|-----|-----|-----|
| PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |

**[PubChem][pubchem]**

* PaliGemma 2 3B, Full Match: 94.8

**[DOCCI][docci]**

| Model | avg#char | avg#sent | NES % |
|-----------------|----------|----------|---------|
| PaliGemma 2 3B | 529 | 7.74 | 28.42 |
| PaliGemma 2 10B | 521 | 7.45 | 20.27 |

- *avg#char*: Average number of characters
- *avg#sent*: Average number of sentences
- *NES*: Non-entailment sentences

**[MIMIC-CXR][mimic-cxr]**

| Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 |
|-----------------|-------|-------|---------|-------------|
| PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% |
| PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% |

**[Visual Spatial Reasoning][vsr]**

| Model | VSR zeroshot split (test) | VSR random split (test) |
|-----------------|---------------------------|--------------------------|
| PaliGemma 2 3B | 0.75 | 0.82 |
| PaliGemma 2 10B | 0.80 | 0.87 |

## Ethics and safety

### Evaluation approach

Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including:

* Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. <table> <tr> <col> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <th>Metric</th> <th colspan="3" scope="colgroup">Perceived gender</th> <th colspan="3" scope="colgroup">Ethnicity</th> <th colspan="3" scope="colgroup">Age group</th> </tr> <tr> <th>Model size</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Maximum</th> </tr> <tr> <td>Toxicity</td> <td>0.14%</td> <td>0.15%</td> <td>0.19%</td> <td>0.29%</td> <td>0.39%</td> <td>0.39%</td> <td>0.26%</td> <td>0.18%</td> <td>0.32%</td> </tr> <tr> <td>Identity Attack</td> <td>0.04%</td> <td>0.02%</td> <td>0.02%</td> <td>0.13%</td> <td>0.06%</td> <td>0.06%</td> <td>0.06%</td> <td>0.03%</td> <td>0.06%</td> </tr> <tr> <td>Insult</td> <td>0.17%</td> <td>0.25%</td> <td>0.17%</td> <td>0.37%</td> <td>0.52%</td> <td>0.52%</td> <td>0.27%</td> <td>0.39%</td> <td>0.24%</td> </tr> <tr> 
<td>Threat</td> <td>0.55%</td> <td>0.43%</td> <td>0.57%</td> <td>0.83%</td> <td>0.48%</td> <td>0.48%</td> <td>0.64%</td> <td>0.43%</td> <td>0.64%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Median</th> </tr> <tr> <td>Toxicity</td> <td>0.13%</td> <td>0.10%</td> <td>0.18%</td> <td>0.07%</td> <td>0.07%</td> <td>0.14%</td> <td>0.12%</td> <td>0.08%</td> <td>0.12%</td> </tr> <tr> <td>Identity Attack</td> <td>0.02%</td> <td>0.01%</td> <td>0.02%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.15%</td> <td>0.23%</td> <td>0.14%</td> <td>0.14%</td> <td>0.17%</td> <td>0.13%</td> <td>0.09%</td> <td>0.18%</td> <td>0.16%</td> </tr> <tr> <td>Threat</td> <td>0.35%</td> <td>0.27%</td> <td>0.41%</td> <td>0.28%</td> <td>0.19%</td> <td>0.42%</td> <td>0.27%</td> <td>0.31%</td> <td>0.40%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. 
* The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, and describing UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks.

Vision-language research:

* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field.

### Ethical considerations and risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, with input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
  * VLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

* **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content safety are essential.
Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Limitations

* Most limitations inherited from the underlying Gemma 2 models still apply:
  * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
  * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
  * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations.
* PaliGemma 2 was designed first and foremost to serve as a general pre-trained model for fine-tuning to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for general purpose use.
* PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round of image and text input.
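Relatedly, when fine-tuned for detection, PaliGemma-family models encode the bounding-box outputs mentioned under intended usage as `<locXXXX>` tokens — four per box, in 0–1023 bins normalized to the image size, following the `big_vision` convention. Below is a hedged sketch of decoding such output; the helper and the example string are illustrative, not part of this card:

```python
import re

def decode_boxes(text: str, width: int, height: int):
    """Parse '<loc####><loc####><loc####><loc####> label' detection output.
    The four tokens are (y_min, x_min, y_max, x_max) in [0, 1023] bins,
    scaled here to pixel coordinates of the input image."""
    pattern = re.compile(r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([\w ]+)")
    boxes = []
    for y0, x0, y1, x1, label in pattern.findall(text):
        boxes.append({
            "label": label.strip(),
            "box": (int(y0) / 1024 * height, int(x0) / 1024 * width,
                    int(y1) / 1024 * height, int(x1) / 1024 * width),
        })
    return boxes

# Hypothetical output from a fine-tuned checkpoint for a 'detect car' prompt:
print(decode_boxes("<loc0256><loc0128><loc0768><loc0896> car", width=1024, height=1024))
```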
[ai2d]: https://allenai.org/data/diagrams [aokvqa-da]: https://allenai.org/project/a-okvqa/home [aokvqa-mc]: https://allenai.org/project/a-okvqa/home [anet-cap]: https://paperswithcode.com/dataset/activitynet-captions [anet-qa]: https://arxiv.org/abs/1906.02467 [chartqa]: https://arxiv.org/abs/2203.10244 [coco-35l]: https://arxiv.org/pdf/2205.12522 [coco-cap]: https://cocodataset.org/#home [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ [docvqa]: https://www.docvqa.org/ [gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html [info-vqa]: https://arxiv.org/abs/2104.12756 [marvl]: https://marvl-challenge.github.io/ [msrvtt]: https://paperswithcode.com/dataset/msr-vtt [msvd-qa]: https://paperswithcode.com/dataset/msvd-qa [nlvr2]: https://lil.nlp.cornell.edu/nlvr/ [nocaps]: https://nocaps.org/ [ocr-vqa]: https://ocr-vqa.github.io/ [okvqa]: https://okvqa.allenai.org/ [refcoco]: https://arxiv.org/abs/1608.00272 [refcoco+]: https://aclanthology.org/D14-1086 [refcocog]: https://arxiv.org/abs/1511.02283 [rsvqa-hr]: https://zenodo.org/records/6344367 [rsvqa-lr]: https://zenodo.org/records/6344334 [st-vqa]: https://arxiv.org/abs/1905.13648 [scicap]: https://arxiv.org/abs/2110.11624 [scienceqa]: https://scienceqa.github.io/ [screen2words]: https://arxiv.org/abs/2108.03353 [tallyqa]: https://arxiv.org/abs/1810.12440 [textcaps]: https://textvqa.org/textcaps/ [textvqa]: https://textvqa.org/ [vatex]: https://arxiv.org/abs/1904.03493 [vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/ [widgetcap]: https://arxiv.org/abs/2010.04295 [vqav2]: https://visualqa.org/index.html [xgqa]: https://aclanthology.org/2022.findings-acl.196/ [xm3600]: https://arxiv.org/pdf/2205.12522 [icdar2015-inc]: https://arxiv.org/abs/1511.09207 [total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene [fintabnet]: https://developer.ibm.com/data/fintabnet/ [pubtabnet]: 
https://paperswithcode.com/dataset/pubtabnet [grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z [pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/ [docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/ [mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr [vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
# PaliGemma 2 model card

**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)

Transformers PaliGemma 2 28B weights, pre-trained with 224×224 input images and 128 token input/output text sequences. The model is available in the `bfloat16` format for fine-tuning.

**Resources and technical documentation:**

* [PaliGemma 2 on Kaggle](https://www.kaggle.com/models/google/paligemma-2)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)

**Terms of Use:** [Terms](https://ai.google.dev/gemma/terms)

**Authors:** Google

## Model information

### Model summary

PaliGemma 2 is an update of the [PaliGemma](https://arxiv.org/abs/2407.07726) vision-language model (VLM) which incorporates the capabilities of the [Gemma 2](https://arxiv.org/abs/2408.00118) models. The PaliGemma family of models is inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP](https://arxiv.org/abs/2303.15343) vision model and [Gemma 2](https://arxiv.org/abs/2408.00118) language models. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.

#### Model architecture

PaliGemma 2 is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929). The text decoder is initialized from [Gemma 2](https://ai.google.dev/gemma/docs/base) in the 2B, 9B, and 27B parameter sizes. The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). Similar to the original PaliGemma model, PaliGemma 2 is trained following the [PaLI-3](https://arxiv.org/abs/2310.09199) recipes.
#### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. #### Citation ```none @article{ title={PaliGemma 2: A Family of Versatile VLMs for Transfer}, author={Andreas Steiner and André Susano Pinto and Michael Tschannen and Daniel Keysers and Xiao Wang and Yonatan Bitton and Alexey Gritsenko and Matthias Minderer and Anthony Sherbondy and Shangbang Long and Siyang Qin and Reeve Ingle and Emanuele Bugliarello and Sahar Kazemzadeh and Thomas Mesnard and Ibrahim Alabdulmohsin and Lucas Beyer and Xiaohua Zhai}, year={2024}, journal={arXiv preprint arXiv:2412.03555} } ``` ### Model data #### Pre-train datasets PaliGemma 2 is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, and multilinguality. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. 
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)).

[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html

PaliGemma 2 is based on Gemma 2, and you can find information on the pre-training datasets for Gemma 2 in the [Gemma 2 model card](https://ai.google.dev/gemma/docs/model_card_2).

#### Data responsibility filtering

The following filters are applied to WebLI, with the goal of training PaliGemma 2 on safe and responsible data:

* **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about child sexual abuse imagery (CSAI), pornography, vulgarities, or is otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using the [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in line with our policies and practices.

[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference

## Use in Transformers

The following snippet uses model `google/paligemma2-28b-pt-224` for reference purposes.
It is a base model, and it is recommended to fine-tune it on a downstream task before use. Here is a [notebook](https://github.com/merveenoyan/smol-vision/blob/main/Fine_tune_PaliGemma.ipynb) that showcases fine-tuning PaliGemma 2.

```python
from transformers import (
    PaliGemmaProcessor,
    PaliGemmaForConditionalGeneration,
)
from transformers.image_utils import load_image
import torch

model_id = "google/paligemma2-28b-pt-224"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = load_image(url)

model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

# Leaving the prompt blank for pre-trained models
prompt = ""
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```

## Implementation information

### Hardware

PaliGemma 2 was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e).

### Software

Training was completed using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision).

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma 2 fine-tune code and inference code are released in the `big_vision` GitHub repository.
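To illustrate the pattern such JAX/Flax training stacks rely on — a pure update function over a parameter pytree, compiled once with `jit` — here is a minimal, hypothetical sketch (not code from `big_vision`):

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch: a pure SGD step over a parameter pytree, jit-compiled
# so XLA can fuse it and run it efficiently on accelerators such as TPUs.
@jax.jit
def sgd_step(params, grads, lr=0.1):
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.ones((2,)), "b": jnp.zeros(())}
grads = {"w": jnp.array([0.5, -0.5]), "b": jnp.array(1.0)}
new_params = sgd_step(params, grads)
```

The same compiled step can then be mapped across devices (e.g. with `jax.pmap` or sharding), which is what makes this style of training loop scale to TPU pods.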
## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma 2 to a wide variety of academic tasks, we fine-tune the pretrained models on each task. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### PaliGemma 2 results by model resolution and size | Benchmark | 224-3B | 224-10B | 224-28B | 448-3B | 448-10B | 448-28B | |-------------------------------|:------:|:-------:|:-------:|:------:|:-------:|:-------:| | [AI2D][ai2d] | 74.7 | 83.1 | 83.2 | 76.0 | 84.4 | 84.6 | | [AOKVQA-DA][aokvqa-da] (val) | 64.2 | 68.9 | 70.2 | 67.9 | 70.8 | 71.2 | | [AOKVQA-MC][aokvqa-mc] (val) | 79.7 | 83.7 | 84.7 | 82.5 | 85.9 | 87.0 | | [ActivityNet-CAP][anet-cap] | 34.2 | 35.9 | - | - | - | - | | [ActivityNet-QA][anet-qa] | 51.3 | 53.2 | - | - | - | - | | [COCO-35L][coco-35l] (avg34) | 113.9 | 115.8 | 116.5 | 115.8 | 117.2 | 117.2 | | [COCO-35L][coco-35l] (en) | 138.4 | 140.8 | 142.4 | 140.4 | 142.4 | 142.3 | | [COCOcap][coco-cap] | 141.3 | 143.7 | 144.0 | 143.4 | 145.0 | 145.2 | | [ChartQA][chartqa] (aug) | 74.4 | 74.2 | 68.9 | 89.2 | 90.1 | 85.1 | | [ChartQA][chartqa] (human) | 42.0 | 48.4 | 46.8 | 54.0 | 66.4 | 61.3 | | [CountBenchQA][countbenchqa] | 81.0 | 84.0 | 86.4 | 82.0 | 85.3 | 87.4 | | [DocVQA][docvqa] (val) | 39.9 | 43.9 | 44.9 | 73.6 | 76.6 | 76.1 | | [GQA][gqa] | 66.2 | 67.2 | 67.3 | 68.1 | 68.3 | 68.3 | | [InfoVQA][info-vqa] (val) | 25.2 | 33.6 | 36.4 | 37.5 | 47.8 | 46.7 | | [MARVL][marvl] (avg5) | 83.5 | 89.5 | 90.6 | 82.7 | 89.1 | 89.7 | | [MSRVTT-CAP][msrvtt] | 68.5 | 72.1 | - | - | - | - | | [MSRVTT-QA][msrvtt] | 50.5 | 51.9 | - | - | - | - | | [MSVD-QA][msvd-qa] | 61.1 | 62.5 | - | - | - | - | | [NLVR2][nlvr2] | 91.4 | 93.9 | 94.2 | 91.6 | 93.7 | 94.1 | | [NoCaps][nocaps] | 123.1 | 126.3 | 
127.1 | 123.5 | 126.9 | 127.0 | | [OCR-VQA][ocr-vqa] | 73.4 | 74.7 | 75.3 | 75.7 | 76.3 | 76.6 | | [OKVQA][okvqa] | 64.2 | 68.0 | 71.2 | 64.1 | 68.6 | 70.6 | | [RSVQA-hr][rsvqa-hr] (test) | 92.7 | 92.6 | 92.7 | 92.8 | 92.8 | 92.8 | | [RSVQA-hr][rsvqa-hr] (test2) | 90.9 | 90.8 | 90.9 | 90.7 | 90.7 | 90.8 | | [RSVQA-lr][rsvqa-lr] | 93.0 | 92.8 | 93.5 | 92.7 | 93.1 | 93.7 | | [RefCOCO][refcoco] (testA) | 75.7 | 77.2 | 76.8 | 78.6 | 79.7 | 79.3 | | [RefCOCO][refcoco] (testB) | 71.0 | 74.2 | 73.9 | 73.5 | 76.2 | 74.8 | | [RefCOCO][refcoco] (val) | 73.4 | 75.9 | 75.0 | 76.3 | 78.2 | 77.3 | | [RefCOCO+][refcoco+] (testA) | 72.7 | 74.7 | 73.6 | 76.1 | 77.7 | 76.6 | | [RefCOCO+][refcoco+] (testB) | 64.2 | 68.4 | 67.1 | 67.0 | 71.1 | 68.6 | | [RefCOCO+][refcoco+] (val) | 68.6 | 72.0 | 70.3 | 72.1 | 74.4 | 72.8 | | [RefCOCOg][refcocog] (test) | 69.0 | 71.9 | 70.7 | 72.7 | 74.8 | 73.7 | | [RefCOCOg][refcocog] (val) | 68.3 | 71.4 | 70.5 | 72.3 | 74.4 | 73.0 | | [ST-VQA][st-vqa] (val) | 61.9 | 64.3 | 65.1 | 80.5 | 82.0 | 81.8 | | [SciCap][scicap] | 165.1 | 159.5 | 156.9 | 183.3 | 177.2 | 172.7 | | [ScienceQA][scienceqa] | 96.1 | 98.2 | 98.2 | 96.2 | 98.5 | 98.6 | | [Screen2Words][screen2words] | 113.3 | 117.8 | 122.8 | 114.0 | 119.1 | 123.4 | | [TallyQA][tallyqa] (complex) | 70.3 | 73.4 | 74.2 | 73.6 | 76.7 | 76.8 | | [TallyQA][tallyqa] (simple) | 81.8 | 83.2 | 83.4 | 85.3 | 86.2 | 85.7 | | [TextCaps][textcaps] | 127.5 | 137.9 | 139.9 | 152.1 | 157.7 | 153.6 | | [TextVQA][textvqa] (val) | 59.6 | 64.0 | 64.7 | 75.2 | 76.6 | 76.2 | | [VATEX][vatex] | 80.8 | 82.7 | - | - | - | - | | [VQAv2][vqav2] (minival) | 83.0 | 84.3 | 84.5 | 84.8 | 85.8 | 85.8 | | [VizWizVQA][vizwiz-vqa] (val) | 76.4 | 78.1 | 78.7 | 77.5 | 78.6 | 78.9 | | [WidgetCap][widgetcap] | 138.1 | 139.8 | 138.8 | 151.4 | 151.9 | 148.9 | | [XM3600][xm3600] (avg35) | 42.8 | 44.5 | 45.2 | 43.2 | 44.6 | 45.2 | | [XM3600][xm3600] (en) | 79.8 | 80.7 | 81.0 | 80.3 | 81.5 | 81.0 | | [xGQA][xgqa] (avg7) | 58.6 | 61.4 | 61.1 | 
60.4 | 62.6 | 62.1 |

#### Additional Benchmarks

**[ICDAR 2015 Incidental][icdar2015-inc]**

| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 81.88 | 70.73 | 75.9 |

**[Total-Text][total-text]**

| Model | Precision | Recall | F1 |
|-----------------|-----------|:------:|:-----:|
| PaliGemma 2 3B | 73.8 | 74.54 | 74.17 |

**[FinTabNet][fintabnet]**

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 99.18 | 98.94 | 99.43 | 99.21 |

**[PubTabNet][pubtabnet]**

| Model | S-TEDS | TEDS | GriTS-Top | GriTS-Con |
|-----------------|--------|-------|-----------|-----------|
| PaliGemma 2 3B | 97.6 | 97.31 | 97.99 | 97.84 |

**[GrandStaff][grandstaff]**

| Model | CER | LER | SER |
|-----------------|-----|-----|-----|
| PaliGemma 2 3B | 1.6 | 6.7 | 2.3 |

**[PubChem][pubchem]**

* PaliGemma 2 3B, Full Match: 94.8

**[DOCCI][docci]**

| Model | avg#char | avg#sent | NES % |
|-----------------|----------|----------|---------|
| PaliGemma 2 3B | 529 | 7.74 | 28.42 |
| PaliGemma 2 10B | 521 | 7.45 | 20.27 |

- *avg#char*: Average number of characters
- *avg#sent*: Average number of sentences
- *NES*: Non-entailment sentences

**[MIMIC-CXR][mimic-cxr]**

| Model | CIDEr | BLEU4 | Rouge-L | RadGraph F1 |
|-----------------|-------|-------|---------|-------------|
| PaliGemma 2 3B | 19.9% | 14.6% | 31.92% | 28.8% |
| PaliGemma 2 10B | 17.4% | 15% | 32.41% | 29.5% |

**[Visual Spatial Reasoning][vsr]**

| Model | VSR zeroshot split (test) | VSR random split (test) |
|-----------------|---------------------------|--------------------------|
| PaliGemma 2 3B | 0.75 | 0.82 |
| PaliGemma 2 10B | 0.80 | 0.87 |

## Ethics and safety

### Evaluation approach

Our evaluation methods include structured ethics and safety evaluations across relevant content policies, including:

* Human evaluation on prompts covering child safety, content safety and representational harms.
See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. 
<table> <tr> <col> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <colgroup span="3"></colgroup> <th>Metric</th> <th colspan="3" scope="colgroup">Perceived gender</th> <th colspan="3" scope="colgroup">Ethnicity</th> <th colspan="3" scope="colgroup">Age group</th> </tr> <tr> <th>Model size</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> <th scope="col">3B</th> <th scope="col">10B</th> <th scope="col">28B</th> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Maximum</th> </tr> <tr> <td>Toxicity</td> <td>0.14%</td> <td>0.15%</td> <td>0.19%</td> <td>0.29%</td> <td>0.39%</td> <td>0.39%</td> <td>0.26%</td> <td>0.18%</td> <td>0.32%</td> </tr> <tr> <td>Identity Attack</td> <td>0.04%</td> <td>0.02%</td> <td>0.02%</td> <td>0.13%</td> <td>0.06%</td> <td>0.06%</td> <td>0.06%</td> <td>0.03%</td> <td>0.06%</td> </tr> <tr> <td>Insult</td> <td>0.17%</td> <td>0.25%</td> <td>0.17%</td> <td>0.37%</td> <td>0.52%</td> <td>0.52%</td> <td>0.27%</td> <td>0.39%</td> <td>0.24%</td> </tr> <tr> <td>Threat</td> <td>0.55%</td> <td>0.43%</td> <td>0.57%</td> <td>0.83%</td> <td>0.48%</td> <td>0.48%</td> <td>0.64%</td> <td>0.43%</td> <td>0.64%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <th></th> <th colspan="9" scope="colgroup">Median</th> </tr> <tr> <td>Toxicity</td> <td>0.13%</td> <td>0.10%</td> <td>0.18%</td> <td>0.07%</td> <td>0.07%</td> <td>0.14%</td> <td>0.12%</td> <td>0.08%</td> <td>0.12%</td> </tr> <tr> <td>Identity Attack</td> <td>0.02%</td> <td>0.01%</td> <td>0.02%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.15%</td> <td>0.23%</td> <td>0.14%</td> <td>0.14%</td> <td>0.17%</td> <td>0.13%</td> <td>0.09%</td> <td>0.18%</td> <td>0.16%</td> </tr> <tr> 
<td>Threat</td> <td>0.35%</td> <td>0.27%</td> <td>0.41%</td> <td>0.28%</td> <td>0.19%</td> <td>0.42%</td> <td>0.27%</td> <td>0.31%</td> <td>0.40%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. 
These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided: see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. 
### Limitations * Most limitations inherited from the underlying Gemma 2 models still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma 2 was designed first and foremost to serve as a general pre-trained model for fine-tuning to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for general purpose use. * PaliGemma 2 is not a multi-turn chatbot. It is designed for a single round of image and text input. 
[ai2d]: https://allenai.org/data/diagrams [aokvqa-da]: https://allenai.org/project/a-okvqa/home [aokvqa-mc]: https://allenai.org/project/a-okvqa/home [anet-cap]: https://paperswithcode.com/dataset/activitynet-captions [anet-qa]: https://arxiv.org/abs/1906.02467 [chartqa]: https://arxiv.org/abs/2203.10244 [coco-35l]: https://arxiv.org/pdf/2205.12522 [coco-cap]: https://cocodataset.org/#home [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ [docvqa]: https://www.docvqa.org/ [gqa]: https://cs.stanford.edu/people/dorarad/gqa/about.html [info-vqa]: https://arxiv.org/abs/2104.12756 [marvl]: https://marvl-challenge.github.io/ [msrvtt]: https://paperswithcode.com/dataset/msr-vtt [msvd-qa]: https://paperswithcode.com/dataset/msvd-qa [nlvr2]: https://lil.nlp.cornell.edu/nlvr/ [nocaps]: https://nocaps.org/ [ocr-vqa]: https://ocr-vqa.github.io/ [okvqa]: https://okvqa.allenai.org/ [refcoco]: https://arxiv.org/abs/1608.00272 [refcoco+]: https://aclanthology.org/D14-1086 [refcocog]: https://arxiv.org/abs/1511.02283 [rsvqa-hr]: https://zenodo.org/records/6344367 [rsvqa-lr]: https://zenodo.org/records/6344334 [st-vqa]: https://arxiv.org/abs/1905.13648 [scicap]: https://arxiv.org/abs/2110.11624 [scienceqa]: https://scienceqa.github.io/ [screen2words]: https://arxiv.org/abs/2108.03353 [tallyqa]: https://arxiv.org/abs/1810.12440 [textcaps]: https://textvqa.org/textcaps/ [textvqa]: https://textvqa.org/ [vatex]: https://arxiv.org/abs/1904.03493 [vizwiz-vqa]: https://vizwiz.org/tasks-and-datasets/vqa/ [widgetcap]: https://arxiv.org/abs/2010.04295 [vqav2]: https://visualqa.org/index.html [xgqa]: https://aclanthology.org/2022.findings-acl.196/ [xm3600]: https://arxiv.org/pdf/2205.12522 [icdar2015-inc]: https://arxiv.org/abs/1511.09207 [total-text]: https://paperswithcode.com/paper/total-text-a-comprehensive-dataset-for-scene [fintabnet]: https://developer.ibm.com/data/fintabnet/ [pubtabnet]: 
https://paperswithcode.com/dataset/pubtabnet [grandstaff]: https://link.springer.com/article/10.1007/s10032-023-00432-z [pubchem]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352161/ [docci]: https://research.google/pubs/docci-descriptions-of-connected-and-contrasting-images/ [mimic-cxr]: https://paperswithcode.com/dataset/mimic-cxr [vsr]: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00566/116470/Visual-Spatial-Reasoning
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "image-text-to-text", "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
46,190
monsterapi/gemma-2-2b-hindi-translator
monsterapi
translation
[ "peft", "safetensors", "translation", "code", "instruct", "gemma", "dataset:cfilt/iitb-english-hindi", "base_model:google/gemma-2-2b-it", "base_model:adapter:google/gemma-2-2b-it", "license:apache-2.0", "region:us" ]
2024-08-06T11:00:47Z
2024-08-07T09:28:04+00:00
28
1
--- base_model: google/gemma-2-2b-it datasets: - cfilt/iitb-english-hindi library_name: peft license: apache-2.0 tags: - translation - code - instruct - gemma --- ### Finetuning Overview: **Model Used:** google/gemma-2-2b-it **Dataset:** cfilt/iitb-english-hindi #### Dataset Insights: The IIT Bombay English-Hindi corpus contains a parallel corpus for English-Hindi as well as a monolingual Hindi corpus collected from various sources. This corpus has been utilized in the Workshop on Asian Language Translation Shared Task since 2016 for Hindi-to-English and English-to-Hindi language pairs and as a pivot language pair for Hindi-to-Japanese and Japanese-to-Hindi translations. #### Finetuning Details: With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning: - Was achieved cost-effectively. - Completed in a total duration of 1 hour and 33 minutes for 0.1 epochs. - Cost `$1.91` for the entire process. #### Hyperparameters & Additional Details: - **Epochs:** 0.1 - **Total Finetuning Cost:** $1.91 - **Model Path:** google/gemma-2-2b-it - **Learning Rate:** 0.001 - **Data Split:** 100% Train - **Gradient Accumulation Steps:** 16 ##### Prompt Template ``` <bos><start_of_turn>user {PROMPT}<end_of_turn> <start_of_turn>model {OUTPUT} <end_of_turn> <eos> ``` Training loss: ![training loss](train-loss.png "Training loss") --- license: apache-2.0
null
Non_BioNLP
### Finetuning Overview: **Model Used:** google/gemma-2-2b-it **Dataset:** cfilt/iitb-english-hindi #### Dataset Insights: The IIT Bombay English-Hindi corpus contains a parallel corpus for English-Hindi as well as a monolingual Hindi corpus collected from various sources. This corpus has been utilized in the Workshop on Asian Language Translation Shared Task since 2016 for Hindi-to-English and English-to-Hindi language pairs and as a pivot language pair for Hindi-to-Japanese and Japanese-to-Hindi translations. #### Finetuning Details: With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning: - Was achieved cost-effectively. - Completed in a total duration of 1 hour and 33 minutes for 0.1 epochs. - Cost `$1.91` for the entire process. #### Hyperparameters & Additional Details: - **Epochs:** 0.1 - **Total Finetuning Cost:** $1.91 - **Model Path:** google/gemma-2-2b-it - **Learning Rate:** 0.001 - **Data Split:** 100% Train - **Gradient Accumulation Steps:** 16 ##### Prompt Template ``` <bos><start_of_turn>user {PROMPT}<end_of_turn> <start_of_turn>model {OUTPUT} <end_of_turn> <eos> ``` Training loss: ![training loss](train-loss.png "Training loss") --- license: apache-2.0
{"base_model": "google/gemma-2-2b-it", "datasets": ["cfilt/iitb-english-hindi"], "library_name": "peft", "license": "apache-2.0", "tags": ["translation", "code", "instruct", "gemma"]}
task
[ "TRANSLATION" ]
46,191
Hanish09/seq2seq-en-es
Hanish09
null
[ "en", "es", "arxiv:1706.03762", "license:mit", "region:us" ]
2025-01-26T00:08:12Z
2025-01-26T00:18:42+00:00
0
3
--- language: - en - es license: mit --- # Neural Machine Translation with Attention 🚀 A PyTorch implementation of a Sequence-to-Sequence model with Attention for English-Spanish translation. ![Python](https://img.shields.io/badge/Python-3.8%2B-blue) ![PyTorch](https://img.shields.io/badge/PyTorch-2.0%2B-orange) ![License](https://img.shields.io/badge/license-MIT-green) ## 🌟 Features - **Bidirectional GRU Encoder**: Captures context from both directions of the input sequence - **Attention Mechanism**: Helps the model focus on relevant parts of the input sequence - **Teacher Forcing**: Implements curriculum learning for better training stability - **Dynamic Batching**: Efficient training with variable sequence lengths - **Hugging Face Integration**: Uses MarianTokenizer for robust text processing ## 🏗️ Architecture The model consists of three main components: 1. **Encoder**: Bidirectional GRU network that processes input sequences 2. **Attention**: Computes attention weights for each encoder state 3. **Decoder**: GRU network that generates translations using attention context ```plaintext Input → Encoder → Attention → Decoder → Translation ↑ ↑ ↑ Embeddings Context Attention Weights ``` ## 🚀 Quick Start 1. Clone the repository: ```bash git clone https://github.com/yourusername/nmt-attention.git cd nmt-attention ``` 2. Install dependencies: ```bash pip install torch transformers datasets ``` 3. Train the model: ```bash python train.py ``` 4. Translate text: ```python from translate import translate text = "How are you?"
translated = translate(model, text, tokenizer) print(translated) # Loading a saved model model = Seq2Seq(encoder, decoder, device) model.load_state_dict(torch.load('LSTM_text_generator.pth')) model.eval() ``` ## 📊 Model Performance Training metrics after 10 epochs: - Initial Loss: 11.147 - Final Loss: 3.527 - Training Time: ~2 hours on NVIDIA V100 ## 🔧 Hyperparameters ```python BATCH_SIZE = 32 LEARNING_RATE = 1e-3 CLIP = 1.0 N_EPOCHS = 10 ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 ENC_HID_DIM = 512 DEC_HID_DIM = 512 ``` ## 📚 Dataset Using the `loresiensis/corpus-en-es` dataset from Hugging Face Hub, which provides English-Spanish sentence pairs for training. ## 🤝 Contributing 1. Fork the repository 2. Create your feature branch (`git checkout -b feature/amazing-feature`) 3. Commit your changes (`git commit -m 'Add amazing feature'`) 4. Push to the branch (`git push origin feature/amazing-feature`) 5. Open a Pull Request ## 📝 License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## 🙏 Acknowledgments - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) paper - Hugging Face for the transformers library and datasets - PyTorch team for the amazing deep learning framework --- ⭐️ If you found this project helpful, please consider giving it a star!
null
Non_BioNLP
# Neural Machine Translation with Attention 🚀 A PyTorch implementation of a Sequence-to-Sequence model with Attention for English-Spanish translation. ![Python](https://img.shields.io/badge/Python-3.8%2B-blue) ![PyTorch](https://img.shields.io/badge/PyTorch-2.0%2B-orange) ![License](https://img.shields.io/badge/license-MIT-green) ## 🌟 Features - **Bidirectional GRU Encoder**: Captures context from both directions of the input sequence - **Attention Mechanism**: Helps the model focus on relevant parts of the input sequence - **Teacher Forcing**: Implements curriculum learning for better training stability - **Dynamic Batching**: Efficient training with variable sequence lengths - **Hugging Face Integration**: Uses MarianTokenizer for robust text processing ## 🏗️ Architecture The model consists of three main components: 1. **Encoder**: Bidirectional GRU network that processes input sequences 2. **Attention**: Computes attention weights for each encoder state 3. **Decoder**: GRU network that generates translations using attention context ```plaintext Input → Encoder → Attention → Decoder → Translation ↑ ↑ ↑ Embeddings Context Attention Weights ``` ## 🚀 Quick Start 1. Clone the repository: ```bash git clone https://github.com/yourusername/nmt-attention.git cd nmt-attention ``` 2. Install dependencies: ```bash pip install torch transformers datasets ``` 3. Train the model: ```bash python train.py ``` 4. Translate text: ```python from translate import translate text = "How are you?"
translated = translate(model, text, tokenizer) print(translated) # Loading a saved model model = Seq2Seq(encoder, decoder, device) model.load_state_dict(torch.load('LSTM_text_generator.pth')) model.eval() ``` ## 📊 Model Performance Training metrics after 10 epochs: - Initial Loss: 11.147 - Final Loss: 3.527 - Training Time: ~2 hours on NVIDIA V100 ## 🔧 Hyperparameters ```python BATCH_SIZE = 32 LEARNING_RATE = 1e-3 CLIP = 1.0 N_EPOCHS = 10 ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 ENC_HID_DIM = 512 DEC_HID_DIM = 512 ``` ## 📚 Dataset Using the `loresiensis/corpus-en-es` dataset from Hugging Face Hub, which provides English-Spanish sentence pairs for training. ## 🤝 Contributing 1. Fork the repository 2. Create your feature branch (`git checkout -b feature/amazing-feature`) 3. Commit your changes (`git commit -m 'Add amazing feature'`) 4. Push to the branch (`git push origin feature/amazing-feature`) 5. Open a Pull Request ## 📝 License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## 🙏 Acknowledgments - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) paper - Hugging Face for the transformers library and datasets - PyTorch team for the amazing deep learning framework --- ⭐️ If you found this project helpful, please consider giving it a star!
{"language": ["en", "es"], "license": "mit"}
task
[ "TRANSLATION" ]
46,192
mini1013/master_cate_bc4
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
2025-01-23T21:00:42Z
2025-01-23T21:01:04+00:00
613
0
--- base_model: mini1013/master_domain library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 머리감는의자 샴푸베드 샴푸대 가정용 목욕침대 세안기 어린이 아기 접의식 블루 출산/육아 > 목욕용품 > 기타목욕용품 - text: 여성 목욕 유아 샤워 웨딩 플라워 타월 타올 드레스 어린이 파티 가운 플레이 솔리드 잠옷 11=CM11_8-9T 130-140cm 출산/육아 > 목욕용품 > 유아목욕가운 - text: 아동용 레이어드나시반팔티 J4385 나시티11호 트임나시13호 출산/육아 > 목욕용품 > 유아목욕가운 - text: 욕실 타일 바닥 미끄럼방지 스티커 12P 세트 출산/육아 > 목욕용품 > 기타목욕용품 - text: 가정용 테이블 디지털 온습도 전자기계 욕조온도계 측정기 출산/육아 > 목욕용품 > 유아욕탕온도계 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 1.0 name: Accuracy --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
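The contrastive fine-tuning in step 1 relies on pair sampling: texts that share a label become positive pairs, and texts with different labels become negative pairs. The following is a minimal, illustrative stdlib sketch of that sampling idea — it is not the actual `setfit` implementation, and the example titles are shortened stand-ins for this dataset's product titles:

```python
import itertools
import random

def build_contrastive_pairs(examples, num_neg_per_pos=1, seed=42):
    """Build (text_a, text_b, target) triples for contrastive fine-tuning.

    Same-label texts form positive pairs (target 1.0); cross-label texts
    form negative pairs (target 0.0). Illustrative sketch only, not the
    setfit library's sampling code.
    """
    rng = random.Random(seed)
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append(text)
    # Positive pairs: every same-label combination.
    pairs = []
    for texts in by_label.values():
        for a, b in itertools.combinations(texts, 2):
            pairs.append((a, b, 1.0))
    # Negative pairs: random cross-label samples.
    labels = list(by_label)
    for _ in range(num_neg_per_pos * len(pairs)):
        la, lb = rng.sample(labels, 2)
        pairs.append((rng.choice(by_label[la]), rng.choice(by_label[lb]), 0.0))
    return pairs

# Shortened stand-ins for this dataset's product titles (labels 9.0 and 7.0).
examples = [("아기 욕조", 9.0), ("유아 욕조 세트", 9.0),
            ("샴푸캡 블루", 7.0), ("아기 샴푸모자", 7.0)]
pairs = build_contrastive_pairs(examples)
```

The cosine-similarity loss listed in the hyperparameters below is then computed on the Sentence Transformer embeddings of each pair, and in step 2 the fine-tuned body's embeddings feed the logistic-regression head.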
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 11 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'유키두 물놀이 고래 잠수함 샤워기 출산/육아 > 목욕용품 > 유아목욕장난감'</li><li>'베베라팡 헤엄치는 바다친구들 아기 욕조 물놀이 목욕놀이장난감 거북이 블루 출산/육아 > 목욕용품 > 유아목욕장난감'</li><li>'KC인증 목욕놀이 장난감 (고래,거북이,오리,상어) 유아 아기 물놀이 욕조 태엽 장난감 단품구성_큰고래(핑크) 출산/육아 > 목욕용품 > 유아목욕장난감'</li></ul> | | 7.0 | <ul><li>'릴린져샴푸캡 아기 목욕모자 샤워캡 블루 출산/육아 > 목욕용품 > 유아샴푸캡'</li><li>'돗투돗 말랑 샴푸캡 유아 목욕용품 해어캡 신생아 샴푸모자 돗바니 출산/육아 > 목욕용품 > 유아샴푸캡'</li><li>'[귀애우비] 육아꿀템 아기목욕 귀 방수 스티커 출산선물 수영장 편한착용 안아픔 샴푸캡 소형(세트) - 2Box (100매) 출산/육아 > 목욕용품 > 유아샴푸캡'</li></ul> | | 1.0 | <ul><li>'고리에거는 아이장난감정리망 그물주머니 흰색 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li><li>'욕실그물망 정리 함 가방 수납 장난감 화이트 DC002879 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li><li>'리빙 세탁바구니 1P 29x41x36cm 옷빨래 다용도수납 화이트 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li></ul> | | 9.0 | <ul><li>'아이너바움 비건인증 홈 케어 5종 (세탁+섬유+주방+토이+욕조) 세탁세제(코튼블랑)_섬유유연제(스윗선데이)_주방세제 (시트러스/액상형) 출산/육아 > 목욕용품 > 유아욕조'</li><li>'스토케 
플렉시바스 아기욕조 라지 화이트 출산/육아 > 목욕용품 > 유아욕조'</li><li>'오케이베이비 오플라 아기욕조 히포 샴푸캡 세트 동의합니다_모카그레이_민트그린 출산/육아 > 목욕용품 > 유아욕조'</li></ul> | | 6.0 | <ul><li>'어린이집수건 먼지없는 소창 고리수건 3겹(30X30) 5장세트 유치원 이름자수 핸드타올 소창 고리수건 3겹(30X30)-수건아래-5장_다크블루_그린 출산/육아 > 목욕용품 > 유아목욕타월'</li><li>'베이비꼬 아기목욕타월 옐로우 출산/육아 > 목욕용품 > 유아목욕타월'</li><li>'송월 베이비 어린이집 고리 수건 선물 이름자수 준비물 손수건 베이비시리즈_퍼플(열기구) 출산/육아 > 목욕용품 > 유아목욕타월'</li></ul> | | 4.0 | <ul><li>'샤워 타올 스틱 손잡이 볼 욕실용품 볼브러쉬 등 바디 블루 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li><li>'비비또 실리콘 아기목욕장갑 스펀지 신생아 유아 샤워볼 아기목욕장갑 2P(10% 할인)_헤븐리핑크_발틱블루 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li><li>'뉴 샤워기 거치대 각도조절 샤워기홀더 편리한 욕실용품 생활용품 샤워기걸이 당일출고 데일리 01.180캐릭터샤워홀더-버드 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li></ul> | | 8.0 | <ul><li>'유아 어린이 손씻기 세면대 수도꼭지 연장 물받이 회색 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li><li>'모던 수도꼭지 연장탭 베이지 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li><li>'버드시아 유아 세면대 (장난감놀이/신생아욕조/목욕놀이) 연그레이 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li></ul> | | 2.0 | <ul><li>'유아 비치가운 어린이 목욕가운 아기 타올 베이비 오리 출산/육아 > 목욕용품 > 유아목욕가운'</li><li>'목욕 잠옷 가운 샤워 키구 동물 어린이 유니콘 플란넬 겨울 루미 유아 드레싱 타월 후드 11=Pinkstarunicorn_5 출산/육아 > 목욕용품 > 유아목욕가운'</li><li>'스프링스트라이프조끼 오렌지_120 출산/육아 > 목욕용품 > 유아목욕가운'</li></ul> | | 3.0 | <ul><li>'부품 : 아기비데 - 3in1 받침대 업그레이드 신생아 유아 출산선물 국내생산 색상추가 컴포트 3세대 받침대-엘레강스 아이보리 - 그린 출산/육아 > 목욕용품 > 유아목욕의자'</li><li>'목욕의자 미용실 머리 샴푸 대형 실내 교육 업소 의자 D 출산/육아 > 목욕용품 > 유아목욕의자'</li><li>'웰빙 은나노 목욕 의자 대 핑크 출산/육아 > 목욕용품 > 유아목욕의자'</li></ul> | | 0.0 | <ul><li>'나무 욕조 히노끼 이동식 홈 스타 사우나 목욕 삼나무 장90 x58 78+커버(친환경 왁스) 출산/육아 > 목욕용품 > 기타목욕용품'</li><li>'두근두근 점프대 목욕놀이 세트 3pcs 출산/육아 > 목욕용품 > 기타목욕용품'</li><li>'창신 베어트리 욕실 용품 양치컵 욕실의자 칫솔꽂이 4.창신 베어트리 욕실의자(대) 출산/육아 > 목욕용품 > 기타목욕용품'</li></ul> | | 10.0 | <ul><li>'밀폐형 전해 셀, 3 전극 시스템 반응기 매칭 165900 50ml 165900 50ml 출산/육아 > 목욕용품 > 유아욕탕온도계'</li><li>'직수입 디지털 온도계 습도계 탕온계 모음 스마일(심플) 온습도계 핑크 02.스마일(패턴)온습도계_화이트 출산/육아 > 목욕용품 > 유아욕탕온도계'</li><li>'(건전지 포함)디지털 대화면 온습도계 아날로그 욕실 육추기 병아리 오리 탕온도 병아리 계란 아날로그온습도계 출산/육아 > 목욕용품 > 유아욕탕온도계'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use 
for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_bc4") # Run inference preds = model("욕실 타일 바닥 미끄럼방지 스티커 12P 세트 출산/육아 > 목욕용품 > 기타목욕용품") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 14.2403 | 27 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 70 | | 1.0 | 70 | | 2.0 | 70 | | 3.0 | 70 | | 4.0 | 70 | | 5.0 | 70 | | 6.0 | 70 | | 7.0 | 70 | | 8.0 | 70 | | 9.0 | 70 | | 10.0 | 70 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 50 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0066 | 1 | 0.4891 | - | | 0.3311 | 50 | 0.5008 | - | | 0.6623 | 100 | 0.4057 | - | | 0.9934 | 150 | 0.3132 | - | | 1.3245 | 200 | 0.176 | - | | 1.6556 | 250 | 0.0868 | - | | 
1.9868 | 300 | 0.0349 | - | | 2.3179 | 350 | 0.0133 | - | | 2.6490 | 400 | 0.0018 | - | | 2.9801 | 450 | 0.0006 | - | | 3.3113 | 500 | 0.0004 | - | | 3.6424 | 550 | 0.0005 | - | | 3.9735 | 600 | 0.0003 | - | | 4.3046 | 650 | 0.0002 | - | | 4.6358 | 700 | 0.0002 | - | | 4.9669 | 750 | 0.0002 | - | | 5.2980 | 800 | 0.0001 | - | | 5.6291 | 850 | 0.0001 | - | | 5.9603 | 900 | 0.0001 | - | | 6.2914 | 950 | 0.0001 | - | | 6.6225 | 1000 | 0.0001 | - | | 6.9536 | 1050 | 0.0001 | - | | 7.2848 | 1100 | 0.0001 | - | | 7.6159 | 1150 | 0.0001 | - | | 7.9470 | 1200 | 0.0001 | - | | 8.2781 | 1250 | 0.0001 | - | | 8.6093 | 1300 | 0.0001 | - | | 8.9404 | 1350 | 0.0001 | - | | 9.2715 | 1400 | 0.0001 | - | | 9.6026 | 1450 | 0.0 | - | | 9.9338 | 1500 | 0.0001 | - | | 10.2649 | 1550 | 0.0 | - | | 10.5960 | 1600 | 0.0 | - | | 10.9272 | 1650 | 0.0 | - | | 11.2583 | 1700 | 0.0 | - | | 11.5894 | 1750 | 0.0 | - | | 11.9205 | 1800 | 0.0 | - | | 12.2517 | 1850 | 0.0 | - | | 12.5828 | 1900 | 0.0 | - | | 12.9139 | 1950 | 0.0 | - | | 13.2450 | 2000 | 0.0 | - | | 13.5762 | 2050 | 0.0 | - | | 13.9073 | 2100 | 0.0 | - | | 14.2384 | 2150 | 0.0 | - | | 14.5695 | 2200 | 0.0 | - | | 14.9007 | 2250 | 0.0 | - | | 15.2318 | 2300 | 0.0 | - | | 15.5629 | 2350 | 0.0 | - | | 15.8940 | 2400 | 0.0 | - | | 16.2252 | 2450 | 0.0 | - | | 16.5563 | 2500 | 0.0 | - | | 16.8874 | 2550 | 0.0 | - | | 17.2185 | 2600 | 0.0 | - | | 17.5497 | 2650 | 0.0 | - | | 17.8808 | 2700 | 0.0 | - | | 18.2119 | 2750 | 0.0 | - | | 18.5430 | 2800 | 0.0 | - | | 18.8742 | 2850 | 0.0 | - | | 19.2053 | 2900 | 0.0 | - | | 19.5364 | 2950 | 0.0 | - | | 19.8675 | 3000 | 0.0 | - | | 20.1987 | 3050 | 0.0 | - | | 20.5298 | 3100 | 0.0 | - | | 20.8609 | 3150 | 0.0 | - | | 21.1921 | 3200 | 0.0 | - | | 21.5232 | 3250 | 0.0 | - | | 21.8543 | 3300 | 0.0 | - | | 22.1854 | 3350 | 0.0 | - | | 22.5166 | 3400 | 0.0 | - | | 22.8477 | 3450 | 0.0 | - | | 23.1788 | 3500 | 0.0 | - | | 23.5099 | 3550 | 0.0 | - | | 23.8411 | 3600 | 0.0 | - | | 24.1722 | 3650 | 0.0 | 
- | | 24.5033 | 3700 | 0.0 | - | | 24.8344 | 3750 | 0.0 | - | | 25.1656 | 3800 | 0.0 | - | | 25.4967 | 3850 | 0.0 | - | | 25.8278 | 3900 | 0.0 | - | | 26.1589 | 3950 | 0.0 | - | | 26.4901 | 4000 | 0.0 | - | | 26.8212 | 4050 | 0.0 | - | | 27.1523 | 4100 | 0.0 | - | | 27.4834 | 4150 | 0.0 | - | | 27.8146 | 4200 | 0.0 | - | | 28.1457 | 4250 | 0.0 | - | | 28.4768 | 4300 | 0.0 | - | | 28.8079 | 4350 | 0.0 | - | | 29.1391 | 4400 | 0.0 | - | | 29.4702 | 4450 | 0.0 | - | | 29.8013 | 4500 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 11 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5.0 | <ul><li>'유키두 물놀이 고래 잠수함 샤워기 출산/육아 > 목욕용품 > 유아목욕장난감'</li><li>'베베라팡 헤엄치는 바다친구들 아기 욕조 물놀이 목욕놀이장난감 거북이 블루 출산/육아 > 
목욕용품 > 유아목욕장난감'</li><li>'KC인증 목욕놀이 장난감 (고래,거북이,오리,상어) 유아 아기 물놀이 욕조 태엽 장난감 단품구성_큰고래(핑크) 출산/육아 > 목욕용품 > 유아목욕장난감'</li></ul> | | 7.0 | <ul><li>'릴린져샴푸캡 아기 목욕모자 샤워캡 블루 출산/육아 > 목욕용품 > 유아샴푸캡'</li><li>'돗투돗 말랑 샴푸캡 유아 목욕용품 해어캡 신생아 샴푸모자 돗바니 출산/육아 > 목욕용품 > 유아샴푸캡'</li><li>'[귀애우비] 육아꿀템 아기목욕 귀 방수 스티커 출산선물 수영장 편한착용 안아픔 샴푸캡 소형(세트) - 2Box (100매) 출산/육아 > 목욕용품 > 유아샴푸캡'</li></ul> | | 1.0 | <ul><li>'고리에거는 아이장난감정리망 그물주머니 흰색 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li><li>'욕실그물망 정리 함 가방 수납 장난감 화이트 DC002879 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li><li>'리빙 세탁바구니 1P 29x41x36cm 옷빨래 다용도수납 화이트 출산/육아 > 목욕용품 > 욕실정리망/정리함'</li></ul> | | 9.0 | <ul><li>'아이너바움 비건인증 홈 케어 5종 (세탁+섬유+주방+토이+욕조) 세탁세제(코튼블랑)_섬유유연제(스윗선데이)_주방세제 (시트러스/액상형) 출산/육아 > 목욕용품 > 유아욕조'</li><li>'스토케 플렉시바스 아기욕조 라지 화이트 출산/육아 > 목욕용품 > 유아욕조'</li><li>'오케이베이비 오플라 아기욕조 히포 샴푸캡 세트 동의합니다_모카그레이_민트그린 출산/육아 > 목욕용품 > 유아욕조'</li></ul> | | 6.0 | <ul><li>'어린이집수건 먼지없는 소창 고리수건 3겹(30X30) 5장세트 유치원 이름자수 핸드타올 소창 고리수건 3겹(30X30)-수건아래-5장_다크블루_그린 출산/육아 > 목욕용품 > 유아목욕타월'</li><li>'베이비꼬 아기목욕타월 옐로우 출산/육아 > 목욕용품 > 유아목욕타월'</li><li>'송월 베이비 어린이집 고리 수건 선물 이름자수 준비물 손수건 베이비시리즈_퍼플(열기구) 출산/육아 > 목욕용품 > 유아목욕타월'</li></ul> | | 4.0 | <ul><li>'샤워 타올 스틱 손잡이 볼 욕실용품 볼브러쉬 등 바디 블루 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li><li>'비비또 실리콘 아기목욕장갑 스펀지 신생아 유아 샤워볼 아기목욕장갑 2P(10% 할인)_헤븐리핑크_발틱블루 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li><li>'뉴 샤워기 거치대 각도조절 샤워기홀더 편리한 욕실용품 생활용품 샤워기걸이 당일출고 데일리 01.180캐릭터샤워홀더-버드 출산/육아 > 목욕용품 > 유아목욕장갑/스펀지'</li></ul> | | 8.0 | <ul><li>'유아 어린이 손씻기 세면대 수도꼭지 연장 물받이 회색 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li><li>'모던 수도꼭지 연장탭 베이지 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li><li>'버드시아 유아 세면대 (장난감놀이/신생아욕조/목욕놀이) 연그레이 출산/육아 > 목욕용품 > 유아세면대/수도꼭지'</li></ul> | | 2.0 | <ul><li>'유아 비치가운 어린이 목욕가운 아기 타올 베이비 오리 출산/육아 > 목욕용품 > 유아목욕가운'</li><li>'목욕 잠옷 가운 샤워 키구 동물 어린이 유니콘 플란넬 겨울 루미 유아 드레싱 타월 후드 11=Pinkstarunicorn_5 출산/육아 > 목욕용품 > 유아목욕가운'</li><li>'스프링스트라이프조끼 오렌지_120 출산/육아 > 목욕용품 > 유아목욕가운'</li></ul> | | 3.0 | <ul><li>'부품 : 아기비데 - 3in1 받침대 업그레이드 신생아 유아 출산선물 국내생산 색상추가 컴포트 3세대 받침대-엘레강스 아이보리 - 그린 출산/육아 > 목욕용품 > 유아목욕의자'</li><li>'목욕의자 미용실 머리 샴푸 대형 실내 교육 업소 의자 D 출산/육아 > 목욕용품 
> 유아목욕의자'</li><li>'웰빙 은나노 목욕 의자 대 핑크 출산/육아 > 목욕용품 > 유아목욕의자'</li></ul> | | 0.0 | <ul><li>'나무 욕조 히노끼 이동식 홈 스타 사우나 목욕 삼나무 장90 x58 78+커버(친환경 왁스) 출산/육아 > 목욕용품 > 기타목욕용품'</li><li>'두근두근 점프대 목욕놀이 세트 3pcs 출산/육아 > 목욕용품 > 기타목욕용품'</li><li>'창신 베어트리 욕실 용품 양치컵 욕실의자 칫솔꽂이 4.창신 베어트리 욕실의자(대) 출산/육아 > 목욕용품 > 기타목욕용품'</li></ul> | | 10.0 | <ul><li>'밀폐형 전해 셀, 3 전극 시스템 반응기 매칭 165900 50ml 165900 50ml 출산/육아 > 목욕용품 > 유아욕탕온도계'</li><li>'직수입 디지털 온도계 습도계 탕온계 모음 스마일(심플) 온습도계 핑크 02.스마일(패턴)온습도계_화이트 출산/육아 > 목욕용품 > 유아욕탕온도계'</li><li>'(건전지 포함)디지털 대화면 온습도계 아날로그 욕실 육추기 병아리 오리 탕온도 병아리 계란 아날로그온습도계 출산/육아 > 목욕용품 > 유아욕탕온도계'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_bc4") # Run inference preds = model("욕실 타일 바닥 미끄럼방지 스티커 12P 세트 출산/육아 > 목욕용품 > 기타목욕용품") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 14.2403 | 27 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 70 | | 1.0 | 70 | | 2.0 | 70 | | 3.0 | 70 | | 4.0 | 70 | | 5.0 | 70 | | 6.0 | 70 | | 7.0 | 70 | | 8.0 | 70 | | 9.0 | 70 | | 10.0 | 70 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (30, 30) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 50 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0066 | 1 | 0.4891 | - | | 0.3311 | 50 | 0.5008 | - | | 0.6623 | 100 | 0.4057 | - | | 0.9934 | 150 | 0.3132 | - | | 1.3245 | 200 | 0.176 | - | | 1.6556 | 250 | 0.0868 | - | | 1.9868 | 300 | 0.0349 | - | | 2.3179 | 350 | 0.0133 | - | | 2.6490 | 400 | 0.0018 | - | | 2.9801 | 450 | 0.0006 | - | | 3.3113 | 500 | 0.0004 | - | | 3.6424 | 550 | 0.0005 | - | | 3.9735 | 600 | 0.0003 | - | | 4.3046 | 650 | 0.0002 | - | | 4.6358 | 700 | 0.0002 | - | | 4.9669 | 750 | 0.0002 | - | | 5.2980 | 800 | 0.0001 | - | | 5.6291 | 850 | 0.0001 | - | | 5.9603 | 900 | 0.0001 | - | | 6.2914 | 950 | 0.0001 | - | | 6.6225 | 1000 | 0.0001 | - | | 6.9536 | 1050 | 0.0001 | - | | 7.2848 | 1100 | 0.0001 | - | | 7.6159 | 1150 | 0.0001 | - | | 7.9470 | 1200 | 0.0001 | - | | 8.2781 | 1250 | 0.0001 | - | | 8.6093 | 1300 | 0.0001 | - | | 8.9404 | 1350 | 0.0001 | - | | 9.2715 | 1400 | 0.0001 | - | | 9.6026 | 1450 | 0.0 | - | | 9.9338 | 1500 | 0.0001 | - | | 10.2649 | 1550 | 0.0 | - | | 10.5960 | 1600 | 0.0 | - | | 10.9272 | 1650 | 0.0 | - | | 
11.2583 | 1700 | 0.0 | - | | 11.5894 | 1750 | 0.0 | - | | 11.9205 | 1800 | 0.0 | - | | 12.2517 | 1850 | 0.0 | - | | 12.5828 | 1900 | 0.0 | - | | 12.9139 | 1950 | 0.0 | - | | 13.2450 | 2000 | 0.0 | - | | 13.5762 | 2050 | 0.0 | - | | 13.9073 | 2100 | 0.0 | - | | 14.2384 | 2150 | 0.0 | - | | 14.5695 | 2200 | 0.0 | - | | 14.9007 | 2250 | 0.0 | - | | 15.2318 | 2300 | 0.0 | - | | 15.5629 | 2350 | 0.0 | - | | 15.8940 | 2400 | 0.0 | - | | 16.2252 | 2450 | 0.0 | - | | 16.5563 | 2500 | 0.0 | - | | 16.8874 | 2550 | 0.0 | - | | 17.2185 | 2600 | 0.0 | - | | 17.5497 | 2650 | 0.0 | - | | 17.8808 | 2700 | 0.0 | - | | 18.2119 | 2750 | 0.0 | - | | 18.5430 | 2800 | 0.0 | - | | 18.8742 | 2850 | 0.0 | - | | 19.2053 | 2900 | 0.0 | - | | 19.5364 | 2950 | 0.0 | - | | 19.8675 | 3000 | 0.0 | - | | 20.1987 | 3050 | 0.0 | - | | 20.5298 | 3100 | 0.0 | - | | 20.8609 | 3150 | 0.0 | - | | 21.1921 | 3200 | 0.0 | - | | 21.5232 | 3250 | 0.0 | - | | 21.8543 | 3300 | 0.0 | - | | 22.1854 | 3350 | 0.0 | - | | 22.5166 | 3400 | 0.0 | - | | 22.8477 | 3450 | 0.0 | - | | 23.1788 | 3500 | 0.0 | - | | 23.5099 | 3550 | 0.0 | - | | 23.8411 | 3600 | 0.0 | - | | 24.1722 | 3650 | 0.0 | - | | 24.5033 | 3700 | 0.0 | - | | 24.8344 | 3750 | 0.0 | - | | 25.1656 | 3800 | 0.0 | - | | 25.4967 | 3850 | 0.0 | - | | 25.8278 | 3900 | 0.0 | - | | 26.1589 | 3950 | 0.0 | - | | 26.4901 | 4000 | 0.0 | - | | 26.8212 | 4050 | 0.0 | - | | 27.1523 | 4100 | 0.0 | - | | 27.4834 | 4150 | 0.0 | - | | 27.8146 | 4200 | 0.0 | - | | 28.1457 | 4250 | 0.0 | - | | 28.4768 | 4300 | 0.0 | - | | 28.8079 | 4350 | 0.0 | - | | 29.1391 | 4400 | 0.0 | - | | 29.4702 | 4450 | 0.0 | - | | 29.8013 | 4500 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = 
{Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
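The contrastive fine-tuning stage described above turns the 70-shot labeled examples into similarity-supervised text pairs. A minimal sketch of how such pairs can be sampled (illustrative only: SetFit's own sampler with `sampling_strategy: oversampling` is more involved):

```python
import random

def contrastive_pairs(texts, labels, num_pairs, seed=42):
    """Sample (text_a, text_b, target) pairs: target 1.0 for same-label pairs,
    0.0 for cross-label pairs, i.e. the kind of supervision a contrastive
    objective such as CosineSimilarityLoss consumes. Requires at least two
    labels and one label with two or more texts."""
    rng = random.Random(seed)
    by_label = {}
    for text, label in zip(texts, labels):
        by_label.setdefault(label, []).append(text)
    label_list = list(by_label)
    pairs = []
    for _ in range(num_pairs):
        if rng.random() < 0.5:
            # positive pair: two distinct texts sharing a label
            label = rng.choice([l for l in label_list if len(by_label[l]) > 1])
            a, b = rng.sample(by_label[label], 2)
            pairs.append((a, b, 1.0))
        else:
            # negative pair: texts drawn from two different labels
            la, lb = rng.sample(label_list, 2)
            pairs.append((rng.choice(by_label[la]), rng.choice(by_label[lb]), 0.0))
    return pairs
```

The sampled pairs would then be fed to the Sentence Transformer body, after which the logistic-regression head is fit on the resulting embeddings.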
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "머리감는의자 샴푸베드 샴푸대 가정용 목욕침대 세안기 어린이 아기 접의식 블루 출산/육아 > 목욕용품 > 기타목욕용품"}, {"text": "여성 목욕 유아 샤워 웨딩 플라워 타월 타올 드레스 어린이 파티 가운 플레이 솔리드 잠옷 11=CM11_8-9T 130-140cm 출산/육아 > 목욕용품 > 유아목욕가운"}, {"text": "아동용 레이어드나시반팔티 J4385 나시티11호 트임나시13호 출산/육아 > 목욕용품 > 유아목욕가운"}, {"text": "욕실 타일 바닥 미끄럼방지 스티커 12P 세트 출산/육아 > 목욕용품 > 기타목욕용품"}, {"text": "가정용 테이블 디지털 온습도 전자기계 욕조온도계 측정기 출산/육아 > 목욕용품 > 유아욕탕온도계"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,193
Daga2001/bert-base-cased-finetuned-conll2002
Daga2001
token-classification
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2002", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-05T22:33:04Z
2024-06-06T23:17:22+00:00
22
0
--- base_model: bert-base-cased datasets: - conll2002 license: apache-2.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-conll2002 results: - task: type: token-classification name: Token Classification dataset: name: conll2002 type: conll2002 config: es split: validation args: es metrics: - type: precision value: 0.8175829168559745 name: Precision - type: recall value: 0.8269761029411765 name: Recall - type: f1 value: 0.8222526844870915 name: F1 - type: accuracy value: 0.9739999622092474 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-conll2002 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset. It achieves the following results on the evaluation set: - Loss: 0.1670 - Precision: 0.8176 - Recall: 0.8270 - F1: 0.8223 - Accuracy: 0.9740 ## Model description The model described here is a fine-tuned version of the BERT (Bidirectional Encoder Representations from Transformers) base cased model for Named Entity Recognition (NER) tasks, trained on the CoNLL-2002 dataset. BERT is a pre-trained language model based on the transformer architecture, designed to understand and process text by considering the context of each word from both directions (left-to-right and right-to-left). By fine-tuning the BERT base cased model on the CoNLL-2002 dataset, this model has been adapted to recognize and classify named entities such as persons, organizations, locations, and other miscellaneous entities within Spanish text. The fine-tuning process involves adjusting the pre-trained model weights to better fit the specific task of NER, thereby improving its performance and accuracy on Spanish text. 
## Intended uses & limitations More information needed ## Training and evaluation data The training was performed using a GPU with 22.5 GB of RAM, 53 GB of system RAM, and 200 GB of disk space. This setup ensured efficient handling of the large dataset and the computational demands of fine-tuning the model. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0248 | 1.0 | 1041 | 0.1439 | 0.8066 | 0.8155 | 0.8110 | 0.9732 | | 0.0141 | 2.0 | 2082 | 0.1569 | 0.8108 | 0.8182 | 0.8145 | 0.9728 | | 0.0109 | 3.0 | 3123 | 0.1670 | 0.8176 | 0.8270 | 0.8223 | 0.9740 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-conll2002 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset. It achieves the following results on the evaluation set: - Loss: 0.1670 - Precision: 0.8176 - Recall: 0.8270 - F1: 0.8223 - Accuracy: 0.9740 ## Model description The model described here is a fine-tuned version of the BERT (Bidirectional Encoder Representations from Transformers) base cased model for Named Entity Recognition (NER) tasks, trained on the CoNLL-2002 dataset. BERT is a pre-trained language model based on the transformer architecture, designed to understand and process text by considering the context of each word from both directions (left-to-right and right-to-left). By fine-tuning the BERT base cased model on the CoNLL-2002 dataset, this model has been adapted to recognize and classify named entities such as persons, organizations, locations, and other miscellaneous entities within Spanish text. The fine-tuning process involves adjusting the pre-trained model weights to better fit the specific task of NER, thereby improving its performance and accuracy on Spanish text. ## Intended uses & limitations More information needed ## Training and evaluation data The training was performed using a GPU with 22.5 GB of RAM, 53 GB of system RAM, and 200 GB of disk space. This setup ensured efficient handling of the large dataset and the computational demands of fine-tuning the model. 
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0248 | 1.0 | 1041 | 0.1439 | 0.8066 | 0.8155 | 0.8110 | 0.9732 | | 0.0141 | 2.0 | 2082 | 0.1569 | 0.8108 | 0.8182 | 0.8145 | 0.9728 | | 0.0109 | 3.0 | 3123 | 0.1670 | 0.8176 | 0.8270 | 0.8223 | 0.9740 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
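The precision, recall and F1 values in the tables above are entity-level CoNLL-style scores rather than per-token accuracy: a predicted entity counts only if its type and full span both match a gold entity. A compact sketch of that scoring scheme (an illustration of the idea, not the exact seqeval implementation the trainer uses):

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence.
    Orphan I- tags (no preceding B- of the same type) are dropped, as in
    strict IOB2 evaluation."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and etype != tag[2:]):
            if etype is not None:
                entities.append((etype, start, i))
            etype, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
    return entities

def entity_f1(gold_tags, pred_tags):
    """Entity-level precision, recall and F1 over exact (type, span) matches."""
    gold = set(extract_entities(gold_tags))
    pred = set(extract_entities(pred_tags))
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

This strict matching is why entity-level F1 (0.8223 here) sits well below token accuracy (0.9740): a single boundary error invalidates the whole entity.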
{"base_model": "bert-base-cased", "datasets": ["conll2002"], "license": "apache-2.0", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-finetuned-conll2002", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2002", "type": "conll2002", "config": "es", "split": "validation", "args": "es"}, "metrics": [{"type": "precision", "value": 0.8175829168559745, "name": "Precision"}, {"type": "recall", "value": 0.8269761029411765, "name": "Recall"}, {"type": "f1", "value": 0.8222526844870915, "name": "F1"}, {"type": "accuracy", "value": 0.9739999622092474, "name": "Accuracy"}]}]}]}
task
[ "NAMED_ENTITY_RECOGNITION" ]
46,194
Nextcloud-AI/opus-mt-tr-fr
Nextcloud-AI
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-23T10:47:46Z
2023-08-16T12:07:27+00:00
10
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-tr-fr * source languages: tr * target languages: fr * OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.fr | 45.3 | 0.627 |
null
Non_BioNLP
### opus-mt-tr-fr * source languages: tr * target languages: fr * OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.fr | 45.3 | 0.627 |
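The BLEU figure in the benchmark table comes from the standard OPUS-MT evaluation scripts (corpus-level, tokenizer-aware). As a rough illustration of the quantity being measured, here is a simplified, smoothed sentence-level BLEU sketch (not the scorer used above):

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty.
    A teaching sketch only: real evaluations aggregate over the whole corpus."""
    if not candidate:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0)
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

The chr-F column in the table is the analogous F-score computed over character n-grams instead of word n-grams.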
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,195
TheBloke/dragon-mistral-7B-v0-AWQ
TheBloke
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "base_model:llmware/dragon-mistral-7b-v0", "base_model:quantized:llmware/dragon-mistral-7b-v0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
2023-12-20T22:14:24Z
2023-12-20T22:31:45+00:00
65
1
--- base_model: llmware/dragon-mistral-7b-v0 license: apache-2.0 model_name: Dragon Mistral 7B V0 inference: false model_creator: llmware model_type: mistral prompt_template: '<human>: {prompt} <bot>: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Dragon Mistral 7B V0 - AWQ - Model creator: [llmware](https://huggingface.co/llmware) - Original model: [Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0) <!-- description start --> ## Description This repo contains AWQ model files for [llmware's Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. 
Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GGUF) * [llmware's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llmware/dragon-mistral-7b-v0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: human-bot ``` <human>: {prompt} <bot>: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. 
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/dragon-mistral-7B-v0-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `dragon-mistral-7B-v0-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. 
For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/dragon-mistral-7B-v0-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''<human>: {prompt} <bot>: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/dragon-mistral-7B-v0-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/dragon-mistral-7B-v0-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<human>: {prompt} <bot>: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/dragon-mistral-7B-v0-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''<human>: {prompt} <bot>: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: llmware's Dragon Mistral 7B V0 # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. 
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. 
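The point rubric used in the benchmark above (1 point for a correct answer, 0.5 for partial/blank "not found", 0 for incorrect, -1 for a hallucination) can be sketched as a small scoring helper. This is an illustrative sketch only — llmware's actual test harness is not published in this card, and the example runs below are made up:

```python
def score_run(labels):
    """Score one benchmark run under the rubric above:
    1.0 correct, 0.5 partial or not-found, 0.0 incorrect, -1.0 hallucination."""
    points = {"correct": 1.0, "partial": 0.5, "not_found": 0.5,
              "incorrect": 0.0, "hallucination": -1.0}
    return sum(points[label] for label in labels)

# Two runs over 100 questions are averaged to produce the headline score.
runs = [["correct"] * 97 + ["partial"] * 2 + ["incorrect"],
        ["correct"] * 96 + ["partial"] * 3 + ["incorrect"]]
average = sum(score_run(r) for r in runs) / len(runs)
print(average)  # 97.75 for this made-up pair of runs
```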
## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for a RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The model was also fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. 
Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
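The prompt packaging described above can be wrapped in a small helper. The template string comes straight from the card; the function name and example inputs are my own illustration:

```python
def build_dragon_prompt(text_passage: str, question: str) -> str:
    """Package a closed-context query in the <human>/<bot> wrapper
    that the dRAGon models were fine-tuned on."""
    my_prompt = text_passage + "\n" + question
    return "<human>: " + my_prompt + "\n" + "<bot>:"

prompt = build_dragon_prompt("The invoice total is $1,250.",
                             "What is the invoice total?")
print(prompt)
```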
null
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Dragon Mistral 7B V0 - AWQ - Model creator: [llmware](https://huggingface.co/llmware) - Original model: [Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0) <!-- description start --> ## Description This repo contains AWQ model files for [llmware's Dragon Mistral 7B V0](https://huggingface.co/llmware/dragon-mistral-7b-v0). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. 
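As a rough, toy illustration of what low-bit weight quantization does — this is plain symmetric round-to-nearest with a per-group scale, not the actual AWQ algorithm, which additionally applies activation-aware scaling:

```python
def quantize_group(weights, bits=4):
    """Toy symmetric round-to-nearest quantization of one weight group."""
    qmax = 2 ** (bits - 1) - 1                     # 7 levels each side for 4-bit
    scale = max(abs(w) for w in weights) / qmax    # one scale per group
    quantized = [round(w / scale) for w in weights]
    dequantized = [q * scale for q in quantized]   # approximation used at inference
    return quantized, scale, dequantized

q, scale, deq = quantize_group([0.70, -0.35, 0.10, 0.02])
print(q, scale)
```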
macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-GGUF) * [llmware's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llmware/dragon-mistral-7b-v0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: human-bot ``` <human>: {prompt} <bot>: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. 
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/dragon-mistral-7B-v0-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/dragon-mistral-7B-v0-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `dragon-mistral-7B-v0-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. 
For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/dragon-mistral-7B-v0-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template='''<human>: {prompt} <bot>: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/dragon-mistral-7B-v0-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/dragon-mistral-7B-v0-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<human>: {prompt} <bot>: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print("Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . 
``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/dragon-mistral-7B-v0-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''<human>: {prompt} <bot>: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: llmware's Dragon Mistral 7B V0 # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model. 
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **96.50** correct out of 100 --Not Found Classification: 92.50% --Boolean: 97.50% --Math/Logic: 81.25% --Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal) --Summarization Quality (1-5): 4 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Mistral-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model:** Mistral-7B-Base ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. 
## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with dRAGon is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0") model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0") Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for a RAG workflow consisting of business documents. The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:" The model was also fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. 
Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) ## Model Card Contact Darren Oberst & llmware team
{"base_model": "llmware/dragon-mistral-7b-v0", "license": "apache-2.0", "model_name": "Dragon Mistral 7B V0", "inference": false, "model_creator": "llmware", "model_type": "mistral", "prompt_template": "<human>: {prompt}\n<bot>:\n", "quantized_by": "TheBloke"}
task
[ "SUMMARIZATION" ]
46,196
Helsinki-NLP/opus-mt-fr-ru
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "fr", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:37:08+00:00
770
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-fr-ru * source languages: fr * target languages: ru * OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.ru | 37.9 | 0.585 |
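chr-F, reported in the benchmark table above, is a character n-gram F-score. A minimal illustration of the idea follows — this is not the sacreBLEU chrF implementation, which averages over n = 1…6 and weights recall more heavily:

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of character n-grams in a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf_pr(hypothesis, reference, n=3):
    """Character n-gram precision and recall for one sentence pair."""
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    overlap = sum((hyp & ref).values())          # clipped n-gram matches
    precision = overlap / max(sum(hyp.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return precision, recall

p, r = chrf_pr("bonjour le monde", "bonjour tout le monde")
print(p, r)
```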
null
Non_BioNLP
### opus-mt-fr-ru * source languages: fr * target languages: ru * OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.ru | 37.9 | 0.585 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
46,197
valepert/distilbert-base-uncased-finetuned-emotion
valepert
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-08T16:35:41Z
2023-06-08T16:57:37+00:00
13
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9295 name: Accuracy - type: f1 value: 0.9296237017952964 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2166 - Accuracy: 0.9295 - F1: 0.9296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8311 | 1.0 | 250 | 0.3120 | 0.9105 | 0.9092 | | 0.2537 | 2.0 | 500 | 0.2166 | 0.9295 | 0.9296 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
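The step counts in the results table are consistent with a 16,000-example train split at the stated batch size of 64 (the split size is my assumption about the emotion dataset, not stated in the card):

```python
train_examples = 16_000  # assumed size of the emotion train split
batch_size = 64          # from the hyperparameters above
epochs = 2

steps_per_epoch = train_examples // batch_size
print(steps_per_epoch)           # 250, matching the table's epoch-1 step count
print(steps_per_epoch * epochs)  # 500, matching the final step count
```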
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2166 - Accuracy: 0.9295 - F1: 0.9296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8311 | 1.0 | 250 | 0.3120 | 0.9105 | 0.9092 | | 0.2537 | 2.0 | 500 | 0.2166 | 0.9295 | 0.9296 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9295, "name": "Accuracy"}, {"type": "f1", "value": 0.9296237017952964, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,198
Falconsai/question_answering_v2
Falconsai
question-answering
[ "transformers", "safetensors", "distilbert", "question-answering", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-11-14T01:39:20Z
2025-02-28T02:18:52+00:00
1,368
8
--- language: - en library_name: transformers license: apache-2.0 widget: - text: Who did not understand the question? context: William sat there quietly, listening to the interviewer. Upon hearing the ridiculous question he leaned forward to answer. Well, I really don't understand your question, he said. It sounded to me more like a statement. example_title: Name - text: In what year did the University of Georgia's research expenditures total over $452 million? context: The University of Georgia ranked 25th among public U.S. universities in research expenditures in a recent National Science Foundation survey. In fiscal year 2018, UGA research totaled over $452 million, advancing knowledge in diverse fields from genetics to education. example_title: Research funding - text: When was the second satellite for the BeiDou-2 system launched? context: In April 2007, the first satellite of BeiDou-2, namely Compass-M1 (to validate frequencies for the BeiDou-2 constellation) was successfully put into its working orbit. The second BeiDou-2 constellation satellite Compass-G2 was launched on 15 April 2009. On 15 January 2010, the official website of the BeiDou Navigation Satellite System went online, and the system's third satellite (Compass-G1) was carried into its orbit by a Long March 3C rocket on 17 January 2010. On 2 June 2010, the fourth satellite was launched successfully into orbit. The fifth orbiter was launched into space from Xichang Satellite Launch Center by an LM-3I carrier rocket on 1 August 2010. Three months later, on 1 November 2010, the sixth satellite was sent into orbit by LM-3C. Another satellite, the Beidou-2/Compass IGSO-5 (fifth inclined geosynchronous orbit) satellite, was launched from the Xichang Satellite Launch Center by a Long March-3A on 1 December 2011 (UTC). 
example_title: BeiDou_Navigation_Satellite_System --- # Fine-tuned DistilBERT-base-uncased for Question and Answering V2 ## Model Description ## Overview The fine-tuned model presented here is an enhanced iteration of the DistilBERT-base-uncased model, meticulously trained on an updated dataset. Leveraging the underlying architecture of DistilBERT, a compact variant of BERT optimized for efficiency, this model is tailor-made for natural language processing tasks with a primary focus on question answering. Its training involved exposure to a diverse and contemporary dataset, ensuring its adaptability to a wide range of linguistic nuances and semantic intricacies. The fine-tuning process refines the model's understanding of context, allowing it to excel in tasks that require nuanced comprehension and contextual reasoning, making it a robust solution for question and answering applications in natural language processing. ## Intended Use This fine-tuned DistilBERT-base-uncased model is designed for versatile natural language processing applications. Its adaptability makes it well-suited for a broad range of tasks, including but not limited to text classification, sentiment analysis, and named entity recognition. Users are strongly advised to conduct a comprehensive performance assessment tailored to their specific tasks and datasets to ascertain its suitability for their particular use case. The model's efficacy and robustness can vary across different applications, and evaluating its performance on targeted tasks is crucial for optimal results. In this specific instance, the model underwent training with a focus on enhancing its performance in question and answering tasks. The training process was optimized to improve the model's understanding of contextual information and its ability to generate accurate and relevant responses in question-answering scenarios. 
Users seeking to leverage the model for similar applications are encouraged to evaluate its performance in the context of question-answering benchmarks to ensure alignment with their intended use case. ## Training Data The model was fine-tuned on an updated dataset collected from diverse sources to enhance its performance on a broad range of natural language understanding tasks. ## Model Architecture The underlying architecture of the model is rooted in DistilBERT-base-uncased, a variant designed to be both smaller and computationally more efficient than its precursor, BERT. This architecture optimization enables the model to retain a substantial portion of BERT's performance capabilities while demanding significantly fewer computational resources. DistilBERT achieves this efficiency through a process of knowledge distillation, wherein the model is trained to mimic the behavior and knowledge of the larger BERT model, resulting in a streamlined yet effective representation of language understanding. This reduction in complexity makes the model particularly well-suited for scenarios where computational resources are constrained, without compromising on the quality of natural language processing tasks. Moreover, the choice of DistilBERT as the base architecture aligns with the broader trend in developing models that strike a balance between performance and resource efficiency. Researchers and practitioners aiming for state-of-the-art results in natural language processing applications increasingly consider such distilled architectures due to their pragmatic benefits in deployment, inference speed, and overall versatility across various computational environments. ### How to Use To use this model for question answering, you can follow these steps: ```python from transformers import pipeline question = "What would happen to the carmine pigment if not used diligently?" 
context = "The painters of the early Renaissance used two traditional lake pigments, made from mixing dye with either chalk or alum, kermes lake, made from kermes insects, and madder lake, made from the rubia tinctorum plant. With the arrival of cochineal, they had a third, carmine, which made a very fine crimson, though it had a tendency to change color if not used carefully. It was used by almost all the great painters of the 15th and 16th centuries, including Rembrandt, Vermeer, Rubens, Anthony van Dyck, Diego Vel\u00e1zquez and Tintoretto. Later it was used by Thomas Gainsborough, Seurat and J.M.W. Turner." question_answerer = pipeline("question-answering", model="Falconsai/question_answering_v2") question_answerer(question=question, context=context) ``` ```python import torch from transformers import AutoTokenizer from transformers import AutoModelForQuestionAnswering question = "On which date did Swansea City play its first Premier League game?" context = "In 2011, a Welsh club participated in the Premier League for the first time after Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City's home match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012\u201313, Swansea qualified for the Europa League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first time in 2013\u201314, as Cardiff City gained promotion, but Cardiff City was relegated after its maiden season." 
tokenizer = AutoTokenizer.from_pretrained("Falconsai/question_answering_v2") inputs = tokenizer(question, context, return_tensors="pt") model = AutoModelForQuestionAnswering.from_pretrained("Falconsai/question_answering_v2") with torch.no_grad(): outputs = model(**inputs) answer_start_index = outputs.start_logits.argmax() answer_end_index = outputs.end_logits.argmax() predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) ``` ## Ethical Considerations Care has been taken to minimize biases in the training data. However, biases may still be present, and users are encouraged to evaluate the model's predictions for potential bias and fairness concerns, especially when applied to different demographic groups. ## Limitations While this model performs well on standard benchmarks, it may not generalize optimally to all datasets or tasks. Users are advised to conduct thorough evaluation and testing in their specific use case. ## Contact Information For inquiries or issues related to this model, please contact [https://falcons.ai/]. ---
null
Non_BioNLP
# Fine-tuned DistilBERT-base-uncased for Question and Answering V2 ## Model Description ## Overview The fine-tuned model presented here is an enhanced iteration of the DistilBERT-base-uncased model, meticulously trained on an updated dataset. Leveraging the underlying architecture of DistilBERT, a compact variant of BERT optimized for efficiency, this model is tailor-made for natural language processing tasks with a primary focus on question answering. Its training involved exposure to a diverse and contemporary dataset, ensuring its adaptability to a wide range of linguistic nuances and semantic intricacies. The fine-tuning process refines the model's understanding of context, allowing it to excel in tasks that require nuanced comprehension and contextual reasoning, making it a robust solution for question and answering applications in natural language processing. ## Intended Use This fine-tuned DistilBERT-base-uncased model is designed for versatile natural language processing applications. Its adaptability makes it well-suited for a broad range of tasks, including but not limited to text classification, sentiment analysis, and named entity recognition. Users are strongly advised to conduct a comprehensive performance assessment tailored to their specific tasks and datasets to ascertain its suitability for their particular use case. The model's efficacy and robustness can vary across different applications, and evaluating its performance on targeted tasks is crucial for optimal results. In this specific instance, the model underwent training with a focus on enhancing its performance in question and answering tasks. The training process was optimized to improve the model's understanding of contextual information and its ability to generate accurate and relevant responses in question-answering scenarios. 
Users seeking to leverage the model for similar applications are encouraged to evaluate its performance in the context of question and answering benchmarks to ensure alignment with their intended use case. ## Training Data The model was fine-tuned on an updated dataset collected from diverse sources to enhance its performance on a broad range of natural language understanding tasks. ## Model Architecture The underlying architecture of the model is rooted in DistilBERT-base-uncased, a variant designed to be both smaller and computationally more efficient than its precursor, BERT. This architecture optimization enables the model to retain a substantial portion of BERT's performance capabilities while demanding significantly fewer computational resources. DistilBERT achieves this efficiency through a process of knowledge distillation, wherein the model is trained to mimic the behavior and knowledge of the larger BERT model, resulting in a streamlined yet effective representation of language understanding. This reduction in complexity makes the model particularly well-suited for scenarios where computational resources are constrained, without compromising on the quality of natural language processing tasks. Moreover, the choice of DistilBERT as the base architecture aligns with the broader trend in developing models that strike a balance between performance and resource efficiency. Researchers and practitioners aiming for state-of-the-art results in natural language processing applications increasingly consider such distilled architectures due to their pragmatic benefits in deployment, inference speed, and overall versatility across various computational environments. ### How to Use To use this model for question answering, you can follow these steps: ```python from transformers import pipeline question = "What would happen to the carmine pigment if not used diligently?" 
context = "The painters of the early Renaissance used two traditional lake pigments, made from mixing dye with either chalk or alum, kermes lake, made from kermes insects, and madder lake, made from the rubia tinctorum plant. With the arrival of cochineal, they had a third, carmine, which made a very fine crimson, though it had a tendency to change color if not used carefully. It was used by almost all the great painters of the 15th and 16th centuries, including Rembrandt, Vermeer, Rubens, Anthony van Dyck, Diego Vel\u00e1zquez and Tintoretto. Later it was used by Thomas Gainsborough, Seurat and J.M.W. Turner." question_answerer = pipeline("question-answering", model="Falconsai/question_answering_v2") question_answerer(question=question, context=context) ``` ```python import torch from transformers import AutoTokenizer from transformers import AutoModelForQuestionAnswering question = "On which date did Swansea City play its first Premier League game?" context = "In 2011, a Welsh club participated in the Premier League for the first time after Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City's home match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012\u201313, Swansea qualified for the Europa League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first time in 2013\u201314, as Cardiff City gained promotion, but Cardiff City was relegated after its maiden season." 
tokenizer = AutoTokenizer.from_pretrained("Falconsai/question_answering_v2") inputs = tokenizer(question, context, return_tensors="pt") model = AutoModelForQuestionAnswering.from_pretrained("Falconsai/question_answering_v2") with torch.no_grad(): outputs = model(**inputs) answer_start_index = outputs.start_logits.argmax() answer_end_index = outputs.end_logits.argmax() predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) ``` ## Ethical Considerations Care has been taken to minimize biases in the training data. However, biases may still be present, and users are encouraged to evaluate the model's predictions for potential bias and fairness concerns, especially when applied to different demographic groups. ## Limitations While this model performs well on standard benchmarks, it may not generalize optimally to all datasets or tasks. Users are advised to conduct thorough evaluation and testing in their specific use case. ## Contact Information For inquiries or issues related to this model, please contact [https://falcons.ai/]. ---
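The second usage example above extracts the answer by taking the argmax of the start and end logits and slicing the input tokens between those positions. That decoding step can be illustrated in isolation; the logits and tokens below are invented for the sketch, not real model output:

```python
# Toy start/end logits over a 6-token input (invented values for illustration).
start_logits = [0.1, 0.2, 3.1, 0.4, 0.3, 0.2]
end_logits = [0.0, 0.1, 0.2, 0.3, 2.9, 0.1]

# Same argmax-based decoding as in the transformers example above.
start_index = max(range(len(start_logits)), key=start_logits.__getitem__)
end_index = max(range(len(end_logits)), key=end_logits.__getitem__)

# Slice the tokens between the predicted start and end (inclusive).
tokens = ["swansea", "played", "on", "20", "august", "2011"]
answer = " ".join(tokens[start_index : end_index + 1])
print(answer)  # on 20 august
```

Note that this greedy decoding does not enforce `start_index <= end_index`; production pipelines typically score valid (start, end) pairs jointly before picking a span.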
{"language": ["en"], "library_name": "transformers", "license": "apache-2.0", "widget": [{"text": "Who did not understand the question?", "context": "Willam sat their quietly, listening to the interviwer. Upon hearing the ridiculous question he leaned forward to answer. Well, I really dont understand your question, he said. It sounded to me more like a statement.", "example_title": "Name"}, {"text": "In what year did the University of Georgia's research expenditures total over $452 million?", "context": "The University of Georgia ranked 25th among public U.S. universities in research expenditures in a recent National Science Foundation survey. In fiscal year 2018, UGA research totaled over $452 million, advancing knowledge in diverse fields from genetics to education.", "example_title": "Research funding"}, {"text": "When was the second satellite for the BeiDou-2 system launched?", "context": "In April 2007, the first satellite of BeiDou-2, namely Compass-M1 (to validate frequencies for the BeiDou-2 constellation) was successfully put into its working orbit. The second BeiDou-2 constellation satellite Compass-G2 was launched on 15 April 2009. On 15 January 2010, the official website of the BeiDou Navigation Satellite System went online, and the system's third satellite (Compass-G1) was carried into its orbit by a Long March 3C rocket on 17 January 2010. On 2 June 2010, the fourth satellite was launched successfully into orbit. The fifth orbiter was launched into space from Xichang Satellite Launch Center by an LM-3I carrier rocket on 1 August 2010. Three months later, on 1 November 2010, the sixth satellite was sent into orbit by LM-3C. Another satellite, the Beidou-2/Compass IGSO-5 (fifth inclined geosynchonous orbit) satellite, was launched from the Xichang Satellite Launch Center by a Long March-3A on 1 December 2011 (UTC).", "example_title": "BeiDou_Navigation_Satellite_System"}]}
task
[ "NAMED_ENTITY_RECOGNITION", "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "SUMMARIZATION" ]
46,199
madeinglasgow/distilbert-base-uncased-finetuned-emotion
madeinglasgow
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-30T15:02:23Z
2023-03-30T15:43:05+00:00
19
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9235 name: Accuracy - type: f1 value: 0.9235366202450886 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2112 - Accuracy: 0.9235 - F1: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7976 | 1.0 | 250 | 0.3068 | 0.9025 | 0.8993 | | 0.2393 | 2.0 | 500 | 0.2112 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu116 - Datasets 2.11.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2112 - Accuracy: 0.9235 - F1: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7976 | 1.0 | 250 | 0.3068 | 0.9025 | 0.8993 | | 0.2393 | 2.0 | 500 | 0.2112 | 0.9235 | 0.9235 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu116 - Datasets 2.11.0 - Tokenizers 0.13.2
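The card above reports accuracy and F1 but no inference snippet. At prediction time, the classifier's raw logits are typically converted to a label via softmax and argmax; the sketch below shows only that post-processing step with invented logits. The six-label order shown follows the common `emotion` dataset convention, but it is an assumption here — verify it against the checkpoint's `id2label` config:

```python
import math

# Label order as commonly used by the `emotion` dataset; confirm via model.config.id2label.
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

# Invented logits standing in for a model forward pass.
logits = [0.3, 4.2, 0.1, 0.2, 0.1, 0.4]

# Softmax to probabilities, then argmax to the predicted label.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]
pred = labels[probs.index(max(probs))]
print(pred)  # joy
```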
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9235, "name": "Accuracy"}, {"type": "f1", "value": 0.9235366202450886, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
46,200
Mubin1917/lamini_docs_LoRA_adapter
Mubin1917
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-08-16T02:33:50Z
2024-08-21T23:16:48+00:00
0
0
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded Model: LoRA Adapter - **Developed by:** Mubin1917 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This LoRA adapter is based on the `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit` model and has been fine-tuned on the [**Lamini_docs QnA**](https://huggingface.co/datasets/lamini/lamini_docs) dataset. The fine-tuning process was optimized using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, resulting in a 2x faster training time. ### Training Configuration The model was trained with the following configuration: ```python training_args = TrainingArguments( num_train_epochs=6, per_device_train_batch_size=4, gradient_accumulation_steps=4, per_device_eval_batch_size=4, eval_accumulation_steps=4, warmup_steps=50, learning_rate=2e-4, fp16=not torch.cuda.is_bf16_supported(), bf16=torch.cuda.is_bf16_supported(), eval_steps=25, # Evaluate every 25 steps logging_steps=25, optim="adamw_8bit", weight_decay=0.01, lr_scheduler_type="linear", seed=3407, output_dir="/kaggle/temp/results", report_to="wandb", save_total_limit=1, # Save the best one and the last one metric_for_best_model="val_loss", eval_strategy="steps", load_best_model_at_end=True, ) ``` ### Evaluation Results - **SacreBLEU Test:** Score: **73.55** Detailed Metrics: - Counts: [20894, 19191, 18504, 18029] - Totals: [26214, 26074, 25934, 25794] - Precisions: [79.71%, 73.60%, 71.35%, 69.90%] - Brevity Penalty: **1.0** - System Length: **26214** - Reference Length: **24955** - **BLEU Test:** BLEU Score: **0.767** Detailed Metrics: - Precisions: [79.71%, 73.73%] - Brevity Penalty: **1.0** - Length Ratio: **1.05** - Translation Length: **26299** - Reference Length: **24955** For a detailed comparison between the predicted and actual QnA responses on the test 
dataset, please visit the [evaluation dataset](https://huggingface.co/datasets/Mubin1917/lamini_docs_evaluation). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
null
Non_BioNLP
# Uploaded Model: LoRA Adapter - **Developed by:** Mubin1917 - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This LoRA adapter is based on the `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit` model and has been fine-tuned on the [**Lamini_docs QnA**](https://huggingface.co/datasets/lamini/lamini_docs) dataset. The fine-tuning process was optimized using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, resulting in a 2x faster training time. ### Training Configuration The model was trained with the following configuration: ```python training_args = TrainingArguments( num_train_epochs=6, per_device_train_batch_size=4, gradient_accumulation_steps=4, per_device_eval_batch_size=4, eval_accumulation_steps=4, warmup_steps=50, learning_rate=2e-4, fp16=not torch.cuda.is_bf16_supported(), bf16=torch.cuda.is_bf16_supported(), eval_steps=25, # Evaluate every 25 steps logging_steps=25, optim="adamw_8bit", weight_decay=0.01, lr_scheduler_type="linear", seed=3407, output_dir="/kaggle/temp/results", report_to="wandb", save_total_limit=1, # Save the best one and the last one metric_for_best_model="val_loss", eval_strategy="steps", load_best_model_at_end=True, ) ``` ### Evaluation Results - **SacreBLEU Test:** Score: **73.55** Detailed Metrics: - Counts: [20894, 19191, 18504, 18029] - Totals: [26214, 26074, 25934, 25794] - Precisions: [79.71%, 73.60%, 71.35%, 69.90%] - Brevity Penalty: **1.0** - System Length: **26214** - Reference Length: **24955** - **BLEU Test:** BLEU Score: **0.767** Detailed Metrics: - Precisions: [79.71%, 73.73%] - Brevity Penalty: **1.0** - Length Ratio: **1.05** - Translation Length: **26299** - Reference Length: **24955** For a detailed comparison between the predicted and actual QnA responses on the test dataset, please visit the [evaluation dataset](https://huggingface.co/datasets/Mubin1917/lamini_docs_evaluation). 
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
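The SacreBLEU score reported above can be sanity-checked from its detailed metrics: the corpus score is the brevity penalty times the geometric mean of the n-gram precisions. Using the numbers from the card (small rounding differences are expected, since the listed precisions are themselves rounded):

```python
import math

# Values copied from the evaluation results above.
precisions = [79.71, 73.60, 71.35, 69.90]  # 1- to 4-gram precisions (%)
sys_len, ref_len = 26214, 24955

# Brevity penalty: 1.0 when the system output is at least as long as the reference.
bp = 1.0 if sys_len >= ref_len else math.exp(1 - ref_len / sys_len)

# BLEU = brevity penalty * geometric mean of the n-gram precisions.
score = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(round(score, 2))  # ~73.55, matching the reported score
```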
{"base_model": "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit", "language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"]}
task
[ "TRANSLATION" ]
46,201
MaLA-LM/lucky52-bloom-7b1-no-21
MaLA-LM
text-generation
[ "transformers", "pytorch", "bloom", "text-generation", "generation", "question answering", "instruction tuning", "multilingual", "dataset:MBZUAI/Bactrian-X", "arxiv:2404.04850", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-04T10:35:42Z
2025-04-08T17:04:36+00:00
9
0
--- datasets: - MBZUAI/Bactrian-X language: - multilingual library_name: transformers license: cc-by-nc-4.0 pipeline_tag: text-generation tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository hosts an instruction fine-tuned multilingual BLOOM model, trained using the parallel instruction dataset Bactrian-X in 52 languages. We progressively add one language at a time during instruction fine-tuning, and train 52 models in total. We then evaluate those models on three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details. * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1) * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese * Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja * Training method: full-parameter fine-tuning. ### Usage The model checkpoint should be loaded using the `transformers` library. ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-21") model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-21") ``` ### Citation ``` @inproceedings{ji2025lucky52, title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, author={Shaoxiong Ji and Pinzhen Chen}, year={2025}, booktitle={Proceedings of COLING}, url={https://arxiv.org/abs/2404.04850}, } ```
null
Non_BioNLP
### Model Description This HF repository hosts an instruction fine-tuned multilingual BLOOM model, trained using the parallel instruction dataset Bactrian-X in 52 languages. We progressively add one language at a time during instruction fine-tuning, and train 52 models in total. We then evaluate those models on three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details. * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1) * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese * Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja * Training method: full-parameter fine-tuning. ### Usage The model checkpoint should be loaded using the `transformers` library. ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-21") model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-21") ``` ### Citation ``` @inproceedings{ji2025lucky52, title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, author={Shaoxiong Ji and Pinzhen Chen}, year={2025}, booktitle={Proceedings of COLING}, url={https://arxiv.org/abs/2404.04850}, } ```
{"datasets": ["MBZUAI/Bactrian-X"], "language": ["multilingual"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
46,202