---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2231
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The fact that no customer noticed this major migration to Amazon
S3 Glacier Instant Retrieval was a big win for us. It was a seamless experience
for end users, and we had no production issues during the entire migration. ”
Contact Sales Greater than 99. 99% Outcome | Gaining Insights on AWS to Prioritize
Business Needs 한국어 Snap migrated more than 2 exabytes of data—roughly equivalent
to 1. 5 trillion media files—seamlessly to Amazon S3 Glacier Instant Retrieval
from Amazon S3 Standard-IA. “The fact that no customer noticed this major migration
to Amazon S3 Glacier Instant Retrieval was a big win for us,” says Manoharan.
“It was a seamless experience for Snapchatters, and we had no production issues
during the entire migration. ” As a result of the migration, the company saved
tens of millions of dollars on storage. Snap has configured Amazon S3 in 20 AWS
Regions around the world so that customers anywhere can retrieve data in milliseconds.
The AWS Global Infrastructure is the most secure, extensive, and reliable Global
Cloud Infrastructure for a business’s applications. The global reach of AWS lets
Snap store media closer to the place where Snapchatters are creating it for optimal
performance. Snap is also able to deliver content efficiently using Amazon CloudFront,
a content delivery network service built for high performance, security, and availability.
“We’ve been able to off-load all of the regionalization work and costs to AWS
so that we can focus on developing new features,” says Manoharan. As a result,
Snapchat continues to meet its quarterly cost-optimization goals. Overview | Opportunity
| Solution | Outcome | AWS Services Used 2 exabytes Amazon Simple Storage Service
(Amazon S3) is an object storage service offering industry-leading scalability,
data availability, security, and performance. … In 2016, Snap migrated its data
to AWS. “We chose to migrate to AWS because of its global reach, excellent performance,
and competitive pricing that, in turn, gave us the ability to reinvest in our
business,” says Vijay Manoharan, manager of the media delivery platform team at
Snap. Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers
the lowest-cost storage for long-lived data that is rarely accessed and requires
retrieval in milliseconds. AWS Services Used In 2017, Snap migrated one of the
app’s most central features—Snapchat Stories—to Amazon DynamoDB, a fully managed,
serverless, NoSQL database designed to run high-performance applications at virtually
any scale. Using Amazon DynamoDB, the company experienced greater than 99.
sentences:
- How did Snap save tens of millions of dollars on storage as a result of migrating
to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA?
- How has Panasonic Avionics Corporation leveraged Amazon Aurora MySQL-Compatible
Edition and other AWS services to improve the reliability and scalability of its
databases for in-flight entertainment and communications systems?
- How does Ground Truth Plus ensure the quality of image and video captions generated
by human annotators?
- source_sentence: ” 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Customer Stories
/ Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries
are transforming their businesses and delivering on their missions every day using
AWS. Contact our experts and start your own AWS journey today. Outcome | Expanding
Intelligent Features of Virtual Care Amazon Transcribe is an automatic speech
recognition service that makes it easy to add speech to text capabilities to any
application. Learn more » Learn more » It is critical that video visits are secure,
responsive, and reliable. Using AWS helps us provide all this in a performant
and scalable way. " Overview With the Amazon Chime SDK, builders can easily add
real-time voice, video, and messaging powered by machine learning into their applications.
Get Started Beyond traditional use cases, Salesforce is adding capabilities in
medication-therapy management, connectivity for care coordinators, and other approaches
for patient engagement. The company is developing a new feature that will expand
its support of Virtual Care sessions to multiple participants, instead of just
clinician and patient. This will facilitate care-team coordination with multiple
parties in a single meeting. Using AWS, Salesforce circumvented the heavy lifting
that would have been required to build and maintain a video-calling solution from
scratch. Patients self-schedule virtual appointments, coordinate previsit activities,
and conduct virtual visits in a HIPAA-compliant environment. A patient’s appointment
request gets routed to Amazon Chime SDK. Clinicians then review a patient’s intake
form and correlate the patient to a Virtual Care session using Amazon Chime SDK
messaging, which connects providers and patients with secure, scalable messaging
in their web and mobile applications. The Amazon Chime SDK control plane sends
event notifications through a default event bus to Amazon EventBridge, a serverless
event bus that helps organizations receive, filter, transform, route, and deliver
events. Healthcare professionals deliver care over the internet in near real time,
which has significantly reduced no-shows for appointments. “Using Amazon Chime
SDK, we don’t have to worry about the mechanics of the video call,” Daftari says.
“We can focus on features and functions that help differentiate our product in
the marketplace, while also significantly improving our speed to launch. ” Salesforce
further supports accessibility through embedding closed-captioning of video calls
using Amazon Chime SDK live transcription. Amazon Chime SDK sends live audio streams
to Amazon Transcribe, which automatically converts speech to text. Salesforce
Health Cloud customers can use the live transcription capability to display subtitles,
create meeting transcripts, or analyze content.
sentences:
- How did DB Energie use Amazon SageMaker and AWS to enhance the sustainability
and reliability of its power grid operations?
- How did Provectus assist Earth.com in enhancing the AI-powered image recognition
capabilities of EarthSnap and reducing engineering heavy lifting through the implementation
of end-to-end ML pipelines and managed MLOps platform?
- How does Salesforce use AWS services such as Amazon Chime SDK and Amazon Transcribe
to enhance their Virtual Care sessions for healthcare professionals and patients?
- source_sentence: It’s been a great success. ” Overview 93% Validate technical skills
and cloud expertise to grow your career and business. Learn more » Amazon Web
Services (AWS) Education Programs collaborate with education institutions and
the public sector to provide access for individuals to develop cloud computing
and digital skills. To help graduates boost their employability, Staffordshire
University worked with the AWS team to introduce cloud computing skills training
and add cloud courses to its credit-bearing computer science modules. Staffordshire
University offers courses through AWS Academy, which empowers higher education
institutions to prepare students for industry-recognized certifications and careers.
Since the university added AWS Academy courses to its curriculum in 2017, several
hundred students have participated. Of those students, 93 percent have achieved
employment within 6 months of graduation. Empowered students Türkçe Solution |
Learning by Doing Using AWS Learner Labs English With AWS Academy, our students
love that they’re not just taking theory lessons. They get to work in actual environments
with real AWS tools. ” Next up, Staffordshire University is expanding on the success
of its cloud courses by launching additional programs of study developed in collaboration
with the AWS team. Staffordshire University and the AWS team designed these programs
by "Working Backwards" — an Amazon process that encourages companies to brainstorm
solutions by using a customer challenge as the starting point — from the cloud
skills employers are currently seeking in the United Kingdom and across the global
labor market. One of these programs, which launches in September 2022, is a cloud
computing course that features both cloud computing and cybersecurity modules
and will offer students more opportunities to discover what’s possible with the
AWS Cloud. “What we want to encourage is for students to play with AWS services
as well as build confidence with the tools,” says Dr. Champion. to learn remotely
using any hardware and earn AWS Certifications Staffordshire University added
cloud computing skills training to its curriculum using AWS Education Programs,
helping 93 percent of participants find employment within 6 months of graduation.
covering cloud skills AWS Certification during the AWS Educate University Challenge
Deutsch of graduates find jobs within 6 months Tiếng Việt Italiano ไทย Outcome
| Developing New Cloud Coursework About Staffordshire University Staffordshire
University is a public research university in Staffordshire, England. Founded
in 1914, the university serves over 15,000 students across three schools and four
campuses. The United Kingdom has experienced a technology boom in recent years,
with technology funding tripling in the first 6 months of 2021 compared to the
same period in 2020. In particular, employers need professionals with cloud computing
skills ranging from cloud development to machine learning and data analytics.
To meet demand, Staffordshire University offers students their choice of six AWS
courses covering these key skills and more.
sentences:
- How has the collaboration between Staffordshire University and the AWS team impacted
the employability of graduates in the field of cloud computing?
- How can the confidence scores be used to verify the accuracy of sentiment assignments
in the sentiment_results_final table, especially for any dubious sentiment assignments?
- How did migrating to AWS help Travian Games improve the stability and reliability
of their game servers, and what impact did this have on their players' experience?
- source_sentence: Contact our experts and start your own AWS journey today. customer
and agent experience 2022 Overview WaFd Bank Transforms Contact Centers Using
Conversational AI on AWS Customer Stories / Financial Services WaFd uses a data
lake on AWS to store and analyze data from phone and chatbot conversations. “We’re
getting incredible data from AWS through the conversational logs,” says Hubbard.
“That has given us insights into what our customers are asking for so that we
can add more self-service functionality. ” The data also gives WaFd more insight
into call volumes, so the call center can better manage staff schedules. Opportunity
| Using Amazon Lex to Implement an AI-Powered Contact Center Solution Türkçe English
WaFd is a US retail and commercial bank with over 200 branches in eight states.
In 2019, WaFd founded subsidiary Pike Street Labs, a fintech startup, to drive
client-facing digital innovation for the bank. “Banks need to meet customers’
digital expectations,” says Dustin Hubbard, chief technology officer at WaFd Bank
and Pike Street Labs. “Every year, customers expect more innovation because that’s
what they see from new entrants or in other markets. ” Pike Street Labs redesigned
WaFd’s online banking solution to provide personalized customer experiences and
began tackling the bank’s customer care center. The company’s previous contact
center solution used dated technology with limited features spread across disparate
systems. This led to long wait times for customers and frustration for agents,
who had to answer incoming calls without prior knowledge of what the customer
needed. Agents also bore the burden of identifying fraudulent calls. WaFd needed
a solution to improve both the customer and agent experiences. Previously, WaFd
used two different systems in its customer care center to manage its voice and
chat-based customer interactions, with no way for one system to recognize that
an agent was busy on the other. Chat messages remained unanswered because agents
would forget to sign in to the chat system. The company implemented chatbots and
voice bots powered by Amazon Lex. Now, the call and chat systems are interoperable,
and chats can be escalated to agent assisted calls when needed. When a call gets
passed to an agent, the system also passes the full chat record and an analysis
of the customer’s tone so that the agent is prepared to address the client’s needs
and be empathetic toward the caller’s sentiment. WaFd worked with the AWS and
Talkdesk teams to create and launch its new contact center solution in July 2022.
sentences:
- How did Yellow Class optimize its video files and improve performance using AWS
services such as AWS Elemental MediaConvert?
- How has FanDuel ensured the redundancy and reliability of its live video streams
through the use of AWS Elemental MediaConnect and AWS Elemental MediaLive?
- How did WaFd Bank use data from phone and chatbot conversations stored in a data
lake on AWS to improve self-service functionality and better manage call center
staff schedules?
- source_sentence: 'Alternatively, you can run the inference via code. Here is one
example written in Python, using the requests library: import requests url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>.
execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. com/prod/question?question=\"What
is the color of my car now?\"&context=\"My car used to be blue but I painted red\""
response = requests. request("GET", url, headers=headers, data=payload) print(response.
text) The code outputs a string similar to the following: ''{"score":0. 6947233080863953,"start":38,"end":41,"answer":"red"}''
If you are interested in knowing more about deploying Generative AI and large
language models on AWS, check out here: Deploy Serverless Generative AI on AWS
Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large
model inference containers Clean up Inside the root directory of your repository,
run the following code to clean up your resources: make destroy Conclusion In
this post, we introduced how you can use Lambda to deploy your trained ML model
using your preferred web application framework, such as FastAPI. We provided a
detailed code repository that you can deploy, and you retain the flexibility of
switching to whichever trained model artifacts you process. The performance can
depend on how you implement and deploy the model. You are welcome to try it out
yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li
is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting
the Nordics customers. She enjoys helping customers with the architecture, design,
and development of cloud-optimized infrastructure solutions. She is specialized
in AI and Machine Learning and is interested in empowering customers with intelligence
in their AI/ML applications. In her spare time, she is also a part-time illustrator
who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer
from AWS based in Zurich, Switzerland. He engages with customers and helps them
implement scalable and fully-functional ML applications. He is passionate about
building and productionizing machine learning applications for customers and is
always keen to explore around new trends and cutting-edge technologies in the
AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments
Resources Getting Started What''s New Blog Topics Amazon Comprehend Amazon Kendra
Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow
Twitter Facebook LinkedIn Twitch Email Updates.'
sentences:
- How did ALTBalaji use AWS Elemental MediaLive to handle a tenfold increase in
viewership during the live streaming of Lock Upp, and what insights did they gain
from this experience?
- How has PayEye been able to accelerate their development process and enter the
production phase within a few months using AWS services, and what impact has this
had on their recruitment efforts and team focus?
- How can Lambda be used to deploy trained ML models using a preferred web application
framework?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5120967741935484
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8266129032258065
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9233870967741935
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9637096774193549
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5120967741935484
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2755376344086021
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18467741935483872
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09637096774193549
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5120967741935484
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8266129032258065
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9233870967741935
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9637096774193549
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7538879073840729
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6844038018433181
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6858592666542238
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.532258064516129
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8225806451612904
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9193548387096774
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.967741935483871
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.532258064516129
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27419354838709675
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18387096774193548
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09677419354838711
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.532258064516129
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8225806451612904
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9193548387096774
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.967741935483871
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7596718979684643
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6912602406554021
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6924236134719179
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5241935483870968
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8225806451612904
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9193548387096774
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9596774193548387
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5241935483870968
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27419354838709675
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1838709677419355
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0959677419354839
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5241935483870968
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8225806451612904
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9193548387096774
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9596774193548387
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7527772429981233
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6846406169994881
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6862769216923534
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.4959677419354839
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7903225806451613
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8911290322580645
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9556451612903226
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4959677419354839
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26344086021505375
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17822580645161293
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09556451612903227
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4959677419354839
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7903225806451613
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8911290322580645
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9556451612903226
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.73375586078758
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6613495263696876
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6630698645438532
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.4475806451612903
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7661290322580645
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8790322580645161
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9475806451612904
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4475806451612903
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2553763440860215
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17580645161290326
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09475806451612903
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4475806451612903
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7661290322580645
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8790322580645161
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9475806451612904
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7052651530890945
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6260768689196109
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6277483838406475
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
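The pooling configuration above selects the `[CLS]` token embedding (`pooling_mode_cls_token: True`) and the final `Normalize()` module L2-normalizes it. A minimal sketch of that pipeline on dummy transformer output, assuming a hidden size of 768 (this illustrates the pooling math only, not the library's actual modules):

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling followed by L2 normalization: take the first token's
    embedding per sequence, then scale each vector to unit length."""
    cls = token_embeddings[:, 0, :]  # (batch, hidden)
    return cls / np.linalg.norm(cls, axis=1, keepdims=True)

# Dummy transformer output: batch of 2 sequences, 5 tokens, hidden size 768.
tokens = np.random.default_rng(1).normal(size=(2, 5, 768))
sent_emb = cls_pool_and_normalize(tokens)
print(sent_emb.shape)  # (2, 768)
```

Because the output vectors are unit-norm, cosine similarity between two sentence embeddings reduces to a plain dot product.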
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anishareddyalla/bge-base-matryoshka-aws-casestudies")
# Run inference
sentences = [
    'Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library: import requests url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>. execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. com/prod/question?question=\\"What is the color of my car now?\\"&context=\\"My car used to be blue but I painted red\\"" response = requests. request("GET", url, headers=headers, data=payload) print(response. text) The code outputs a string similar to the following: \'{"score":0. 6947233080863953,"start":38,"end":41,"answer":"red"}\' If you are interested in knowing more about deploying Generative AI and large language models on AWS, check out here: Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large model inference containers Clean up Inside the root directory of your repository, run the following code to clean up your resources: make destroy Conclusion In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully-functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore around new trends and cutting-edge technologies in the AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments Resources Getting Started What\'s New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow Twitter Facebook LinkedIn Twitch Email Updates.',
'How can Lambda be used to deploy trained ML models using a preferred web application framework?',
'How has PayEye been able to accelerate their development process and enter the production phase within a few months using AWS services, and what impact has this had on their recruitment efforts and team focus?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
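This model was trained with `MatryoshkaLoss` and evaluated at 768, 512, 256, 128, and 64 dimensions (see the metrics below), so its embeddings can be truncated to a prefix and re-normalized with only a modest quality drop. A minimal sketch of that truncation step, using random unit vectors as a stand-in for `model.encode(...)` output:

```python
import numpy as np

def truncate_and_normalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length,
    which is how Matryoshka embeddings are used at reduced dimensionality."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Stand-in for model.encode(sentences): 3 unit-norm vectors of size 768.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

emb_256 = truncate_and_normalize(emb, 256)
print(emb_256.shape)               # (3, 256)
print((emb_256 @ emb_256.T).shape)  # (3, 3) cosine-similarity matrix
```

Recent versions of Sentence Transformers also accept a `truncate_dim` argument, e.g. `SentenceTransformer("anishareddyalla/bge-base-matryoshka-aws-casestudies", truncate_dim=256)`; check your installed version's documentation, as older releases may not support it.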
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5121 |
| cosine_accuracy@3 | 0.8266 |
| cosine_accuracy@5 | 0.9234 |
| cosine_accuracy@10 | 0.9637 |
| cosine_precision@1 | 0.5121 |
| cosine_precision@3 | 0.2755 |
| cosine_precision@5 | 0.1847 |
| cosine_precision@10 | 0.0964 |
| cosine_recall@1 | 0.5121 |
| cosine_recall@3 | 0.8266 |
| cosine_recall@5 | 0.9234 |
| cosine_recall@10 | 0.9637 |
| cosine_ndcg@10 | 0.7539 |
| cosine_mrr@10 | 0.6844 |
| **cosine_map@100** | **0.6859** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5323 |
| cosine_accuracy@3 | 0.8226 |
| cosine_accuracy@5 | 0.9194 |
| cosine_accuracy@10 | 0.9677 |
| cosine_precision@1 | 0.5323 |
| cosine_precision@3 | 0.2742 |
| cosine_precision@5 | 0.1839 |
| cosine_precision@10 | 0.0968 |
| cosine_recall@1 | 0.5323 |
| cosine_recall@3 | 0.8226 |
| cosine_recall@5 | 0.9194 |
| cosine_recall@10 | 0.9677 |
| cosine_ndcg@10 | 0.7597 |
| cosine_mrr@10 | 0.6913 |
| **cosine_map@100** | **0.6924** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5242 |
| cosine_accuracy@3 | 0.8226 |
| cosine_accuracy@5 | 0.9194 |
| cosine_accuracy@10 | 0.9597 |
| cosine_precision@1 | 0.5242 |
| cosine_precision@3 | 0.2742 |
| cosine_precision@5 | 0.1839 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.5242 |
| cosine_recall@3 | 0.8226 |
| cosine_recall@5 | 0.9194 |
| cosine_recall@10 | 0.9597 |
| cosine_ndcg@10 | 0.7528 |
| cosine_mrr@10 | 0.6846 |
| **cosine_map@100** | **0.6863** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.496 |
| cosine_accuracy@3 | 0.7903 |
| cosine_accuracy@5 | 0.8911 |
| cosine_accuracy@10 | 0.9556 |
| cosine_precision@1 | 0.496 |
| cosine_precision@3 | 0.2634 |
| cosine_precision@5 | 0.1782 |
| cosine_precision@10 | 0.0956 |
| cosine_recall@1 | 0.496 |
| cosine_recall@3 | 0.7903 |
| cosine_recall@5 | 0.8911 |
| cosine_recall@10 | 0.9556 |
| cosine_ndcg@10 | 0.7338 |
| cosine_mrr@10 | 0.6613 |
| **cosine_map@100** | **0.6631** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4476 |
| cosine_accuracy@3 | 0.7661 |
| cosine_accuracy@5 | 0.879 |
| cosine_accuracy@10 | 0.9476 |
| cosine_precision@1 | 0.4476 |
| cosine_precision@3 | 0.2554 |
| cosine_precision@5 | 0.1758 |
| cosine_precision@10 | 0.0948 |
| cosine_recall@1 | 0.4476 |
| cosine_recall@3 | 0.7661 |
| cosine_recall@5 | 0.879 |
| cosine_recall@10 | 0.9476 |
| cosine_ndcg@10 | 0.7053 |
| cosine_mrr@10 | 0.6261 |
| **cosine_map@100** | **0.6277** |
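The five tables above come from the same evaluator; only the embedding dimensionality changes. As a rough sketch of what is being measured (illustrative only, not the evaluator's actual code; it assumes exactly one relevant document per query, as in this dataset, and shows the Matryoshka-style prefix truncation used for the lower-dimensional runs):

```python
import numpy as np

def truncate(emb, dim):
    # Matryoshka property: the first `dim` coordinates are themselves a
    # usable embedding once re-normalized (how dim_512..dim_64 are scored).
    t = emb[:, :dim]
    return t / np.linalg.norm(t, axis=1, keepdims=True)

def ir_metrics(query_emb, doc_emb, relevant_idx, k=10):
    # Rank every document for each query by cosine similarity, then check
    # where the single relevant document landed in that ranking.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    ranking = np.argsort(-(q @ d.T), axis=1)           # best match first
    hits, reciprocal_ranks = [], []
    for i, rel in enumerate(relevant_idx):
        rank = int(np.where(ranking[i] == rel)[0][0])  # 0-based position
        hits.append(rank < k)
        reciprocal_ranks.append(1.0 / (rank + 1) if rank < k else 0.0)
    return {
        "accuracy@k": float(np.mean(hits)),
        "mrr@k": float(np.mean(reciprocal_ranks)),
    }
```

Truncating to a prefix and re-normalizing is what makes the dim_64 scores directly comparable to dim_768: the same model serves all five operating points.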
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,231 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 430.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 33.49 tokens</li><li>max: 65 tokens</li></ul> |
* Samples:
| positive | anchor |
  |:---------|:-------|
  | <code>TCSG is helping students enter a competitive workforce as educated cloud professionals and providing opportunities for success. TCSG built its Cloud Academy using AWS Academy, which provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. TCSG launched the TCSG Cloud Academy in two forms: one as a specialization within an existing associate’s degree and the second as a stand-alone technical certificate of credit. For the technical certificate of credit, students who have existing degrees can enter the curriculum to focus on cloud computing and participate in hands-on cloud experiences using AWS services. Tiếng Việt Italiano ไทย The Technical College System of Georgia is the state government agency that supervises workforce development of more than 294,000 students across 22 technical colleges, 88 campuses, and more than 600 programs. Using the AWS curriculum and technology as the foundation for its courses, TCSG is preparing students to earn industry-recognized AWS Certifications to increase employability while improving accessibility to cloud education by offering the academy virtually and remotely. Learn more » TCSG is the state of Georgia government agency that supervises workforce development of hundreds of thousands of students across 22 technical colleges, 88 campuses, and more than 600 programs. The agency aims to run a system of technical education using the latest technology that’s accessible to all adults and corporate citizens in the state. To develop and deploy its new cloud-focused curriculum, it worked with AWS Education Programs, which helps TCSG institutions develop initiatives that align education to careers in the cloud and promote student employability, preparing diverse learners for in-demand cloud roles around the world. In 2020, the organization officially launched the TCSG Cloud Academy—a virtual program for students pursuing expertise and certifications in cloud computing—on its eCampus virtual learning system. Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Português.</code> | <code>How has the use of AWS Academy by TCSG helped prepare students for pursuing industry-recognized certifications and in-demand cloud jobs in Georgia's workforce?</code> |
| <code>This prompt is then provided to the LLM for generating an answer to the user question. @router. post("/rag") async def rag_handler(req: Request) -> Dict[str, Any]: # dump the received request for debugging purposes logger. info(f"req={req}") # initialize vector db and SageMaker Endpoint _init(req) # Use the vector db to find similar documents to the query # the vector db call would automatically convert the query text # into embeddings docs = _vector_db. similarity_search(req. q, k=req. max_matching_docs) logger. info(f"here are the {req. max_matching_docs} closest matching docs to the query=\"{req. q}\"") for d in docs: logger. info(f"---------") logger. info(d) logger. info(f"---------") # now that we have the matching docs, lets pack them as a context # into the prompt and ask the LLM to generate a response prompt_template = """Answer based on context:\n\n{context}\n\n{question}""" prompt = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) logger. info(f"prompt sent to llm = \"{prompt}\"") chain = load_qa_chain(llm=_sm_llm, prompt=prompt) answer = chain({"input_documents": docs, "question": req. q}, return_only_outputs=True)['output_text'] logger. info(f"answer received from llm,\nquestion: \"{req. q}\"\nanswer: \"{answer}\"") resp = {'question': req. q, 'answer': answer} if req. verbose is True: resp['docs'] = docs return resp Clean up To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack as shown in the following screenshot.</code> | <code>What resources need to be deleted to avoid future charges, and how can they be deleted?</code> |
  | <code>append(input_1_s3_location) async_response = base_model_predictor. predict_async(input_path=input_1_s3_location) output_locations. append(async_response. output_path) if i > max_images: break This may take up to 30 minutes or more depending on how much data you have uploaded for asynchronous inference. You can visualize one of these inferences as follows: plot_response('data/single. out') Convert the asynchronous inference output to a Ground Truth input manifest In this step, we create an input manifest for a bounding box verification job on Ground Truth. We upload the Ground Truth UI template and label categories file, and create the verification job. The notebook linked to this post uses a private workforce to perform the labeling; you can change this if you’re using other types of workforces. For more details, refer to the full code in the notebook. Verify labels from the auto-labeling process in Ground Truth In this step, we complete the verification by accessing the labeling portal. For more details, refer to here. When you access the portal as a workforce member, you will be able to see the bounding boxes created by the JumpStart model and make adjustments as required. You can use this template to repeat auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. Clean up In this step, we clean up by deleting the endpoint and the model created in previous steps: # Delete the SageMaker endpoint base_model_predictor. delete_model() base_model_predictor. delete_endpoint() Conclusion In this post, we walked through an auto-labeling process involving JumpStart and asynchronous inference. We used the results of the auto-labeling process to convert and visualize labeled data on a real-world dataset. You can use the solution to perform auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. You can also explore using tools like the Segment Anything Model for generating segment masks as part of the auto-labeling process. In future posts in this series, we will cover the perception module and segmentation.</code> | <code>How can you visualize the inferences generated by the asynchronous inference process using the provided solution?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
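Put concretely: MultipleNegativesRankingLoss treats each anchor's paired positive as the correct match and every other positive in the batch as a negative, and MatryoshkaLoss evaluates that inner loss on each truncated prefix of the embeddings and sums the results with the weights above (all 1). A minimal numpy sketch of this combination, illustrating the objective rather than reproducing the library's implementation:

```python
import numpy as np

def mnrl_loss(anchors, positives, scale=20.0):
    # In-batch negatives: anchor i's positive is row i of `positives`;
    # every other row serves as a negative. Cross-entropy over scaled
    # cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_softmax)))

def matryoshka_mnrl_loss(anchors, positives,
                         dims=(768, 512, 256, 128, 64),
                         weights=(1, 1, 1, 1, 1)):
    # Apply the inner loss to each truncated prefix of the embeddings and
    # combine with the configured weights (all 1 in the config above).
    return sum(w * mnrl_loss(anchors[:, :d], positives[:, :d])
               for w, d in zip(weights, dims))
```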
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9143 | 4 | - | 0.6663 | 0.6851 | 0.7027 | 0.6120 | 0.6998 |
| **1.8286** | **8** | **-** | **0.6758** | **0.6822** | **0.6966** | **0.6311** | **0.6941** |
| 2.2857 | 10 | 1.883 | - | - | - | - | - |
| 2.9714 | 13 | - | 0.6631 | 0.6881 | 0.6904 | 0.6245 | 0.6873 |
| 3.6571 | 16 | - | 0.6631 | 0.6863 | 0.6924 | 0.6277 | 0.6859 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anishareddyalla/bge-base-matryoshka-aws-casestudies")
# Run inference
sentences = [
'Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library: import requests url = "https://<YOUR_API_GATEWAY_ENDPOINT_ID>. execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. com/prod/question?question=\\"What is the color of my car now?\\"&context=\\"My car used to be blue but I painted red\\"" response = requests. request("GET", url, headers=headers, data=payload) print(response. text) The code outputs a string similar to the following: \'{"score":0. 6947233080863953,"start":38,"end":41,"answer":"red"}\' If you are interested in knowing more about deploying Generative AI and large language models on AWS, check out here: Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large model inference containers Clean up Inside the root directory of your repository, run the following code to clean up your resources: make destroy Conclusion In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. 
He engages with customers and helps them implement scalable and fully-functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore around new trends and cutting-edge technologies in the AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments Resources Getting Started What\'s New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow Twitter Facebook LinkedIn Twitch Email Updates.',
'How can Lambda be used to deploy trained ML models using a preferred web application framework?',
'How has PayEye been able to accelerate their development process and enter the production phase within a few months using AWS services, and what impact has this had on their recruitment efforts and team focus?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5121 |
| cosine_accuracy@3 | 0.8266 |
| cosine_accuracy@5 | 0.9234 |
| cosine_accuracy@10 | 0.9637 |
| cosine_precision@1 | 0.5121 |
| cosine_precision@3 | 0.2755 |
| cosine_precision@5 | 0.1847 |
| cosine_precision@10 | 0.0964 |
| cosine_recall@1 | 0.5121 |
| cosine_recall@3 | 0.8266 |
| cosine_recall@5 | 0.9234 |
| cosine_recall@10 | 0.9637 |
| cosine_ndcg@10 | 0.7539 |
| cosine_mrr@10 | 0.6844 |
| **cosine_map@100** | **0.6859** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5323 |
| cosine_accuracy@3 | 0.8226 |
| cosine_accuracy@5 | 0.9194 |
| cosine_accuracy@10 | 0.9677 |
| cosine_precision@1 | 0.5323 |
| cosine_precision@3 | 0.2742 |
| cosine_precision@5 | 0.1839 |
| cosine_precision@10 | 0.0968 |
| cosine_recall@1 | 0.5323 |
| cosine_recall@3 | 0.8226 |
| cosine_recall@5 | 0.9194 |
| cosine_recall@10 | 0.9677 |
| cosine_ndcg@10 | 0.7597 |
| cosine_mrr@10 | 0.6913 |
| **cosine_map@100** | **0.6924** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5242 |
| cosine_accuracy@3 | 0.8226 |
| cosine_accuracy@5 | 0.9194 |
| cosine_accuracy@10 | 0.9597 |
| cosine_precision@1 | 0.5242 |
| cosine_precision@3 | 0.2742 |
| cosine_precision@5 | 0.1839 |
| cosine_precision@10 | 0.096 |
| cosine_recall@1 | 0.5242 |
| cosine_recall@3 | 0.8226 |
| cosine_recall@5 | 0.9194 |
| cosine_recall@10 | 0.9597 |
| cosine_ndcg@10 | 0.7528 |
| cosine_mrr@10 | 0.6846 |
| **cosine_map@100** | **0.6863** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.496 |
| cosine_accuracy@3 | 0.7903 |
| cosine_accuracy@5 | 0.8911 |
| cosine_accuracy@10 | 0.9556 |
| cosine_precision@1 | 0.496 |
| cosine_precision@3 | 0.2634 |
| cosine_precision@5 | 0.1782 |
| cosine_precision@10 | 0.0956 |
| cosine_recall@1 | 0.496 |
| cosine_recall@3 | 0.7903 |
| cosine_recall@5 | 0.8911 |
| cosine_recall@10 | 0.9556 |
| cosine_ndcg@10 | 0.7338 |
| cosine_mrr@10 | 0.6613 |
| **cosine_map@100** | **0.6631** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4476 |
| cosine_accuracy@3 | 0.7661 |
| cosine_accuracy@5 | 0.879 |
| cosine_accuracy@10 | 0.9476 |
| cosine_precision@1 | 0.4476 |
| cosine_precision@3 | 0.2554 |
| cosine_precision@5 | 0.1758 |
| cosine_precision@10 | 0.0948 |
| cosine_recall@1 | 0.4476 |
| cosine_recall@3 | 0.7661 |
| cosine_recall@5 | 0.879 |
| cosine_recall@10 | 0.9476 |
| cosine_ndcg@10 | 0.7053 |
| cosine_mrr@10 | 0.6261 |
| **cosine_map@100** | **0.6277** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,231 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 430.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 33.49 tokens</li><li>max: 65 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>TCSG is helping students enter a competitive workforce as educated cloud professionals and providing opportunities for success. TCSG built its Cloud Academy using AWS Academy, which provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. TCSG launched the TCSG Cloud Academy in two forms: one as a specialization within an existing associate’s degree and the second as a stand-alone technical certificate of credit. For the technical certificate of credit, students who have existing degrees can enter the curriculum to focus on cloud computing and participate in hands-on cloud experiences using AWS services. Tiếng Việt Italiano ไทย The Technical College System of Georgia is the state government agency that supervises workforce development of more than 294,000 students across 22 technical colleges, 88 campuses, and more than 600 programs. Using the AWS curriculum and technology as the foundation for its courses, TCSG is preparing students to earn industry-recognized AWS Certifications to increase employability while improving accessibility to cloud education by offering the academy virtually and remotely. Learn more » TCSG is the state of Georgia government agency that supervises workforce development of hundreds of thousands of students across 22 technical colleges, 88 campuses, and more than 600 programs. The agency aims to run a system of technical education using the latest technology that’s accessible to all adults and corporate citizens in the state. To develop and deploy its new cloud-focused curriculum, it worked with AWS Education Programs, which helps TCSG institutions develop initiatives that align education to careers in the cloud and promote student employability, preparing diverse learners for in-demand cloud roles around the world. In 2020, the organization officially launched the TCSG Cloud Academy—a virtual program for students pursuing expertise and certifications in cloud computing—on its eCampus virtual learning system. Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Português.</code> | <code>How has the use of AWS Academy by TCSG helped prepare students for pursuing industry-recognized certifications and in-demand cloud jobs in Georgia's workforce?</code> |
| <code>This prompt is then provided to the LLM for generating an answer to the user question. @router. post("/rag") async def rag_handler(req: Request) -> Dict[str, Any]: # dump the received request for debugging purposes logger. info(f"req={req}") # initialize vector db and SageMaker Endpoint _init(req) # Use the vector db to find similar documents to the query # the vector db call would automatically convert the query text # into embeddings docs = _vector_db. similarity_search(req. q, k=req. max_matching_docs) logger. info(f"here are the {req. max_matching_docs} closest matching docs to the query=\"{req. q}\"") for d in docs: logger. info(f"---------") logger. info(d) logger. info(f"---------") # now that we have the matching docs, lets pack them as a context # into the prompt and ask the LLM to generate a response prompt_template = """Answer based on context:\n\n{context}\n\n{question}""" prompt = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) logger. info(f"prompt sent to llm = \"{prompt}\"") chain = load_qa_chain(llm=_sm_llm, prompt=prompt) answer = chain({"input_documents": docs, "question": req. q}, return_only_outputs=True)['output_text'] logger. info(f"answer received from llm,\nquestion: \"{req. q}\"\nanswer: \"{answer}\"") resp = {'question': req. q, 'answer': answer} if req. verbose is True: resp['docs'] = docs return resp Clean up To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack as shown in the following screenshot.</code> | <code>What resources need to be deleted to avoid future charges, and how can they be deleted?</code> |
| <code>append(input_1_s3_location) async_response = base_model_predictor. predict_async(input_path=input_1_s3_location) output_locations. append(async_response. output_path) if i > max_images: break This may take up to 30 minutes or more depending on how much data you have uploaded for asynchronous inference. You can visualize one of these inferences as follows: plot_response('data/single. out') Convert the asynchronous inference output to a Ground Truth input manifest In this step, we create an input manifest for a bounding box verification job on Ground Truth. We upload the Ground Truth UI template and label categories file, and create the verification job. The notebook linked to this post uses a private workforce to perform the labeling; you can change this if you’re using other types of workforces. For more details, refer to the full code in the notebook. Verify labels from the auto-labeling process in Ground Truth In this step, we complete the verification by accessing the labeling portal. For more details, refer to here. When you access the portal as a workforce member, you will be able to see the bounding boxes created by the JumpStart model and make adjustments as required. You can use this template to repeat auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. Clean up In this step, we clean up by deleting the endpoint and the model created in previous steps: # Delete the SageMaker endpoint base_model_predictor. delete_model() base_model_predictor. delete_endpoint() Conclusion In this post, we walked through an auto-labeling process involving JumpStart and asynchronous inference. We used the results of the auto-labeling process to convert and visualize labeled data on a real-world dataset. You can use the solution to perform auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. You can also explore using tools like the Segment Anything Model for generating segment masks as part of the auto-labeling process. In future posts in this series, we will cover the perception module and segmentation.</code> | <code>How can you visualize the inferences generated by the asynchronous inference process using the provided solution?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
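The configuration above can be sketched in plain NumPy (an illustrative sketch, not the sentence-transformers implementation): `MultipleNegativesRankingLoss` is an in-batch cross-entropy over scaled cosine similarities, and `MatryoshkaLoss` reapplies that base loss to truncated prefixes of the embeddings, combining the per-dimension losses with the weights listed above. The similarity scale of 20 is the library default and is assumed here.

```python
import numpy as np

def mnr_loss(q, d, scale=20.0):
    # In-batch MultipleNegativesRankingLoss: cross-entropy over the scaled
    # cosine-similarity matrix; the diagonal entries are the positive pairs.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    logits = scale * (q @ d.T)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()

def matryoshka_loss(q, d,
                    dims=(768, 512, 256, 128, 64),
                    weights=(1, 1, 1, 1, 1)):
    # Weighted sum of the base loss over truncated embedding prefixes.
    return sum(w * mnr_loss(q[:, :k], d[:, :k]) for k, w in zip(dims, weights))
```

Because every weight is 1, each truncation level contributes equally to the objective, which is what allows the trained model's embeddings to be truncated to any of the listed dimensions at inference time.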
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
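Note that the effective batch size is 512 (per-device batch size 32 with 16 gradient accumulation steps). The `cosine` scheduler with `warmup_ratio: 0.1` ramps the learning rate linearly up to 2e-5 over the first 10% of steps and then decays it along a half cosine toward zero; a minimal reimplementation (an illustrative sketch, not the transformers scheduler itself):

```python
import math

def lr_at(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    # Linear warmup over the first warmup_ratio of steps, then cosine decay
    # to zero (mirrors lr_scheduler_type=cosine with warmup_ratio=0.1).
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```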
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9143 | 4 | - | 0.6663 | 0.6851 | 0.7027 | 0.6120 | 0.6998 |
| **1.8286** | **8** | **-** | **0.6758** | **0.6822** | **0.6966** | **0.6311** | **0.6941** |
| 2.2857 | 10 | 1.883 | - | - | - | - | - |
| 2.9714 | 13 | - | 0.6631 | 0.6881 | 0.6904 | 0.6245 | 0.6873 |
| 3.6571 | 16 | - | 0.6631 | 0.6863 | 0.6924 | 0.6277 | 0.6859 |
* The bold row denotes the saved checkpoint.
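The `cosine_map@100` values come from ranking the evaluation corpus by cosine similarity for each query at each truncation dimension. Because every query in this dataset has exactly one relevant passage, average precision at 100 reduces to the reciprocal rank of that passage; a minimal sketch with hypothetical helper names, not the evaluator's own code:

```python
def map_at_k(ranked_ids, relevant_id, k=100):
    # With a single relevant document per query, average precision at k
    # is the reciprocal rank of that document (0.0 if it is not retrieved).
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

def mean_over_queries(per_query_scores):
    # The map@100 reported in the table is the mean of the per-query scores.
    return sum(per_query_scores) / len(per_query_scores)
```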
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
{"base_model": "BAAI/bge-base-en-v1.5", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:2231", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us. It was a seamless experience for end users, and we had no production issues during the entire migration. ” Contact Sales Greater than 99. 99% Outcome | Gaining Insights on AWS to Prioritize Business Needs 한국어 Snap migrated more than 2 exabytes of data—roughly equivalent to 1. 5 trillion media files—seamlessly to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA. “The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us,” says Manoharan. “It was a seamless experience for Snapchatters, and we had no production issues during the entire migration. ” As a result of the migration, the company saved tens of millions of dollars on storage. Snap has configured Amazon S3 in 20 AWS Regions around the world so that customers anywhere can retrieve data in milliseconds. The AWS Global Infrastructure is the most secure, extensive, and reliable Global Cloud Infrastructure for a business’s applications. The global reach of AWS lets Snap store media closer to the place where Snapchatters are creating it for optimal performance. 
Snap is also able to deliver content efficiently using Amazon CloudFront, a content delivery network service built for high performance, security, and availability. “We’ve been able to off-load all of the regionalization work and costs to AWS so that we can focus on developing new features,” says Manoharan. As a result, Snapchat continues to meet its quarterly cost-optimization goals. Overview | Opportunity | Solution | Outcome | AWS Services Used 2 exabytes Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. … In 2016, Snap migrated its data to AWS. “We chose to migrate to AWS because of its global reach, excellent performance, and competitive pricing that, in turn, gave us the ability to reinvest in our business,” says Vijay Manoharan, manager of the media delivery platform team at Snap. Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. AWS Services Used In 2017, Snap migrated one of the app’s most central features—Snapchat Stories—to Amazon DynamoDB, a fully managed, serverless, NoSQL database designed to run high-performance applications at virtually any scale. 
Using Amazon DynamoDB, the company experienced greater than 99.", "sentences": ["How did Snap save tens of millions of dollars on storage as a result of migrating to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA?", "How has Panasonic Avionics Corporation leveraged Amazon Aurora MySQL-Compatible Edition and other AWS services to improve the reliability and scalability of its databases for in-flight entertainment and communications systems?", "How does Ground Truth Plus ensure the quality of image and video captions generated by human annotators?"]}, {"source_sentence": "” 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Outcome | Expanding Intelligent Features of Virtual Care Amazon Transcribe is an automatic speech recognition service that makes it easy to add speech to text capabilities to any application. Learn more » Learn more » It is critical that video visits are secure, responsive, and reliable. Using AWS helps us provide all this in a performant and scalable way. \" Overview With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications. Get Started Beyond traditional use cases, Salesforce is adding capabilities in medication-therapy management, connectivity for care coordinators, and other approaches for patient engagement. The company is developing a new feature that will expand its support of Virtual Care sessions to multiple participants, instead of just clinician and patient. This will facilitate care-team coordination with multiple parties in a single meeting. Using AWS, Salesforce circumvented the heavy lifting that would have been required to build and maintain a video-calling solution from scratch. 
Patients self-schedule virtual appointments, coordinate previsit activities, and conduct virtual visits in a HIPAA-compliant environment. A patient’s appointment request gets routed to Amazon Chime SDK. Clinicians then review a patient’s intake form and correlate the patient to a Virtual Care session using Amazon Chime SDK messaging, which connects providers and patients with secure, scalable messaging in their web and mobile applications. The Amazon Chime SDK control plane sends event notifications through a default event bus to Amazon EventBridge, a serverless event bus that helps organizations receive, filter, transform, route, and deliver events. Healthcare professionals deliver care over the internet in near real time, which has significantly reduced no-shows for appointments. “Using Amazon Chime SDK, we don’t have to worry about the mechanics of the video call,” Daftari says. “We can focus on features and functions that help differentiate our product in the marketplace, while also significantly improving our speed to launch. ” Salesforce further supports accessibility through embedding closed-captioning of video calls using Amazon Chime SDK live transcription. Amazon Chime SDK sends live audio streams to Amazon Transcribe, which automatically converts speech to text. 
Salesforce Health Cloud customers can use the live transcription capability to display subtitles, create meeting transcripts, or analyze content.", "sentences": ["How did DB Energie use Amazon SageMaker and AWS to enhance the sustainability and reliability of its power grid operations?", "How did Provectus assist Earth.com in enhancing the AI-powered image recognition capabilities of EarthSnap and reducing engineering heavy lifting through the implementation of end-to-end ML pipelines and managed MLOps platform?", "How does Salesforce use AWS services such as Amazon Chime SDK and Amazon Transcribe to enhance their Virtual Care sessions for healthcare professionals and patients?"]}, {"source_sentence": "It’s been a great success. ” Overview 93% Validate technical skills and cloud expertise to grow your career and business. Learn more » Amazon Web Services (AWS) Education Programs collaborate with education institutions and the public sector to provide access for individuals to develop cloud computing and digital skills. To help graduates boost their employability, Staffordshire University worked with the AWS team to introduce cloud computing skills training and add cloud courses to its credit-bearing computer science modules. Staffordshire University offers courses through AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers. Since the university added AWS Academy courses to its curriculum in 2017, several hundred students have participated. Of those students, 93 percent have achieved employment within 6 months of graduation. Empowered students Türkçe Solution | Learning by Doing Using AWS Learner Labs English With AWS Academy, our students love that they’re not just taking theory lessons. They get to work in actual environments with real AWS tools. 
” Next up, Staffordshire University is expanding on the success of its cloud courses by launching additional programs of study developed in collaboration with the AWS team. Staffordshire University and the AWS team designed these programs by \"Working Backwards\" — an Amazon process that encourages companies to brainstorm solutions by using a customer challenge as the starting point — from the cloud skills employers are currently seeking in the United Kingdom and across the global labor market. One of these programs, which launches in September 2022, is a cloud computing course that features both cloud computing and cybersecurity modules and will offer students more opportunities to discover what’s possible with the AWS Cloud. “What we want to encourage is for students to play with AWS services as well as build confidence with the tools,” says Dr. Champion. to learn remotely using any hardware and earn AWS Certifications Staffordshire University added cloud computing skills training to its curriculum using AWS Education Programs, helping 93 percent of participants find employment within 6 months of graduation. covering cloud skills AWS Certification during the AWS Educate University Challenge Deutsch of graduates find jobs within 6 months Tiếng Việt Italiano ไทย Outcome | Developing New Cloud Coursework About Staffordshire University Staffordshire University is a public research university in Staffordshire, England. Founded in 1914, the university serves over 15,000 students across three schools and four campuses. The United Kingdom has experienced a technology boom in recent years, with technology funding tripling in the first 6 months of 2021 compared to the same period in 2020. In particular, employers need professionals with cloud computing skills ranging from cloud development to machine learning and data analytics. 
To meet demand, Staffordshire University offers students their choice of six AWS courses covering these key skills and more.", "sentences": ["How has the collaboration between Staffordshire University and the AWS team impacted the employability of graduates in the field of cloud computing?", "How can the confidence scores be used to verify the accuracy of sentiment assignments in the sentiment_results_final table, especially for any dubious sentiment assignments?", "How did migrating to AWS help Travian Games improve the stability and reliability of their game servers, and what impact did this have on their players' experience?"]}, {"source_sentence": "Contact our experts and start your own AWS journey today. customer and agent experience 2022 Overview WaFd Bank Transforms Contact Centers Using Conversational AI on AWS Customer Stories / Financial Services WaFd uses a data lake on AWS to store and analyze data from phone and chatbot conversations. “We’re getting incredible data from AWS through the conversational logs,” says Hubbard. “That has given us insights into what our customers are asking for so that we can add more self-service functionality. ” The data also gives WaFd more insight into call volumes, so the call center can better manage staff schedules. Opportunity | Using Amazon Lex to Implement an AI-Powered Contact Center Solution Türkçe English WaFd is a US retail and commercial bank with over 200 branches in eight states. In 2019, WaFd founded subsidiary Pike Street Labs, a fintech startup, to drive client-facing digital innovation for the bank. “Banks need to meet customers’ digital expectations,” says Dustin Hubbard, chief technology officer at WaFd Bank and Pike Street Labs. “Every year, customers expect more innovation because that’s what they see from new entrants or in other markets. ” Pike Street Labs redesigned WaFd’s online banking solution to provide personalized customer experiences and began tackling the bank’s customer care center. 
The company’s previous contact center solution used dated technology with limited features spread across disparate systems. This led to long wait times for customers and frustration for agents, who had to answer incoming calls without prior knowledge of what the customer needed. Agents also bore the burden of identifying fraudulent calls. WaFd needed a solution to improve both the customer and agent experiences. Previously, WaFd used two different systems in its customer care center to manage its voice and chat-based customer interactions, with no way for one system to recognize that an agent was busy on the other. Chat messages remained unanswered because agents would forget to sign in to the chat system. The company implemented chatbots and voice bots powered by Amazon Lex. Now, the call and chat systems are interoperable, and chats can be escalated to agent assisted calls when needed. When a call gets passed to an agent, the system also passes the full chat record and an analysis of the customer’s tone so that the agent is prepared to address the client’s needs and be empathetic toward the caller’s sentiment. WaFd worked with the AWS and Talkdesk teams to create and launch its new contact center solution in July 2022.", "sentences": ["How did Yellow Class optimize its video files and improve performance using AWS services such as AWS Elemental MediaConvert?", "How has FanDuel ensured the redundancy and reliability of its live video streams through the use of AWS Elemental MediaConnect and AWS Elemental MediaLive?", "How did WaFd Bank use data from phone and chatbot conversations stored in a data lake on AWS to improve self-service functionality and better manage call center staff schedules?"]}, {"source_sentence": "Alternatively, you can run the inference via code. Here is one example written in Python, using the requests library: import requests url = \"https://<YOUR_API_GATEWAY_ENDPOINT_ID>. execute-api. <YOUR_ENDPOINT_REGION>. amazonaws. 
com/prod/question?question=\\\"What is the color of my car now?\\\"&context=\\\"My car used to be blue but I painted red\\\"\" response = requests. request(\"GET\", url, headers=headers, data=payload) print(response. text) The code outputs a string similar to the following: '{\"score\":0. 6947233080863953,\"start\":38,\"end\":41,\"answer\":\"red\"}' If you are interested in knowing more about deploying Generative AI and large language models on AWS, check out here: Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa Deploy large language models on AWS Inferentia2 using large model inference containers Clean up Inside the root directory of your repository, run the following code to clean up your resources: make destroy Conclusion In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we’re excited to hear your feedback! About the Authors Tingyi Li is an Enterprise Solutions Architect from AWS based out in Stockholm, Sweden supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano. Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully-functional ML applications. 
He is passionate about building and productionizing machine learning applications for customers and is always keen to explore around new trends and cutting-edge technologies in the AI/ML world. TAGS: Generative AI , Natural Language Processing Comments View Comments Resources Getting Started What's New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow Twitter Facebook LinkedIn Twitch Email Updates.", "sentences": ["How did ALTBalaji use AWS Elemental MediaLive to handle a tenfold increase in viewership during the live streaming of Lock Upp, and what insights did they gain from this experience?", "How has PayEye been able to accelerate their development process and enter the production phase within a few months using AWS services, and what impact has this had on their recruitment efforts and team focus?", "How can Lambda be used to deploy trained ML models using a preferred web application framework?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5120967741935484, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8266129032258065, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9233870967741935, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9637096774193549, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5120967741935484, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2755376344086021, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.18467741935483872, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09637096774193549, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5120967741935484, "name": 
"Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8266129032258065, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9233870967741935, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9637096774193549, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7538879073840729, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6844038018433181, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6858592666542238, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.532258064516129, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8225806451612904, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9193548387096774, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.967741935483871, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.532258064516129, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27419354838709675, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.18387096774193548, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09677419354838711, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.532258064516129, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8225806451612904, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9193548387096774, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.967741935483871, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7596718979684643, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6912602406554021, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6924236134719179, "name": "Cosine Map@100"}]}, {"task": {"type": 
"information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5241935483870968, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8225806451612904, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9193548387096774, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9596774193548387, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5241935483870968, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27419354838709675, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1838709677419355, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0959677419354839, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5241935483870968, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8225806451612904, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9193548387096774, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9596774193548387, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7527772429981233, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6846406169994881, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6862769216923534, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.4959677419354839, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7903225806451613, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8911290322580645, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9556451612903226, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.4959677419354839, 
"name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.26344086021505375, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17822580645161293, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09556451612903227, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.4959677419354839, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7903225806451613, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8911290322580645, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9556451612903226, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.73375586078758, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6613495263696876, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6630698645438532, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.4475806451612903, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7661290322580645, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8790322580645161, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9475806451612904, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.4475806451612903, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2553763440860215, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17580645161290326, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09475806451612903, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.4475806451612903, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7661290322580645, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8790322580645161, "name": 
"Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9475806451612904, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7052651530890945, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6260768689196109, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6277483838406475, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,317 |
Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"acf",
"an",
"ast",
"ca",
"cbk",
"co",
"crs",
"de",
"egl",
"en",
"es",
"ext",
"fr",
"frm",
"fro",
"frp",
"fur",
"gcf",
"gl",
"ht",
"it",
"kea",
"la",
"lad",
"lij",
"lld",
"lmo",
"lou",
"mfe",
"mo",
"mwl",
"nap",
"oc",
"osp",
"pap",
"pcd",
"pms",
"pt",
"rm",
"ro",
"rup",
"sc",
"scn",
"vec",
"wa",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T07:16:41Z |
2024-10-08T07:16:55+00:00
| 19 | 0 |
---
language:
- acf
- an
- ast
- ca
- cbk
- co
- crs
- de
- egl
- en
- es
- ext
- fr
- frm
- fro
- frp
- fur
- gcf
- gl
- ht
- it
- kea
- la
- lad
- lij
- lld
- lmo
- lou
- mfe
- mo
- mwl
- nap
- oc
- osp
- pap
- pcd
- pms
- pt
- rm
- ro
- rup
- sc
- scn
- vec
- wa
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
model-index:
- name: opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc
results:
- task:
type: translation
name: Translation deu-ast
dataset:
name: flores200-devtest
type: flores200-devtest
args: deu-ast
metrics:
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.53782
name: chr-F
- type: bleu
value: 32.2
name: BLEU
- type: chrf
value: 0.58846
name: chr-F
- type: bleu
value: 37.2
name: BLEU
- type: chrf
value: 0.62803
name: chr-F
- type: bleu
value: 18.7
name: BLEU
- type: chrf
value: 0.46372
name: chr-F
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.56229
name: chr-F
- type: bleu
value: 15.7
name: BLEU
- type: chrf
value: 0.46752
name: chr-F
- type: bleu
value: 25.8
name: BLEU
- type: chrf
value: 0.55344
name: chr-F
- type: bleu
value: 11.8
name: BLEU
- type: chrf
value: 0.40732
name: chr-F
- type: bleu
value: 23.1
name: BLEU
- type: chrf
value: 0.52749
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.49721
name: chr-F
- type: bleu
value: 34.7
name: BLEU
- type: chrf
value: 0.60818
name: chr-F
- type: bleu
value: 31.1
name: BLEU
- type: chrf
value: 0.57873
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.52442
name: chr-F
- type: bleu
value: 16.1
name: BLEU
- type: chrf
value: 0.45629
name: chr-F
- type: bleu
value: 27.8
name: BLEU
- type: chrf
value: 0.59255
name: chr-F
- type: bleu
value: 42.8
name: BLEU
- type: chrf
value: 0.66809
name: chr-F
- type: bleu
value: 49.5
name: BLEU
- type: chrf
value: 0.71001
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.49164
name: chr-F
- type: bleu
value: 36.1
name: BLEU
- type: chrf
value: 0.62349
name: chr-F
- type: bleu
value: 21.3
name: BLEU
- type: chrf
value: 0.5172
name: chr-F
- type: bleu
value: 29.7
name: BLEU
- type: chrf
value: 0.58898
name: chr-F
- type: bleu
value: 11.0
name: BLEU
- type: chrf
value: 0.34963
name: chr-F
- type: bleu
value: 14.8
name: BLEU
- type: chrf
value: 0.43644
name: chr-F
- type: bleu
value: 35.2
name: BLEU
- type: chrf
value: 0.63245
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.56775
name: chr-F
- type: bleu
value: 50.0
name: BLEU
- type: chrf
value: 0.71438
name: chr-F
- type: bleu
value: 41.2
name: BLEU
- type: chrf
value: 0.65373
name: chr-F
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.55784
name: chr-F
- type: bleu
value: 21.0
name: BLEU
- type: chrf
value: 0.49876
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.53904
name: chr-F
- type: bleu
value: 34.5
name: BLEU
- type: chrf
value: 0.60549
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.49119
name: chr-F
- type: bleu
value: 31.3
name: BLEU
- type: chrf
value: 0.57998
name: chr-F
- type: bleu
value: 20.7
name: BLEU
- type: chrf
value: 0.52018
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.5647
name: chr-F
- type: bleu
value: 11.2
name: BLEU
- type: chrf
value: 0.38741
name: chr-F
- type: bleu
value: 13.6
name: BLEU
- type: chrf
value: 0.4318
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.58268
name: chr-F
- type: bleu
value: 23.6
name: BLEU
- type: chrf
value: 0.51029
name: chr-F
- type: bleu
value: 37.5
name: BLEU
- type: chrf
value: 0.6254
name: chr-F
- type: bleu
value: 32.7
name: BLEU
- type: chrf
value: 0.59255
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.53001
name: chr-F
- type: bleu
value: 17.9
name: BLEU
- type: chrf
value: 0.47645
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.55369
name: chr-F
- type: bleu
value: 36.4
name: BLEU
- type: chrf
value: 0.61981
name: chr-F
- type: bleu
value: 40.4
name: BLEU
- type: chrf
value: 0.64654
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.50078
name: chr-F
- type: bleu
value: 31.1
name: BLEU
- type: chrf
value: 0.58336
name: chr-F
- type: bleu
value: 18.0
name: BLEU
- type: chrf
value: 0.48834
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.56077
name: chr-F
- type: bleu
value: 13.6
name: BLEU
- type: chrf
value: 0.42451
name: chr-F
- type: bleu
value: 13.4
name: BLEU
- type: chrf
value: 0.43715
name: chr-F
- type: bleu
value: 28.1
name: BLEU
- type: chrf
value: 0.57143
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.52192
name: chr-F
- type: bleu
value: 34.2
name: BLEU
- type: chrf
value: 0.59962
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.53772
name: chr-F
- type: bleu
value: 18.8
name: BLEU
- type: chrf
value: 0.48882
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.49512
name: chr-F
- type: bleu
value: 23.1
name: BLEU
- type: chrf
value: 0.53968
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.57461
name: chr-F
- type: bleu
value: 16.1
name: BLEU
- type: chrf
value: 0.45785
name: chr-F
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.52933
name: chr-F
- type: bleu
value: 13.0
name: BLEU
- type: chrf
value: 0.44627
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.53063
name: chr-F
- type: bleu
value: 10.2
name: BLEU
- type: chrf
value: 0.39784
name: chr-F
- type: bleu
value: 17.4
name: BLEU
- type: chrf
value: 0.49293
name: chr-F
- type: bleu
value: 17.7
name: BLEU
- type: chrf
value: 0.46595
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.56138
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.53609
name: chr-F
- type: bleu
value: 13.3
name: BLEU
- type: chrf
value: 0.44898
name: chr-F
- task:
type: translation
name: Translation deu-ast
dataset:
name: flores101-devtest
type: flores_101
args: deu ast devtest
metrics:
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.5323
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.58466
name: chr-F
- type: bleu
value: 36.5
name: BLEU
- type: chrf
value: 0.6237
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.55693
name: chr-F
- type: bleu
value: 22.3
name: BLEU
- type: chrf
value: 0.52253
name: chr-F
- type: bleu
value: 34.8
name: BLEU
- type: chrf
value: 0.60688
name: chr-F
- type: bleu
value: 30.3
name: BLEU
- type: chrf
value: 0.57333
name: chr-F
- type: bleu
value: 42.5
name: BLEU
- type: chrf
value: 0.66607
name: chr-F
- type: bleu
value: 48.8
name: BLEU
- type: chrf
value: 0.70492
name: chr-F
- type: bleu
value: 10.7
name: BLEU
- type: chrf
value: 0.34867
name: chr-F
- type: bleu
value: 49.3
name: BLEU
- type: chrf
value: 0.71112
name: chr-F
- type: bleu
value: 40.3
name: BLEU
- type: chrf
value: 0.64856
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.58559
name: chr-F
- type: bleu
value: 32.1
name: BLEU
- type: chrf
value: 0.58922
name: chr-F
- type: bleu
value: 12.8
name: BLEU
- type: chrf
value: 0.40779
name: chr-F
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.57016
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.49666
name: chr-F
- type: bleu
value: 23.2
name: BLEU
- type: chrf
value: 0.54015
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.52923
name: chr-F
- type: bleu
value: 17.2
name: BLEU
- type: chrf
value: 0.49285
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.55944
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.53282
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: generaltest2022
type: generaltest2022
args: deu-fra
metrics:
- type: bleu
value: 37.4
name: BLEU
- type: chrf
value: 0.60634
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: deu-fra
metrics:
- type: bleu
value: 38.5
name: BLEU
- type: chrf
value: 0.62595
name: chr-F
- type: bleu
value: 51.4
name: BLEU
- type: chrf
value: 0.7163
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: deu-fra
metrics:
- type: bleu
value: 37.3
name: BLEU
- type: chrf
value: 0.62733
name: chr-F
- type: bleu
value: 50.8
name: BLEU
- type: chrf
value: 0.7185
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: deu-fra
metrics:
- type: bleu
value: 33.8
name: BLEU
- type: chrf
value: 0.59089
name: chr-F
- type: bleu
value: 54.1
name: BLEU
- type: chrf
value: 0.73129
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: deu-fra
metrics:
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.57155
name: chr-F
- type: bleu
value: 41.9
name: BLEU
- type: chrf
value: 0.65461
name: chr-F
- task:
type: translation
name: Translation eng-fra
dataset:
name: newsdiscusstest2015
type: newsdiscusstest2015
args: eng-fra
metrics:
- type: bleu
value: 38.5
name: BLEU
- type: chrf
value: 0.6366
name: chr-F
- task:
type: translation
name: Translation deu-cat
dataset:
name: ntrex128
type: ntrex128
args: deu-cat
metrics:
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.55033
name: chr-F
- type: bleu
value: 28.5
name: BLEU
- type: chrf
value: 0.55854
name: chr-F
- type: bleu
value: 27.8
name: BLEU
- type: chrf
value: 0.55034
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.55733
name: chr-F
- type: bleu
value: 26.0
name: BLEU
- type: chrf
value: 0.54208
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.52839
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.56966
name: chr-F
- type: bleu
value: 36.3
name: BLEU
- type: chrf
value: 0.61431
name: chr-F
- type: bleu
value: 35.5
name: BLEU
- type: chrf
value: 0.61695
name: chr-F
- type: bleu
value: 37.2
name: BLEU
- type: chrf
value: 0.6239
name: chr-F
- type: bleu
value: 36.1
name: BLEU
- type: chrf
value: 0.62209
name: chr-F
- type: bleu
value: 33.5
name: BLEU
- type: chrf
value: 0.59859
name: chr-F
- type: bleu
value: 33.4
name: BLEU
- type: chrf
value: 0.58128
name: chr-F
- type: bleu
value: 40.3
name: BLEU
- type: chrf
value: 0.64099
name: chr-F
- type: bleu
value: 28.1
name: BLEU
- type: chrf
value: 0.55093
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.55325
name: chr-F
- type: bleu
value: 27.4
name: BLEU
- type: chrf
value: 0.56188
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.54001
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.51853
name: chr-F
- type: bleu
value: 31.0
name: BLEU
- type: chrf
value: 0.57116
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.57962
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.5691
name: chr-F
- type: bleu
value: 30.3
name: BLEU
- type: chrf
value: 0.57389
name: chr-F
- type: bleu
value: 30.6
name: BLEU
- type: chrf
value: 0.58788
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.54276
name: chr-F
- type: bleu
value: 34.2
name: BLEU
- type: chrf
value: 0.59565
name: chr-F
- type: bleu
value: 34.0
name: BLEU
- type: chrf
value: 0.60605
name: chr-F
- type: bleu
value: 29.6
name: BLEU
- type: chrf
value: 0.57501
name: chr-F
- type: bleu
value: 34.4
name: BLEU
- type: chrf
value: 0.613
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.57868
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.5673
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.54222
name: chr-F
- task:
type: translation
name: Translation deu-cat
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-cat
metrics:
- type: bleu
value: 44.3
name: BLEU
- type: chrf
value: 0.63465
name: chr-F
- type: bleu
value: 50.7
name: BLEU
- type: chrf
value: 0.68258
name: chr-F
- type: bleu
value: 47.4
name: BLEU
- type: chrf
value: 0.68502
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.38047
name: chr-F
- type: bleu
value: 43.1
name: BLEU
- type: chrf
value: 0.63684
name: chr-F
- type: bleu
value: 42.6
name: BLEU
- type: chrf
value: 0.64207
name: chr-F
- type: bleu
value: 49.4
name: BLEU
- type: chrf
value: 0.68333
name: chr-F
- type: bleu
value: 49.1
name: BLEU
- type: chrf
value: 0.67724
name: chr-F
- type: bleu
value: 51.6
name: BLEU
- type: chrf
value: 0.68777
name: chr-F
- type: bleu
value: 45.2
name: BLEU
- type: chrf
value: 0.6453
name: chr-F
- type: bleu
value: 53.3
name: BLEU
- type: chrf
value: 0.72115
name: chr-F
- type: bleu
value: 24.2
name: BLEU
- type: chrf
value: 0.43857
name: chr-F
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.50848
name: chr-F
- type: bleu
value: 20.0
name: BLEU
- type: chrf
value: 0.4571
name: chr-F
- type: bleu
value: 53.4
name: BLEU
- type: chrf
value: 0.72159
name: chr-F
- type: bleu
value: 47.1
name: BLEU
- type: chrf
value: 0.67835
name: chr-F
- type: bleu
value: 55.8
name: BLEU
- type: chrf
value: 0.72875
name: chr-F
- type: bleu
value: 44.6
name: BLEU
- type: chrf
value: 0.65547
name: chr-F
- type: bleu
value: 39.9
name: BLEU
- type: chrf
value: 0.6165
name: chr-F
- type: bleu
value: 53.5
name: BLEU
- type: chrf
value: 0.72739
name: chr-F
- type: bleu
value: 52.0
name: BLEU
- type: chrf
value: 0.70655
name: chr-F
- type: bleu
value: 43.7
name: BLEU
- type: chrf
value: 0.65399
name: chr-F
- type: bleu
value: 54.8
name: BLEU
- type: chrf
value: 0.72083
name: chr-F
- type: bleu
value: 49.7
name: BLEU
- type: chrf
value: 0.67768
name: chr-F
- type: bleu
value: 52.0
name: BLEU
- type: chrf
value: 0.71178
name: chr-F
- type: bleu
value: 60.4
name: BLEU
- type: chrf
value: 0.75691
name: chr-F
- type: bleu
value: 57.6
name: BLEU
- type: chrf
value: 0.74818
name: chr-F
- type: bleu
value: 58.7
name: BLEU
- type: chrf
value: 0.76899
name: chr-F
- type: bleu
value: 51.0
name: BLEU
- type: chrf
value: 0.71775
name: chr-F
- type: bleu
value: 47.8
name: BLEU
- type: chrf
value: 0.69517
name: chr-F
- type: bleu
value: 64.9
name: BLEU
- type: chrf
value: 0.79442
name: chr-F
- type: bleu
value: 66.3
name: BLEU
- type: chrf
value: 0.81845
name: chr-F
- type: bleu
value: 57.4
name: BLEU
- type: chrf
value: 0.73277
name: chr-F
- type: bleu
value: 61.5
name: BLEU
- type: chrf
value: 0.76118
name: chr-F
- type: bleu
value: 59.5
name: BLEU
- type: chrf
value: 0.76742
name: chr-F
- type: bleu
value: 23.4
name: BLEU
- type: chrf
value: 0.43064
name: chr-F
- type: bleu
value: 27.1
name: BLEU
- type: chrf
value: 0.50795
name: chr-F
- type: bleu
value: 60.7
name: BLEU
- type: chrf
value: 0.76951
name: chr-F
- type: bleu
value: 45.9
name: BLEU
- type: chrf
value: 0.67782
name: chr-F
- type: bleu
value: 49.6
name: BLEU
- type: chrf
value: 0.67346
name: chr-F
- task:
type: translation
name: Translation eng-fra
dataset:
name: tico19-test
type: tico19-test
args: eng-fra
metrics:
- type: bleu
value: 40.1
name: BLEU
- type: chrf
value: 0.62989
name: chr-F
- type: bleu
value: 50.0
name: BLEU
- type: chrf
value: 0.72708
name: chr-F
- type: bleu
value: 52.0
name: BLEU
- type: chrf
value: 0.73154
name: chr-F
- type: bleu
value: 34.1
name: BLEU
- type: chrf
value: 0.58383
name: chr-F
- type: bleu
value: 37.0
name: BLEU
- type: chrf
value: 0.59581
name: chr-F
- type: bleu
value: 34.4
name: BLEU
- type: chrf
value: 0.59798
name: chr-F
- type: bleu
value: 45.4
name: BLEU
- type: chrf
value: 0.68332
name: chr-F
- type: bleu
value: 35.5
name: BLEU
- type: chrf
value: 0.60469
name: chr-F
- type: bleu
value: 42.8
name: BLEU
- type: chrf
value: 0.67898
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2008
type: wmt-2008-news
args: deu-fra
metrics:
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.54926
name: chr-F
- type: bleu
value: 25.5
name: BLEU
- type: chrf
value: 0.53902
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.55358
name: chr-F
- type: bleu
value: 29.5
name: BLEU
- type: chrf
value: 0.56491
name: chr-F
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.58764
name: chr-F
- type: bleu
value: 32.4
name: BLEU
- type: chrf
value: 0.58848
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2009
type: wmt-2009-news
args: deu-fra
metrics:
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.5387
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.54509
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.53769
name: chr-F
- type: bleu
value: 29.3
name: BLEU
- type: chrf
value: 0.57566
name: chr-F
- type: bleu
value: 31.4
name: BLEU
- type: chrf
value: 0.60372
name: chr-F
- type: bleu
value: 30.0
name: BLEU
- type: chrf
value: 0.57913
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.59749
name: chr-F
- type: bleu
value: 32.1
name: BLEU
- type: chrf
value: 0.58921
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.59195
name: chr-F
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.61007
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2010
type: wmt-2010-news
args: deu-fra
metrics:
- type: bleu
value: 29.5
name: BLEU
- type: chrf
value: 0.57888
name: chr-F
- type: bleu
value: 32.7
name: BLEU
- type: chrf
value: 0.59408
name: chr-F
- type: bleu
value: 32.4
name: BLEU
- type: chrf
value: 0.59588
name: chr-F
- type: bleu
value: 36.6
name: BLEU
- type: chrf
value: 0.61978
name: chr-F
- type: bleu
value: 37.7
name: BLEU
- type: chrf
value: 0.62513
name: chr-F
- type: bleu
value: 36.1
name: BLEU
- type: chrf
value: 0.62193
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2011
type: wmt-2011-news
args: deu-fra
metrics:
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.55704
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.56696
name: chr-F
- type: bleu
value: 34.3
name: BLEU
- type: chrf
value: 0.61071
name: chr-F
- type: bleu
value: 38.7
name: BLEU
- type: chrf
value: 0.62126
name: chr-F
- type: bleu
value: 40.0
name: BLEU
- type: chrf
value: 0.63139
name: chr-F
- type: bleu
value: 35.2
name: BLEU
- type: chrf
value: 0.61258
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2012
type: wmt-2012-news
args: deu-fra
metrics:
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.56034
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.57336
name: chr-F
- type: bleu
value: 31.9
name: BLEU
- type: chrf
value: 0.59264
name: chr-F
- type: bleu
value: 39.1
name: BLEU
- type: chrf
value: 0.62568
name: chr-F
- type: bleu
value: 39.5
name: BLEU
- type: chrf
value: 0.62725
name: chr-F
- type: bleu
value: 34.2
name: BLEU
- type: chrf
value: 0.61177
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2013
type: wmt-2013-news
args: deu-fra
metrics:
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.56475
name: chr-F
- type: bleu
value: 31.9
name: BLEU
- type: chrf
value: 0.57187
name: chr-F
- type: bleu
value: 33.3
name: BLEU
- type: chrf
value: 0.58938
name: chr-F
- type: bleu
value: 35.2
name: BLEU
- type: chrf
value: 0.59817
name: chr-F
- type: bleu
value: 35.1
name: BLEU
- type: chrf
value: 0.59482
name: chr-F
- type: bleu
value: 33.9
name: BLEU
- type: chrf
value: 0.59825
name: chr-F
- task:
type: translation
name: Translation eng-fra
dataset:
name: newstest2014
type: wmt-2014-news
args: eng-fra
metrics:
- type: bleu
value: 40.2
name: BLEU
- type: chrf
value: 0.65438
name: chr-F
- task:
type: translation
name: Translation eng-ron
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-ron
metrics:
- type: bleu
value: 32.2
name: BLEU
- type: chrf
value: 0.59473
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2019
type: wmt-2019-news
args: deu-fra
metrics:
- type: bleu
value: 35.9
name: BLEU
- type: chrf
value: 0.62831
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2020
type: wmt-2020-news
args: deu-fra
metrics:
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.60408
name: chr-F
- task:
type: translation
name: Translation deu-fra
dataset:
name: newstest2021
type: wmt-2021-news
args: deu-fra
metrics:
- type: bleu
value: 31.3
name: BLEU
- type: chrf
value: 0.58913
name: chr-F
---
# opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa) to Italic languages (itc).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): deu eng fra por spa
- Target Language(s): acf arg ast cat cbk cos crs egl ext fra frm fro frp fur gcf glg hat ita kea lad lat lij lld lmo lou mfe mol mwl nap oci osp pap pcd pms por roh ron rup scn spa srd vec wln
- Valid Target Language Labels: >>acf<< >>aoa<< >>arg<< >>ast<< >>cat<< >>cbk<< >>cbk_Latn<< >>ccd<< >>cks<< >>cos<< >>cri<< >>crs<< >>dlm<< >>drc<< >>egl<< >>ext<< >>fab<< >>fax<< >>fra<< >>frc<< >>frm<< >>frm_Latn<< >>fro<< >>fro_Latn<< >>frp<< >>fur<< >>gcf<< >>gcf_Latn<< >>gcr<< >>glg<< >>hat<< >>idb<< >>ist<< >>ita<< >>itk<< >>kea<< >>kmv<< >>lad<< >>lad_Latn<< >>lat<< >>lat_Latn<< >>lij<< >>lld<< >>lld_Latn<< >>lmo<< >>lou<< >>lou_Latn<< >>mcm<< >>mfe<< >>mol<< >>mwl<< >>mxi<< >>mzs<< >>nap<< >>nrf<< >>oci<< >>osc<< >>osp<< >>osp_Latn<< >>pap<< >>pcd<< >>pln<< >>pms<< >>por<< >>pov<< >>pre<< >>pro<< >>rcf<< >>rgn<< >>roh<< >>ron<< >>ruo<< >>rup<< >>ruq<< >>scf<< >>scn<< >>spa<< >>spq<< >>spx<< >>srd<< >>tmg<< >>tvy<< >>vec<< >>vkp<< >>wln<< >>xfa<< >>xum<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-itc/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>acf<<`.
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    ">>acf<< Replace this with text in an accepted source language.",
    ">>wln<< This is the second sentence."
]

model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc")
print(pipe(">>acf<< Replace this with text in an accepted source language."))
```
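Because the target language is selected per sentence via the `>>id<<` prefix, it can be convenient to wrap token handling in a small helper. The sketch below is illustrative only: the helper name and the trimmed label set are assumptions, not part of this model card (the full list of valid labels is given under Model Details above).

```python
# Minimal helper for preparing input to a multi-target OPUS-MT model.
# VALID_TARGETS is a small illustrative subset of the valid target
# language labels listed above; extend it as needed.
VALID_TARGETS = {"acf", "cat", "fra", "ita", "por", "ron", "spa", "wln"}

def with_target_token(text: str, target: str) -> str:
    """Prepend the sentence-initial >>id<< token required by the model."""
    if target not in VALID_TARGETS:
        raise ValueError(f"unknown target language label: {target}")
    return f">>{target}<< {text}"

# Translate the same sentence into several target languages in one batch.
batch = [with_target_token("Good morning.", t) for t in ("fra", "ita")]
print(batch)
```

The resulting strings can be passed directly to the tokenizer or pipeline calls shown above.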
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-itc/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
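The tables below report chr-F alongside BLEU. As a rough intuition for chr-F, here is a simplified pure-Python sketch: a macro-averaged character n-gram F-beta score (beta=2). This is an approximation for illustration only; the official scores linked above come from standard scoring tooling, which also handles word n-grams, whitespace, and smoothing details differently.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Count character n-grams, ignoring spaces.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chr-F: macro-averaged character n-gram F-beta score."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue  # n-gram order longer than the strings
        overlap = sum((h & r).values())
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

print(simple_chrf("Le chat est sur le tapis.", "Le chat est sur le tapis."))
```

Identical hypothesis and reference score 1.0; partial character overlap yields a value between 0 and 1, matching the scale used in the tables below.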
# opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from German, English, French, Portuguese, and Spanish (deu+eng+fra+por+spa) to Italic languages (itc).
This model is part of the [OPUS-MT project](https://github.comm/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): deu eng fra por spa
- Target Language(s): acf arg ast cat cbk cos crs egl ext fra frm fro frp fur gcf glg hat ita kea lad lat lij lld lmo lou mfe mol mwl nap oci osp pap pcd pms por roh ron rup scn spa srd vec wln
- Valid Target Language Labels: >>acf<< >>aoa<< >>arg<< >>ast<< >>cat<< >>cbk<< >>cbk_Latn<< >>ccd<< >>cks<< >>cos<< >>cri<< >>crs<< >>dlm<< >>drc<< >>egl<< >>ext<< >>fab<< >>fax<< >>fra<< >>frc<< >>frm<< >>frm_Latn<< >>fro<< >>fro_Latn<< >>frp<< >>fur<< >>gcf<< >>gcf_Latn<< >>gcr<< >>glg<< >>hat<< >>idb<< >>ist<< >>ita<< >>itk<< >>kea<< >>kmv<< >>lad<< >>lad_Latn<< >>lat<< >>lat_Latn<< >>lij<< >>lld<< >>lld_Latn<< >>lmo<< >>lou<< >>lou_Latn<< >>mcm<< >>mfe<< >>mol<< >>mwl<< >>mxi<< >>mzs<< >>nap<< >>nrf<< >>oci<< >>osc<< >>osp<< >>osp_Latn<< >>pap<< >>pcd<< >>pln<< >>pms<< >>por<< >>pov<< >>pre<< >>pro<< >>rcf<< >>rgn<< >>roh<< >>ron<< >>ruo<< >>rup<< >>ruq<< >>scf<< >>scn<< >>spa<< >>spq<< >>spx<< >>srd<< >>tmg<< >>tvy<< >>vec<< >>vkp<< >>wln<< >>xfa<< >>xum<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-itc/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>acf<<`
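As an illustration, the token can be prepended with a small helper (written for this card, not part of the library; the `VALID_TARGETS` set below is a hand-picked subset of the label list above):

```python
# Hand-picked subset of the valid target-language labels listed above.
VALID_TARGETS = {"acf", "cat", "fra", "glg", "ita", "lad", "oci", "por", "ron", "spa", "wln"}

def with_target(lang_id: str, text: str) -> str:
    """Prepend the sentence-initial target-language token expected by the model."""
    if lang_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {lang_id}")
    return f">>{lang_id}<< {text}"
```

For example, `with_target("fra", "Good morning.")` produces the string `">>fra<< Good morning."`, ready to be passed to the tokenizer.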
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    ">>acf<< Replace this with text in an accepted source language.",
    ">>wln<< This is the second sentence."
]

model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc")
print(pipe(">>acf<< Replace this with text in an accepted source language."))
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-itc/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-itc/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
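The chr-F column below is the character n-gram F-score. As a rough illustration of how the metric works — a simplified sketch written for this card, not the sacreBLEU reference implementation used for the official scores — consider:

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF operates on character n-grams with whitespace removed.
    text = "".join(text.split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average n-gram precision and recall over n = 1..max_n, then combine
    # with an F-beta score (beta = 2 weights recall twice as much).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 1.0, fully disjoint strings score 0.0, and partial overlaps fall in between; the published scores additionally average over whole test sets and apply sacreBLEU's exact tokenization and smoothing choices.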
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| deu-cat | tatoeba-test-v2021-08-07 | 0.63465 | 44.3 | 723 | 5539 |
| deu-fra | tatoeba-test-v2021-08-07 | 0.68258 | 50.7 | 12418 | 102721 |
| deu-ita | tatoeba-test-v2021-08-07 | 0.68502 | 47.4 | 10094 | 75504 |
| deu-lad | tatoeba-test-v2021-08-07 | 0.38047 | 22.0 | 220 | 1130 |
| deu-lat | tatoeba-test-v2021-08-07 | 0.42567 | 16.2 | 2016 | 10538 |
| deu-por | tatoeba-test-v2021-08-07 | 0.63684 | 43.1 | 10000 | 81482 |
| deu-ron | tatoeba-test-v2021-08-07 | 0.64207 | 42.6 | 1141 | 7432 |
| deu-spa | tatoeba-test-v2021-08-07 | 0.68333 | 49.4 | 10521 | 82570 |
| eng-cat | tatoeba-test-v2021-08-07 | 0.67724 | 49.1 | 1631 | 12344 |
| eng-fra | tatoeba-test-v2021-08-07 | 0.68777 | 51.6 | 12681 | 106378 |
| eng-glg | tatoeba-test-v2021-08-07 | 0.64530 | 45.2 | 1015 | 7881 |
| eng-ita | tatoeba-test-v2021-08-07 | 0.72115 | 53.3 | 17320 | 116336 |
| eng-lad | tatoeba-test-v2021-08-07 | 0.43857 | 24.2 | 768 | 4105 |
| eng-lad_Latn | tatoeba-test-v2021-08-07 | 0.50848 | 27.6 | 672 | 3580 |
| eng-lat | tatoeba-test-v2021-08-07 | 0.45710 | 20.0 | 10298 | 76510 |
| eng-por | tatoeba-test-v2021-08-07 | 0.72159 | 53.4 | 13222 | 105265 |
| eng-ron | tatoeba-test-v2021-08-07 | 0.67835 | 47.1 | 5508 | 40367 |
| eng-spa | tatoeba-test-v2021-08-07 | 0.72875 | 55.8 | 16583 | 134710 |
| fra-cat | tatoeba-test-v2021-08-07 | 0.65547 | 44.6 | 700 | 5342 |
| fra-fra | tatoeba-test-v2021-08-07 | 0.61650 | 39.9 | 1000 | 7757 |
| fra-ita | tatoeba-test-v2021-08-07 | 0.72739 | 53.5 | 10091 | 62060 |
| fra-por | tatoeba-test-v2021-08-07 | 0.70655 | 52.0 | 10518 | 77650 |
| fra-ron | tatoeba-test-v2021-08-07 | 0.65399 | 43.7 | 1925 | 12252 |
| fra-spa | tatoeba-test-v2021-08-07 | 0.72083 | 54.8 | 10294 | 78406 |
| por-cat | tatoeba-test-v2021-08-07 | 0.71178 | 52.0 | 747 | 6149 |
| por-fra | tatoeba-test-v2021-08-07 | 0.75691 | 60.4 | 10518 | 80459 |
| por-glg | tatoeba-test-v2021-08-07 | 0.74818 | 57.6 | 433 | 3016 |
| por-ita | tatoeba-test-v2021-08-07 | 0.76899 | 58.7 | 3066 | 24897 |
| por-por | tatoeba-test-v2021-08-07 | 0.71775 | 51.0 | 2500 | 19220 |
| por-ron | tatoeba-test-v2021-08-07 | 0.69517 | 47.8 | 681 | 4521 |
| por-spa | tatoeba-test-v2021-08-07 | 0.79442 | 64.9 | 10947 | 87335 |
| spa-cat | tatoeba-test-v2021-08-07 | 0.81845 | 66.3 | 1534 | 12343 |
| spa-fra | tatoeba-test-v2021-08-07 | 0.73277 | 57.4 | 10294 | 83501 |
| spa-glg | tatoeba-test-v2021-08-07 | 0.76118 | 61.5 | 2121 | 16581 |
| spa-ita | tatoeba-test-v2021-08-07 | 0.76742 | 59.5 | 5000 | 34515 |
| spa-lad | tatoeba-test-v2021-08-07 | 0.43064 | 23.4 | 276 | 1464 |
| spa-lad_Latn | tatoeba-test-v2021-08-07 | 0.50795 | 27.1 | 239 | 1254 |
| spa-lat | tatoeba-test-v2021-08-07 | 0.44044 | 18.8 | 3129 | 27685 |
| spa-por | tatoeba-test-v2021-08-07 | 0.76951 | 60.7 | 10947 | 87610 |
| spa-ron | tatoeba-test-v2021-08-07 | 0.67782 | 45.9 | 1959 | 12503 |
| spa-spa | tatoeba-test-v2021-08-07 | 0.67346 | 49.6 | 2500 | 21469 |
| deu-ast | flores101-devtest | 0.53230 | 21.5 | 1012 | 24572 |
| deu-cat | flores101-devtest | 0.58466 | 31.6 | 1012 | 27304 |
| deu-fra | flores101-devtest | 0.62370 | 36.5 | 1012 | 28343 |
| deu-glg | flores101-devtest | 0.55693 | 28.0 | 1012 | 26582 |
| deu-oci | flores101-devtest | 0.52253 | 22.3 | 1012 | 27305 |
| deu-por | flores101-devtest | 0.60688 | 34.8 | 1012 | 26519 |
| deu-ron | flores101-devtest | 0.57333 | 30.3 | 1012 | 26799 |
| eng-cat | flores101-devtest | 0.66607 | 42.5 | 1012 | 27304 |
| eng-fra | flores101-devtest | 0.70492 | 48.8 | 1012 | 28343 |
| eng-por | flores101-devtest | 0.71112 | 49.3 | 1012 | 26519 |
| eng-ron | flores101-devtest | 0.64856 | 40.3 | 1012 | 26799 |
| fra-oci | flores101-devtest | 0.58559 | 29.2 | 1012 | 27305 |
| fra-ron | flores101-devtest | 0.58922 | 32.1 | 1012 | 26799 |
| por-kea | flores101-devtest | 0.40779 | 12.8 | 1012 | 25540 |
| por-oci | flores101-devtest | 0.57016 | 27.5 | 1012 | 27305 |
| spa-ast | flores101-devtest | 0.49666 | 16.3 | 1012 | 24572 |
| spa-cat | flores101-devtest | 0.54015 | 23.2 | 1012 | 27304 |
| spa-glg | flores101-devtest | 0.52923 | 22.1 | 1012 | 26582 |
| spa-oci | flores101-devtest | 0.49285 | 17.2 | 1012 | 27305 |
| spa-por | flores101-devtest | 0.55944 | 25.7 | 1012 | 26519 |
| spa-ron | flores101-devtest | 0.53282 | 23.3 | 1012 | 26799 |
| deu-ast | flores200-devtest | 0.53782 | 22.1 | 1012 | 24572 |
| deu-cat | flores200-devtest | 0.58846 | 32.2 | 1012 | 27304 |
| deu-fra | flores200-devtest | 0.62803 | 37.2 | 1012 | 28343 |
| deu-fur | flores200-devtest | 0.46372 | 18.7 | 1012 | 29171 |
| deu-glg | flores200-devtest | 0.56229 | 28.7 | 1012 | 26582 |
| deu-hat | flores200-devtest | 0.46752 | 15.7 | 1012 | 25833 |
| deu-ita | flores200-devtest | 0.55344 | 25.8 | 1012 | 27306 |
| deu-lij | flores200-devtest | 0.40732 | 11.8 | 1012 | 28625 |
| deu-oci | flores200-devtest | 0.52749 | 23.1 | 1012 | 27305 |
| deu-pap | flores200-devtest | 0.49721 | 22.4 | 1012 | 28016 |
| deu-por | flores200-devtest | 0.60818 | 34.7 | 1012 | 26519 |
| deu-ron | flores200-devtest | 0.57873 | 31.1 | 1012 | 26799 |
| deu-spa | flores200-devtest | 0.52442 | 24.4 | 1012 | 29199 |
| deu-srd | flores200-devtest | 0.45629 | 16.1 | 1012 | 28322 |
| eng-ast | flores200-devtest | 0.59255 | 27.8 | 1012 | 24572 |
| eng-cat | flores200-devtest | 0.66809 | 42.8 | 1012 | 27304 |
| eng-fra | flores200-devtest | 0.71001 | 49.5 | 1012 | 28343 |
| eng-fur | flores200-devtest | 0.49164 | 23.0 | 1012 | 29171 |
| eng-glg | flores200-devtest | 0.62349 | 36.1 | 1012 | 26582 |
| eng-hat | flores200-devtest | 0.51720 | 21.3 | 1012 | 25833 |
| eng-ita | flores200-devtest | 0.58898 | 29.7 | 1012 | 27306 |
| eng-lij | flores200-devtest | 0.43644 | 14.8 | 1012 | 28625 |
| eng-oci | flores200-devtest | 0.63245 | 35.2 | 1012 | 27305 |
| eng-pap | flores200-devtest | 0.56775 | 30.4 | 1012 | 28016 |
| eng-por | flores200-devtest | 0.71438 | 50.0 | 1012 | 26519 |
| eng-ron | flores200-devtest | 0.65373 | 41.2 | 1012 | 26799 |
| eng-spa | flores200-devtest | 0.55784 | 27.6 | 1012 | 29199 |
| eng-srd | flores200-devtest | 0.49876 | 21.0 | 1012 | 28322 |
| fra-ast | flores200-devtest | 0.53904 | 22.0 | 1012 | 24572 |
| fra-cat | flores200-devtest | 0.60549 | 34.5 | 1012 | 27304 |
| fra-fur | flores200-devtest | 0.49119 | 21.4 | 1012 | 29171 |
| fra-glg | flores200-devtest | 0.57998 | 31.3 | 1012 | 26582 |
| fra-hat | flores200-devtest | 0.52018 | 20.7 | 1012 | 25833 |
| fra-ita | flores200-devtest | 0.56470 | 27.0 | 1012 | 27306 |
| fra-lij | flores200-devtest | 0.43180 | 13.6 | 1012 | 28625 |
| fra-oci | flores200-devtest | 0.58268 | 29.2 | 1012 | 27305 |
| fra-pap | flores200-devtest | 0.51029 | 23.6 | 1012 | 28016 |
| fra-por | flores200-devtest | 0.62540 | 37.5 | 1012 | 26519 |
| fra-ron | flores200-devtest | 0.59255 | 32.7 | 1012 | 26799 |
| fra-spa | flores200-devtest | 0.53001 | 24.4 | 1012 | 29199 |
| fra-srd | flores200-devtest | 0.47645 | 17.9 | 1012 | 28322 |
| por-ast | flores200-devtest | 0.55369 | 23.9 | 1012 | 24572 |
| por-cat | flores200-devtest | 0.61981 | 36.4 | 1012 | 27304 |
| por-fra | flores200-devtest | 0.64654 | 40.4 | 1012 | 28343 |
| por-fur | flores200-devtest | 0.50078 | 22.1 | 1012 | 29171 |
| por-glg | flores200-devtest | 0.58336 | 31.1 | 1012 | 26582 |
| por-hat | flores200-devtest | 0.48834 | 18.0 | 1012 | 25833 |
| por-ita | flores200-devtest | 0.56077 | 26.7 | 1012 | 27306 |
| por-kea | flores200-devtest | 0.42451 | 13.6 | 1012 | 25540 |
| por-lij | flores200-devtest | 0.43715 | 13.4 | 1012 | 28625 |
| por-oci | flores200-devtest | 0.57143 | 28.1 | 1012 | 27305 |
| por-pap | flores200-devtest | 0.52192 | 25.0 | 1012 | 28016 |
| por-ron | flores200-devtest | 0.59962 | 34.2 | 1012 | 26799 |
| por-spa | flores200-devtest | 0.53772 | 25.6 | 1012 | 29199 |
| por-srd | flores200-devtest | 0.48882 | 18.8 | 1012 | 28322 |
| spa-ast | flores200-devtest | 0.49512 | 16.3 | 1012 | 24572 |
| spa-cat | flores200-devtest | 0.53968 | 23.1 | 1012 | 27304 |
| spa-fra | flores200-devtest | 0.57461 | 27.9 | 1012 | 28343 |
| spa-fur | flores200-devtest | 0.45785 | 16.1 | 1012 | 29171 |
| spa-glg | flores200-devtest | 0.52933 | 22.2 | 1012 | 26582 |
| spa-hat | flores200-devtest | 0.44627 | 13.0 | 1012 | 25833 |
| spa-ita | flores200-devtest | 0.53063 | 22.4 | 1012 | 27306 |
| spa-oci | flores200-devtest | 0.49293 | 17.4 | 1012 | 27305 |
| spa-pap | flores200-devtest | 0.46595 | 17.7 | 1012 | 28016 |
| spa-por | flores200-devtest | 0.56138 | 25.9 | 1012 | 26519 |
| spa-ron | flores200-devtest | 0.53609 | 23.8 | 1012 | 26799 |
| spa-srd | flores200-devtest | 0.44898 | 13.3 | 1012 | 28322 |
| deu-fra | generaltest2022 | 0.60634 | 37.4 | 1984 | 38276 |
| deu-fra | multi30k_test_2016_flickr | 0.62595 | 38.5 | 1000 | 13505 |
| eng-fra | multi30k_test_2016_flickr | 0.71630 | 51.4 | 1000 | 13505 |
| deu-fra | multi30k_test_2017_flickr | 0.62733 | 37.3 | 1000 | 12118 |
| eng-fra | multi30k_test_2017_flickr | 0.71850 | 50.8 | 1000 | 12118 |
| deu-fra | multi30k_test_2017_mscoco | 0.59089 | 33.8 | 461 | 5484 |
| eng-fra | multi30k_test_2017_mscoco | 0.73129 | 54.1 | 461 | 5484 |
| deu-fra | multi30k_test_2018_flickr | 0.57155 | 30.9 | 1071 | 15867 |
| eng-fra | multi30k_test_2018_flickr | 0.65461 | 41.9 | 1071 | 15867 |
| eng-fra | newsdiscusstest2015 | 0.63660 | 38.5 | 1500 | 27975 |
| deu-fra | newssyscomb2009 | 0.56035 | 27.6 | 502 | 12331 |
| deu-ita | newssyscomb2009 | 0.55722 | 25.1 | 502 | 11551 |
| deu-spa | newssyscomb2009 | 0.55595 | 28.5 | 502 | 12503 |
| eng-fra | newssyscomb2009 | 0.58465 | 29.5 | 502 | 12331 |
| eng-ita | newssyscomb2009 | 0.60792 | 31.3 | 502 | 11551 |
| eng-spa | newssyscomb2009 | 0.58219 | 31.0 | 502 | 12503 |
| fra-ita | newssyscomb2009 | 0.61352 | 31.9 | 502 | 11551 |
| fra-spa | newssyscomb2009 | 0.60430 | 34.3 | 502 | 12503 |
| spa-fra | newssyscomb2009 | 0.61491 | 34.6 | 502 | 12331 |
| spa-ita | newssyscomb2009 | 0.61861 | 33.7 | 502 | 11551 |
| deu-fra | newstest2008 | 0.54926 | 26.3 | 2051 | 52685 |
| deu-spa | newstest2008 | 0.53902 | 25.5 | 2051 | 52586 |
| eng-fra | newstest2008 | 0.55358 | 26.8 | 2051 | 52685 |
| eng-spa | newstest2008 | 0.56491 | 29.5 | 2051 | 52586 |
| fra-spa | newstest2008 | 0.58764 | 33.0 | 2051 | 52586 |
| spa-fra | newstest2008 | 0.58848 | 32.4 | 2051 | 52685 |
| deu-fra | newstest2009 | 0.53870 | 25.4 | 2525 | 69263 |
| deu-ita | newstest2009 | 0.54509 | 24.4 | 2525 | 63466 |
| deu-spa | newstest2009 | 0.53769 | 25.7 | 2525 | 68111 |
| eng-fra | newstest2009 | 0.57566 | 29.3 | 2525 | 69263 |
| eng-ita | newstest2009 | 0.60372 | 31.4 | 2525 | 63466 |
| eng-spa | newstest2009 | 0.57913 | 30.0 | 2525 | 68111 |
| fra-ita | newstest2009 | 0.59749 | 30.5 | 2525 | 63466 |
| fra-spa | newstest2009 | 0.58921 | 32.1 | 2525 | 68111 |
| spa-fra | newstest2009 | 0.59195 | 32.3 | 2525 | 69263 |
| spa-ita | newstest2009 | 0.61007 | 33.0 | 2525 | 63466 |
| deu-fra | newstest2010 | 0.57888 | 29.5 | 2489 | 66022 |
| deu-spa | newstest2010 | 0.59408 | 32.7 | 2489 | 65480 |
| eng-fra | newstest2010 | 0.59588 | 32.4 | 2489 | 66022 |
| eng-spa | newstest2010 | 0.61978 | 36.6 | 2489 | 65480 |
| fra-spa | newstest2010 | 0.62513 | 37.7 | 2489 | 65480 |
| spa-fra | newstest2010 | 0.62193 | 36.1 | 2489 | 66022 |
| deu-fra | newstest2011 | 0.55704 | 27.5 | 3003 | 80626 |
| deu-spa | newstest2011 | 0.56696 | 30.4 | 3003 | 79476 |
| eng-fra | newstest2011 | 0.61071 | 34.3 | 3003 | 80626 |
| eng-spa | newstest2011 | 0.62126 | 38.7 | 3003 | 79476 |
| fra-spa | newstest2011 | 0.63139 | 40.0 | 3003 | 79476 |
| spa-fra | newstest2011 | 0.61258 | 35.2 | 3003 | 80626 |
| deu-fra | newstest2012 | 0.56034 | 27.6 | 3003 | 78011 |
| deu-spa | newstest2012 | 0.57336 | 31.6 | 3003 | 79006 |
| eng-fra | newstest2012 | 0.59264 | 31.9 | 3003 | 78011 |
| eng-spa | newstest2012 | 0.62568 | 39.1 | 3003 | 79006 |
| fra-spa | newstest2012 | 0.62725 | 39.5 | 3003 | 79006 |
| spa-fra | newstest2012 | 0.61177 | 34.2 | 3003 | 78011 |
| deu-fra | newstest2013 | 0.56475 | 29.9 | 3000 | 70037 |
| deu-spa | newstest2013 | 0.57187 | 31.9 | 3000 | 70528 |
| eng-fra | newstest2013 | 0.58938 | 33.3 | 3000 | 70037 |
| eng-spa | newstest2013 | 0.59817 | 35.2 | 3000 | 70528 |
| fra-spa | newstest2013 | 0.59482 | 35.1 | 3000 | 70528 |
| spa-fra | newstest2013 | 0.59825 | 33.9 | 3000 | 70037 |
| eng-fra | newstest2014 | 0.65438 | 40.2 | 3003 | 77306 |
| eng-ron | newstest2016 | 0.59473 | 32.2 | 1999 | 48945 |
| deu-fra | newstest2019 | 0.62831 | 35.9 | 1701 | 42509 |
| deu-fra | newstest2020 | 0.60408 | 33.0 | 1619 | 36890 |
| deu-fra | newstest2021 | 0.58913 | 31.3 | 1000 | 23757 |
| deu-cat | ntrex128 | 0.55033 | 28.2 | 1997 | 53438 |
| deu-fra | ntrex128 | 0.55854 | 28.5 | 1997 | 53481 |
| deu-glg | ntrex128 | 0.55034 | 27.8 | 1997 | 50432 |
| deu-ita | ntrex128 | 0.55733 | 26.6 | 1997 | 50759 |
| deu-por | ntrex128 | 0.54208 | 26.0 | 1997 | 51631 |
| deu-ron | ntrex128 | 0.52839 | 26.6 | 1997 | 53498 |
| deu-spa | ntrex128 | 0.56966 | 30.8 | 1997 | 54107 |
| eng-cat | ntrex128 | 0.61431 | 36.3 | 1997 | 53438 |
| eng-fra | ntrex128 | 0.61695 | 35.5 | 1997 | 53481 |
| eng-glg | ntrex128 | 0.62390 | 37.2 | 1997 | 50432 |
| eng-ita | ntrex128 | 0.62209 | 36.1 | 1997 | 50759 |
| eng-por | ntrex128 | 0.59859 | 33.5 | 1997 | 51631 |
| eng-ron | ntrex128 | 0.58128 | 33.4 | 1997 | 53498 |
| eng-spa | ntrex128 | 0.64099 | 40.3 | 1997 | 54107 |
| fra-cat | ntrex128 | 0.55093 | 28.1 | 1997 | 53438 |
| fra-glg | ntrex128 | 0.55325 | 28.0 | 1997 | 50432 |
| fra-ita | ntrex128 | 0.56188 | 27.4 | 1997 | 50759 |
| fra-por | ntrex128 | 0.54001 | 25.6 | 1997 | 51631 |
| fra-ron | ntrex128 | 0.51853 | 24.8 | 1997 | 53498 |
| fra-spa | ntrex128 | 0.57116 | 31.0 | 1997 | 54107 |
| por-cat | ntrex128 | 0.57962 | 31.6 | 1997 | 53438 |
| por-fra | ntrex128 | 0.56910 | 28.9 | 1997 | 53481 |
| por-glg | ntrex128 | 0.57389 | 30.3 | 1997 | 50432 |
| por-ita | ntrex128 | 0.58788 | 30.6 | 1997 | 50759 |
| por-ron | ntrex128 | 0.54276 | 28.0 | 1997 | 53498 |
| por-spa | ntrex128 | 0.59565 | 34.2 | 1997 | 54107 |
| spa-cat | ntrex128 | 0.60605 | 34.0 | 1997 | 53438 |
| spa-fra | ntrex128 | 0.57501 | 29.6 | 1997 | 53481 |
| spa-glg | ntrex128 | 0.61300 | 34.4 | 1997 | 50432 |
| spa-ita | ntrex128 | 0.57868 | 28.9 | 1997 | 50759 |
| spa-por | ntrex128 | 0.56730 | 29.1 | 1997 | 51631 |
| spa-ron | ntrex128 | 0.54222 | 27.9 | 1997 | 53498 |
| eng-fra | tico19-test | 0.62989 | 40.1 | 2100 | 64661 |
| eng-por | tico19-test | 0.72708 | 50.0 | 2100 | 62729 |
| eng-spa | tico19-test | 0.73154 | 52.0 | 2100 | 66563 |
| fra-por | tico19-test | 0.58383 | 34.1 | 2100 | 62729 |
| fra-spa | tico19-test | 0.59581 | 37.0 | 2100 | 66563 |
| por-fra | tico19-test | 0.59798 | 34.4 | 2100 | 64661 |
| por-spa | tico19-test | 0.68332 | 45.4 | 2100 | 66563 |
| spa-fra | tico19-test | 0.60469 | 35.5 | 2100 | 64661 |
| spa-por | tico19-test | 0.67898 | 42.8 | 2100 | 62729 |
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 10:16:22 EEST 2024
* port machine: LM0-400-22516.local
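Since this is a multilingual model (one model, many target languages), inference requires a target-language token prefixed to each source sentence. A minimal sketch follows; the `Helsinki-NLP/...` repo id is an assumption inferred from the model name, and the `>>xxx<<` token convention is the one used by other OPUS-MT multilingual models:

```python
def with_target_token(text: str, tgt_lang: str) -> str:
    """Prefix a sentence with the OPUS-MT multilingual target-language token."""
    return f">>{tgt_lang}<< {text}"

# Build a source sentence requesting French output.
src = with_target_token("A wasteland, by any other name.", "fra")
print(src)  # >>fra<< A wasteland, by any other name.

# With transformers installed, translation would then look like this
# (repo id below is an assumption, not confirmed by the card):
# from transformers import pipeline
# translator = pipeline(
#     "translation",
#     model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc",
# )
# print(translator(src)[0]["translation_text"])
```

Omitting the token, or using a language code the model was not trained on, typically degrades output quality rather than raising an error.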
|
{"language": ["acf", "an", "ast", "ca", "cbk", "co", "crs", "de", "egl", "en", "es", "ext", "fr", "frm", "fro", "frp", "fur", "gcf", "gl", "ht", "it", "kea", "la", "lad", "lij", "lld", "lmo", "lou", "mfe", "mo", "mwl", "nap", "oc", "osp", "pap", "pcd", "pms", "pt", "rm", "ro", "rup", "sc", "scn", "vec", "wa"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-deu_eng_fra_por_spa-itc", "results": [{"task": {"type": "translation", "name": "Translation deu-ast"}, "dataset": {"name": "flores200-devtest", "type": "flores200-devtest", "args": "deu-ast"}, "metrics": [{"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53782, "name": "chr-F"}, {"type": "bleu", "value": 32.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58846, "name": "chr-F"}, {"type": "bleu", "value": 37.2, "name": "BLEU"}, {"type": "chrf", "value": 0.62803, "name": "chr-F"}, {"type": "bleu", "value": 18.7, "name": "BLEU"}, {"type": "chrf", "value": 0.46372, "name": "chr-F"}, {"type": "bleu", "value": 28.7, "name": "BLEU"}, {"type": "chrf", "value": 0.56229, "name": "chr-F"}, {"type": "bleu", "value": 15.7, "name": "BLEU"}, {"type": "chrf", "value": 0.46752, "name": "chr-F"}, {"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55344, "name": "chr-F"}, {"type": "bleu", "value": 11.8, "name": "BLEU"}, {"type": "chrf", "value": 0.40732, "name": "chr-F"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52749, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49721, "name": "chr-F"}, {"type": "bleu", "value": 34.7, "name": "BLEU"}, {"type": "chrf", "value": 0.60818, "name": "chr-F"}, {"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "chrf", "value": 0.57873, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52442, "name": "chr-F"}, 
{"type": "bleu", "value": 16.1, "name": "BLEU"}, {"type": "chrf", "value": 0.45629, "name": "chr-F"}, {"type": "bleu", "value": 27.8, "name": "BLEU"}, {"type": "chrf", "value": 0.59255, "name": "chr-F"}, {"type": "bleu", "value": 42.8, "name": "BLEU"}, {"type": "chrf", "value": 0.66809, "name": "chr-F"}, {"type": "bleu", "value": 49.5, "name": "BLEU"}, {"type": "chrf", "value": 0.71001, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.49164, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62349, "name": "chr-F"}, {"type": "bleu", "value": 21.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5172, "name": "chr-F"}, {"type": "bleu", "value": 29.7, "name": "BLEU"}, {"type": "chrf", "value": 0.58898, "name": "chr-F"}, {"type": "bleu", "value": 11.0, "name": "BLEU"}, {"type": "chrf", "value": 0.34963, "name": "chr-F"}, {"type": "bleu", "value": 14.8, "name": "BLEU"}, {"type": "chrf", "value": 0.43644, "name": "chr-F"}, {"type": "bleu", "value": 35.2, "name": "BLEU"}, {"type": "chrf", "value": 0.63245, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56775, "name": "chr-F"}, {"type": "bleu", "value": 50.0, "name": "BLEU"}, {"type": "chrf", "value": 0.71438, "name": "chr-F"}, {"type": "bleu", "value": 41.2, "name": "BLEU"}, {"type": "chrf", "value": 0.65373, "name": "chr-F"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.55784, "name": "chr-F"}, {"type": "bleu", "value": 21.0, "name": "BLEU"}, {"type": "chrf", "value": 0.49876, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53904, "name": "chr-F"}, {"type": "bleu", "value": 34.5, "name": "BLEU"}, {"type": "chrf", "value": 0.60549, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49119, "name": "chr-F"}, {"type": "bleu", "value": 31.3, "name": "BLEU"}, {"type": 
"chrf", "value": 0.57998, "name": "chr-F"}, {"type": "bleu", "value": 20.7, "name": "BLEU"}, {"type": "chrf", "value": 0.52018, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5647, "name": "chr-F"}, {"type": "bleu", "value": 11.2, "name": "BLEU"}, {"type": "chrf", "value": 0.38741, "name": "chr-F"}, {"type": "bleu", "value": 13.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4318, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58268, "name": "chr-F"}, {"type": "bleu", "value": 23.6, "name": "BLEU"}, {"type": "chrf", "value": 0.51029, "name": "chr-F"}, {"type": "bleu", "value": 37.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6254, "name": "chr-F"}, {"type": "bleu", "value": 32.7, "name": "BLEU"}, {"type": "chrf", "value": 0.59255, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53001, "name": "chr-F"}, {"type": "bleu", "value": 17.9, "name": "BLEU"}, {"type": "chrf", "value": 0.47645, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55369, "name": "chr-F"}, {"type": "bleu", "value": 36.4, "name": "BLEU"}, {"type": "chrf", "value": 0.61981, "name": "chr-F"}, {"type": "bleu", "value": 40.4, "name": "BLEU"}, {"type": "chrf", "value": 0.64654, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.50078, "name": "chr-F"}, {"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58336, "name": "chr-F"}, {"type": "bleu", "value": 18.0, "name": "BLEU"}, {"type": "chrf", "value": 0.48834, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.56077, "name": "chr-F"}, {"type": "bleu", "value": 13.6, "name": "BLEU"}, {"type": "chrf", "value": 0.42451, "name": "chr-F"}, {"type": "bleu", "value": 13.4, "name": "BLEU"}, {"type": "chrf", "value": 0.43715, "name": "chr-F"}, {"type": "bleu", 
"value": 28.1, "name": "BLEU"}, {"type": "chrf", "value": 0.57143, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52192, "name": "chr-F"}, {"type": "bleu", "value": 34.2, "name": "BLEU"}, {"type": "chrf", "value": 0.59962, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53772, "name": "chr-F"}, {"type": "bleu", "value": 18.8, "name": "BLEU"}, {"type": "chrf", "value": 0.48882, "name": "chr-F"}, {"type": "bleu", "value": 16.3, "name": "BLEU"}, {"type": "chrf", "value": 0.49512, "name": "chr-F"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53968, "name": "chr-F"}, {"type": "bleu", "value": 27.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57461, "name": "chr-F"}, {"type": "bleu", "value": 16.1, "name": "BLEU"}, {"type": "chrf", "value": 0.45785, "name": "chr-F"}, {"type": "bleu", "value": 22.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52933, "name": "chr-F"}, {"type": "bleu", "value": 13.0, "name": "BLEU"}, {"type": "chrf", "value": 0.44627, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53063, "name": "chr-F"}, {"type": "bleu", "value": 10.2, "name": "BLEU"}, {"type": "chrf", "value": 0.39784, "name": "chr-F"}, {"type": "bleu", "value": 17.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49293, "name": "chr-F"}, {"type": "bleu", "value": 17.7, "name": "BLEU"}, {"type": "chrf", "value": 0.46595, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56138, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53609, "name": "chr-F"}, {"type": "bleu", "value": 13.3, "name": "BLEU"}, {"type": "chrf", "value": 0.44898, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-ast"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "deu ast devtest"}, "metrics": [{"type": 
"bleu", "value": 21.5, "name": "BLEU"}, {"type": "chrf", "value": 0.5323, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58466, "name": "chr-F"}, {"type": "bleu", "value": 36.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6237, "name": "chr-F"}, {"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55693, "name": "chr-F"}, {"type": "bleu", "value": 22.3, "name": "BLEU"}, {"type": "chrf", "value": 0.52253, "name": "chr-F"}, {"type": "bleu", "value": 34.8, "name": "BLEU"}, {"type": "chrf", "value": 0.60688, "name": "chr-F"}, {"type": "bleu", "value": 30.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57333, "name": "chr-F"}, {"type": "bleu", "value": 42.5, "name": "BLEU"}, {"type": "chrf", "value": 0.66607, "name": "chr-F"}, {"type": "bleu", "value": 48.8, "name": "BLEU"}, {"type": "chrf", "value": 0.70492, "name": "chr-F"}, {"type": "bleu", "value": 10.7, "name": "BLEU"}, {"type": "chrf", "value": 0.34867, "name": "chr-F"}, {"type": "bleu", "value": 49.3, "name": "BLEU"}, {"type": "chrf", "value": 0.71112, "name": "chr-F"}, {"type": "bleu", "value": 40.3, "name": "BLEU"}, {"type": "chrf", "value": 0.64856, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58559, "name": "chr-F"}, {"type": "bleu", "value": 32.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58922, "name": "chr-F"}, {"type": "bleu", "value": 12.8, "name": "BLEU"}, {"type": "chrf", "value": 0.40779, "name": "chr-F"}, {"type": "bleu", "value": 27.5, "name": "BLEU"}, {"type": "chrf", "value": 0.57016, "name": "chr-F"}, {"type": "bleu", "value": 16.3, "name": "BLEU"}, {"type": "chrf", "value": 0.49666, "name": "chr-F"}, {"type": "bleu", "value": 23.2, "name": "BLEU"}, {"type": "chrf", "value": 0.54015, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52923, "name": "chr-F"}, {"type": "bleu", "value": 17.2, "name": "BLEU"}, {"type": "chrf", 
"value": 0.49285, "name": "chr-F"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.55944, "name": "chr-F"}, {"type": "bleu", "value": 23.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53282, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "generaltest2022", "type": "generaltest2022", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 37.4, "name": "BLEU"}, {"type": "chrf", "value": 0.60634, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "multi30k_test_2016_flickr", "type": "multi30k-2016_flickr", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 38.5, "name": "BLEU"}, {"type": "chrf", "value": 0.62595, "name": "chr-F"}, {"type": "bleu", "value": 51.4, "name": "BLEU"}, {"type": "chrf", "value": 0.7163, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "multi30k_test_2017_flickr", "type": "multi30k-2017_flickr", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 37.3, "name": "BLEU"}, {"type": "chrf", "value": 0.62733, "name": "chr-F"}, {"type": "bleu", "value": 50.8, "name": "BLEU"}, {"type": "chrf", "value": 0.7185, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "multi30k_test_2017_mscoco", "type": "multi30k-2017_mscoco", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 33.8, "name": "BLEU"}, {"type": "chrf", "value": 0.59089, "name": "chr-F"}, {"type": "bleu", "value": 54.1, "name": "BLEU"}, {"type": "chrf", "value": 0.73129, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "multi30k_test_2018_flickr", "type": "multi30k-2018_flickr", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 30.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57155, "name": "chr-F"}, {"type": "bleu", "value": 41.9, "name": "BLEU"}, {"type": 
"chrf", "value": 0.65461, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation eng-fra"}, "dataset": {"name": "newsdiscusstest2015", "type": "newsdiscusstest2015", "args": "eng-fra"}, "metrics": [{"type": "bleu", "value": 38.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6366, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-cat"}, "dataset": {"name": "ntrex128", "type": "ntrex128", "args": "deu-cat"}, "metrics": [{"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.55033, "name": "chr-F"}, {"type": "bleu", "value": 28.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55854, "name": "chr-F"}, {"type": "bleu", "value": 27.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55034, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.55733, "name": "chr-F"}, {"type": "bleu", "value": 26.0, "name": "BLEU"}, {"type": "chrf", "value": 0.54208, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.52839, "name": "chr-F"}, {"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56966, "name": "chr-F"}, {"type": "bleu", "value": 36.3, "name": "BLEU"}, {"type": "chrf", "value": 0.61431, "name": "chr-F"}, {"type": "bleu", "value": 35.5, "name": "BLEU"}, {"type": "chrf", "value": 0.61695, "name": "chr-F"}, {"type": "bleu", "value": 37.2, "name": "BLEU"}, {"type": "chrf", "value": 0.6239, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62209, "name": "chr-F"}, {"type": "bleu", "value": 33.5, "name": "BLEU"}, {"type": "chrf", "value": 0.59859, "name": "chr-F"}, {"type": "bleu", "value": 33.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58128, "name": "chr-F"}, {"type": "bleu", "value": 40.3, "name": "BLEU"}, {"type": "chrf", "value": 0.64099, "name": "chr-F"}, {"type": "bleu", "value": 28.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55093, "name": "chr-F"}, 
{"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55325, "name": "chr-F"}, {"type": "bleu", "value": 27.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56188, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54001, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.51853, "name": "chr-F"}, {"type": "bleu", "value": 31.0, "name": "BLEU"}, {"type": "chrf", "value": 0.57116, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57962, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5691, "name": "chr-F"}, {"type": "bleu", "value": 30.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57389, "name": "chr-F"}, {"type": "bleu", "value": 30.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58788, "name": "chr-F"}, {"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.54276, "name": "chr-F"}, {"type": "bleu", "value": 34.2, "name": "BLEU"}, {"type": "chrf", "value": 0.59565, "name": "chr-F"}, {"type": "bleu", "value": 34.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60605, "name": "chr-F"}, {"type": "bleu", "value": 29.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57501, "name": "chr-F"}, {"type": "bleu", "value": 34.4, "name": "BLEU"}, {"type": "chrf", "value": 0.613, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57868, "name": "chr-F"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "chrf", "value": 0.5673, "name": "chr-F"}, {"type": "bleu", "value": 27.9, "name": "BLEU"}, {"type": "chrf", "value": 0.54222, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-cat"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "deu-cat"}, "metrics": [{"type": "bleu", "value": 44.3, "name": "BLEU"}, {"type": "chrf", "value": 0.63465, "name": 
"chr-F"}, {"type": "bleu", "value": 50.7, "name": "BLEU"}, {"type": "chrf", "value": 0.68258, "name": "chr-F"}, {"type": "bleu", "value": 47.4, "name": "BLEU"}, {"type": "chrf", "value": 0.68502, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.38047, "name": "chr-F"}, {"type": "bleu", "value": 43.1, "name": "BLEU"}, {"type": "chrf", "value": 0.63684, "name": "chr-F"}, {"type": "bleu", "value": 42.6, "name": "BLEU"}, {"type": "chrf", "value": 0.64207, "name": "chr-F"}, {"type": "bleu", "value": 49.4, "name": "BLEU"}, {"type": "chrf", "value": 0.68333, "name": "chr-F"}, {"type": "bleu", "value": 49.1, "name": "BLEU"}, {"type": "chrf", "value": 0.67724, "name": "chr-F"}, {"type": "bleu", "value": 51.6, "name": "BLEU"}, {"type": "chrf", "value": 0.68777, "name": "chr-F"}, {"type": "bleu", "value": 45.2, "name": "BLEU"}, {"type": "chrf", "value": 0.6453, "name": "chr-F"}, {"type": "bleu", "value": 53.3, "name": "BLEU"}, {"type": "chrf", "value": 0.72115, "name": "chr-F"}, {"type": "bleu", "value": 24.2, "name": "BLEU"}, {"type": "chrf", "value": 0.43857, "name": "chr-F"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.50848, "name": "chr-F"}, {"type": "bleu", "value": 20.0, "name": "BLEU"}, {"type": "chrf", "value": 0.4571, "name": "chr-F"}, {"type": "bleu", "value": 53.4, "name": "BLEU"}, {"type": "chrf", "value": 0.72159, "name": "chr-F"}, {"type": "bleu", "value": 47.1, "name": "BLEU"}, {"type": "chrf", "value": 0.67835, "name": "chr-F"}, {"type": "bleu", "value": 55.8, "name": "BLEU"}, {"type": "chrf", "value": 0.72875, "name": "chr-F"}, {"type": "bleu", "value": 44.6, "name": "BLEU"}, {"type": "chrf", "value": 0.65547, "name": "chr-F"}, {"type": "bleu", "value": 39.9, "name": "BLEU"}, {"type": "chrf", "value": 0.6165, "name": "chr-F"}, {"type": "bleu", "value": 53.5, "name": "BLEU"}, {"type": "chrf", "value": 0.72739, "name": "chr-F"}, {"type": "bleu", "value": 52.0, "name": "BLEU"}, 
{"type": "chrf", "value": 0.70655, "name": "chr-F"}, {"type": "bleu", "value": 43.7, "name": "BLEU"}, {"type": "chrf", "value": 0.65399, "name": "chr-F"}, {"type": "bleu", "value": 54.8, "name": "BLEU"}, {"type": "chrf", "value": 0.72083, "name": "chr-F"}, {"type": "bleu", "value": 49.7, "name": "BLEU"}, {"type": "chrf", "value": 0.67768, "name": "chr-F"}, {"type": "bleu", "value": 52.0, "name": "BLEU"}, {"type": "chrf", "value": 0.71178, "name": "chr-F"}, {"type": "bleu", "value": 60.4, "name": "BLEU"}, {"type": "chrf", "value": 0.75691, "name": "chr-F"}, {"type": "bleu", "value": 57.6, "name": "BLEU"}, {"type": "chrf", "value": 0.74818, "name": "chr-F"}, {"type": "bleu", "value": 58.7, "name": "BLEU"}, {"type": "chrf", "value": 0.76899, "name": "chr-F"}, {"type": "bleu", "value": 51.0, "name": "BLEU"}, {"type": "chrf", "value": 0.71775, "name": "chr-F"}, {"type": "bleu", "value": 47.8, "name": "BLEU"}, {"type": "chrf", "value": 0.69517, "name": "chr-F"}, {"type": "bleu", "value": 64.9, "name": "BLEU"}, {"type": "chrf", "value": 0.79442, "name": "chr-F"}, {"type": "bleu", "value": 66.3, "name": "BLEU"}, {"type": "chrf", "value": 0.81845, "name": "chr-F"}, {"type": "bleu", "value": 57.4, "name": "BLEU"}, {"type": "chrf", "value": 0.73277, "name": "chr-F"}, {"type": "bleu", "value": 61.5, "name": "BLEU"}, {"type": "chrf", "value": 0.76118, "name": "chr-F"}, {"type": "bleu", "value": 59.5, "name": "BLEU"}, {"type": "chrf", "value": 0.76742, "name": "chr-F"}, {"type": "bleu", "value": 23.4, "name": "BLEU"}, {"type": "chrf", "value": 0.43064, "name": "chr-F"}, {"type": "bleu", "value": 27.1, "name": "BLEU"}, {"type": "chrf", "value": 0.50795, "name": "chr-F"}, {"type": "bleu", "value": 60.7, "name": "BLEU"}, {"type": "chrf", "value": 0.76951, "name": "chr-F"}, {"type": "bleu", "value": 45.9, "name": "BLEU"}, {"type": "chrf", "value": 0.67782, "name": "chr-F"}, {"type": "bleu", "value": 49.6, "name": "BLEU"}, {"type": "chrf", "value": 0.67346, "name": "chr-F"}]}, 
{"task": {"type": "translation", "name": "Translation eng-fra"}, "dataset": {"name": "tico19-test", "type": "tico19-test", "args": "eng-fra"}, "metrics": [{"type": "bleu", "value": 40.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62989, "name": "chr-F"}, {"type": "bleu", "value": 50.0, "name": "BLEU"}, {"type": "chrf", "value": 0.72708, "name": "chr-F"}, {"type": "bleu", "value": 52.0, "name": "BLEU"}, {"type": "chrf", "value": 0.73154, "name": "chr-F"}, {"type": "bleu", "value": 34.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58383, "name": "chr-F"}, {"type": "bleu", "value": 37.0, "name": "BLEU"}, {"type": "chrf", "value": 0.59581, "name": "chr-F"}, {"type": "bleu", "value": 34.4, "name": "BLEU"}, {"type": "chrf", "value": 0.59798, "name": "chr-F"}, {"type": "bleu", "value": 45.4, "name": "BLEU"}, {"type": "chrf", "value": 0.68332, "name": "chr-F"}, {"type": "bleu", "value": 35.5, "name": "BLEU"}, {"type": "chrf", "value": 0.60469, "name": "chr-F"}, {"type": "bleu", "value": 42.8, "name": "BLEU"}, {"type": "chrf", "value": 0.67898, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2008", "type": "wmt-2008-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54926, "name": "chr-F"}, {"type": "bleu", "value": 25.5, "name": "BLEU"}, {"type": "chrf", "value": 0.53902, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55358, "name": "chr-F"}, {"type": "bleu", "value": 29.5, "name": "BLEU"}, {"type": "chrf", "value": 0.56491, "name": "chr-F"}, {"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.58764, "name": "chr-F"}, {"type": "bleu", "value": 32.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58848, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2009", "type": "wmt-2009-news", "args": "deu-fra"}, 
"metrics": [{"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5387, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54509, "name": "chr-F"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.53769, "name": "chr-F"}, {"type": "bleu", "value": 29.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57566, "name": "chr-F"}, {"type": "bleu", "value": 31.4, "name": "BLEU"}, {"type": "chrf", "value": 0.60372, "name": "chr-F"}, {"type": "bleu", "value": 30.0, "name": "BLEU"}, {"type": "chrf", "value": 0.57913, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.59749, "name": "chr-F"}, {"type": "bleu", "value": 32.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58921, "name": "chr-F"}, {"type": "bleu", "value": 32.3, "name": "BLEU"}, {"type": "chrf", "value": 0.59195, "name": "chr-F"}, {"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.61007, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2010", "type": "wmt-2010-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 29.5, "name": "BLEU"}, {"type": "chrf", "value": 0.57888, "name": "chr-F"}, {"type": "bleu", "value": 32.7, "name": "BLEU"}, {"type": "chrf", "value": 0.59408, "name": "chr-F"}, {"type": "bleu", "value": 32.4, "name": "BLEU"}, {"type": "chrf", "value": 0.59588, "name": "chr-F"}, {"type": "bleu", "value": 36.6, "name": "BLEU"}, {"type": "chrf", "value": 0.61978, "name": "chr-F"}, {"type": "bleu", "value": 37.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62513, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62193, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2011", "type": "wmt-2011-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 27.5, 
"name": "BLEU"}, {"type": "chrf", "value": 0.55704, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56696, "name": "chr-F"}, {"type": "bleu", "value": 34.3, "name": "BLEU"}, {"type": "chrf", "value": 0.61071, "name": "chr-F"}, {"type": "bleu", "value": 38.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62126, "name": "chr-F"}, {"type": "bleu", "value": 40.0, "name": "BLEU"}, {"type": "chrf", "value": 0.63139, "name": "chr-F"}, {"type": "bleu", "value": 35.2, "name": "BLEU"}, {"type": "chrf", "value": 0.61258, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2012", "type": "wmt-2012-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.56034, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57336, "name": "chr-F"}, {"type": "bleu", "value": 31.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59264, "name": "chr-F"}, {"type": "bleu", "value": 39.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62568, "name": "chr-F"}, {"type": "bleu", "value": 39.5, "name": "BLEU"}, {"type": "chrf", "value": 0.62725, "name": "chr-F"}, {"type": "bleu", "value": 34.2, "name": "BLEU"}, {"type": "chrf", "value": 0.61177, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2013", "type": "wmt-2013-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56475, "name": "chr-F"}, {"type": "bleu", "value": 31.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57187, "name": "chr-F"}, {"type": "bleu", "value": 33.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58938, "name": "chr-F"}, {"type": "bleu", "value": 35.2, "name": "BLEU"}, {"type": "chrf", "value": 0.59817, "name": "chr-F"}, {"type": "bleu", "value": 35.1, "name": "BLEU"}, {"type": "chrf", "value": 
0.59482, "name": "chr-F"}, {"type": "bleu", "value": 33.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59825, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation eng-fra"}, "dataset": {"name": "newstest2014", "type": "wmt-2014-news", "args": "eng-fra"}, "metrics": [{"type": "bleu", "value": 40.2, "name": "BLEU"}, {"type": "chrf", "value": 0.65438, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation eng-ron"}, "dataset": {"name": "newstest2016", "type": "wmt-2016-news", "args": "eng-ron"}, "metrics": [{"type": "bleu", "value": 32.2, "name": "BLEU"}, {"type": "chrf", "value": 0.59473, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2019", "type": "wmt-2019-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 35.9, "name": "BLEU"}, {"type": "chrf", "value": 0.62831, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2020", "type": "wmt-2020-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60408, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-fra"}, "dataset": {"name": "newstest2021", "type": "wmt-2021-news", "args": "deu-fra"}, "metrics": [{"type": "bleu", "value": 31.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58913, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 44,318 |
shhossain/opus-mt-en-to-bn
|
shhossain
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"bn",
"en",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-26T12:17:37Z |
2024-02-03T18:52:29+00:00
| 251 | 3 |
---
datasets:
- opus100
language:
- bn
- en
license: apache-2.0
metrics:
- sacrebleu
pipeline_tag: translation
widget:
- text: Will you come home tonight?
example_title: Example 1
- text: I am so sorry this is a day late, guys. Unfortunately, my internet was down
so it was out of my control.
example_title: Example 2
model-index:
- name: shhossain/opus-mt-en-to-bn
results:
- task:
type: translation
name: Translation
dataset:
name: opus100
type: opus100
split: validation
metrics:
- type: Bleu
value: 12.5374
- type: Validation Loss
value: 2.120669
- type: Training Loss
value: 1.7712
---
# English-Bengali Translation Model
This model is fine-tuned from `Helsinki-NLP/opus-mt-en-inc` for English-to-Bangla translation.
- **Developed by:** [shhossain](https://github.com/shhossain)
- **Model type:** [transformer-align]
- **Language(s) (NLP):** [English, Bengali]
- **License:** [apache-2.0]
- **Fine-tuned from model:** [Helsinki-NLP/opus-mt-en-inc](https://huggingface.co/Helsinki-NLP/opus-mt-en-inc)
## Use with transformers
```python
from transformers import pipeline
pipe = pipeline("translation", model="shhossain/opus-mt-en-to-bn")
```
| null |
Non_BioNLP
|
# English-Bengali Translation Model
This model is fine-tuned from `Helsinki-NLP/opus-mt-en-inc` for English-to-Bangla translation.
- **Developed by:** [shhossain](https://github.com/shhossain)
- **Model type:** [transformer-align]
- **Language(s) (NLP):** [English, Bengali]
- **License:** [apache-2.0]
- **Fine-tuned from model:** [Helsinki-NLP/opus-mt-en-inc](https://huggingface.co/Helsinki-NLP/opus-mt-en-inc)
## Use with transformers
```python
from transformers import pipeline
pipe = pipeline("translation", model="shhossain/opus-mt-en-to-bn")
```
|
{"datasets": ["opus100"], "language": ["bn", "en"], "license": "apache-2.0", "metrics": ["sacrebleu"], "pipeline_tag": "translation", "widget": [{"text": "Will you come home tonight?", "example_title": "Example 1"}, {"text": "I am so sorry this is a day late, guys. Unfortunately, my internet was down so it was out of my control.", "example_title": "Example 2"}], "model-index": [{"name": "shhossain/opus-mt-en-to-bn", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "opus100", "type": "opus100", "split": "validation"}, "metrics": [{"type": "Bleu", "value": 12.5374}, {"type": "Validation Loss", "value": 2.120669}, {"type": "Training Loss", "value": 1.7712}]}]}]}
|
task
|
[
"TRANSLATION"
] | 44,319 |
multimolecule/utrlm-te_el
|
multimolecule
|
fill-mask
|
[
"multimolecule",
"pytorch",
"safetensors",
"utrlm",
"Biology",
"RNA",
"fill-mask",
"rna",
"dataset:multimolecule/ensembl-genome-browser",
"license:agpl-3.0",
"region:us"
] | 2025-02-27T10:38:15Z |
2025-02-27T10:38:20+00:00
| 252 | 0 |
---
datasets:
- multimolecule/ensembl-genome-browser
language: rna
library_name: multimolecule
license: agpl-3.0
pipeline_tag: fill-mask
tags:
- Biology
- RNA
mask_token: <mask>
widget:
- example_title: HIV-1
text: GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU
output:
- label: '*'
score: 0.07707168161869049
- label: <null>
score: 0.07588472962379456
- label: U
score: 0.07178673148155212
- label: N
score: 0.06414645165205002
- label: Y
score: 0.06385370343923569
- example_title: microRNA-21
text: UAGC<mask>UAUCAGACUGAUGUUG
output:
- label: '*'
score: 0.07969731837511063
- label: <null>
score: 0.07818876206874847
- label: A
score: 0.07302683591842651
- label: N
score: 0.06714905053377151
- label: W
score: 0.0667526125907898
---
# UTR-LM
Pre-trained model on 5’ untranslated regions (5’UTRs) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives.
## Statement
_A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal.
> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.
The MultiMolecule team is committed to the principles of open access and open science.
We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals and conferences.
Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).
## Disclaimer
This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.
The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM).
> [!CAUTION]
> The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
> This is because
>
> The proposed method is published in a Closed Access / Author-Fee journal.
**The team releasing UTR-LM did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrlm-te_el`](https://huggingface.co/multimolecule/utrlm-te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level.
- **[`multimolecule/utrlm-mrl`](https://huggingface.co/multimolecule/utrlm-mrl)**: The UTR-LM model for Mean Ribosome Loading.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTR-LM MRL</td>
<td rowspan="2">6</td>
<td rowspan="2">128</td>
<td rowspan="2">16</td>
<td rowspan="2">512</td>
<td rowspan="2">1.21</td>
<td rowspan="2">0.35</td>
<td rowspan="2">0.18</td>
<td rowspan="2">1022</td>
</tr>
<tr>
<td>UTR-LM TE_EL</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm)
- **Data**:
- [Ensembl Genome Browser](https://ensembl.org)
- [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)
- [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013)
- **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1038/s42256-024-00823-9)
- **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
#### Masked Language Modeling
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="multimolecule/utrlm-te_el")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.07707168161869049,
'token': 23,
'token_str': '*',
'sequence': 'G G U C * C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.07588472962379456,
'token': 5,
'token_str': '<null>',
'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.07178673148155212,
'token': 9,
'token_str': 'U',
'sequence': 'G G U C U C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.06414645165205002,
'token': 10,
'token_str': 'N',
'sequence': 'G G U C N C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.06385370343923569,
'token': 12,
'token_str': 'Y',
'sequence': 'G G U C Y C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```
#### RNA Secondary Structure Prediction
You can use this model to predict the secondary structure of an RNA sequence:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> predictor = pipeline("rna-secondary-structure", model="multimolecule/utrlm-mrl")
>>> predictor("ggucuc")
{'sequence': 'G G U C U C',
'secondary_structure': '......',
'contact_map': [[0.4812554121017456, 0.47794032096862793, 0.4789176285266876, 0.48823264241218567, 0.474841445684433, 0.4968946874141693],
[0.47794032096862793, 0.49345624446868896, 0.48480257391929626, 0.4933702051639557, 0.4595194160938263, 0.48904451727867126],
[0.4789176285266876, 0.48480257391929626, 0.489326536655426, 0.49098923802375793, 0.48537197709083557, 0.4686800539493561],
[0.48823264241218567, 0.4933702051639557, 0.49098923802375793, 0.4644699990749359, 0.49569272994995117, 0.4653873145580292],
[0.474841445684433, 0.4595194160938263, 0.48537197709083557, 0.49569272994995117, 0.48744988441467285, 0.4952647387981415],
[0.4968946874141693, 0.48904451727867126, 0.4686800539493561, 0.4653873145580292, 0.4952647387981415, 0.4828569293022156]]}
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrLmModel
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmModel.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
output = model(**input)
```
#### Sequence Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForSequencePrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Token Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForTokenPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForTokenPrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForContactPrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io).
1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task.
3. **Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence.
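The mixed objective above amounts to a weighted sum of the three task losses. A minimal sketch (the weights `w_ss` and `w_mfe` are assumptions for illustration; this card does not state the paper's exact weighting):

```python
def utrlm_loss(mlm_loss: float, ss_loss: float, mfe_loss: float,
               w_ss: float = 1.0, w_mfe: float = 1.0) -> float:
    """Combine the self-supervised MLM loss with the two ViennaRNA-derived
    supervised losses (secondary structure, minimum free energy).
    The task weights here are hypothetical placeholders."""
    return mlm_loss + w_ss * ss_loss + w_mfe * mfe_loss
```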
### Training Data
The UTR-LM model was pre-trained on 5’ UTR sequences from three sources:
- **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality and manual gene annotations.
- **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50 nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements.
- **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle).
UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline:
1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences
2. identified and removed duplicate sequences
3. truncated the sequences to fit within a range of 30 to 1022 bp
4. filtered out incorrect and low-quality sequences
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
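Steps 2–4 of the pipeline (plus the tokenizer-style T→U conversion) can be sketched in plain Python. This is a hypothetical illustration, not the authors' actual preprocessing code, and the "low-quality" filter here is a simple stand-in (reject non-ACGUN characters):

```python
import re

def preprocess_utrs(sequences, min_len=30, max_len=1022):
    """Sketch of the card's preprocessing steps 2-4:
    deduplicate, enforce the 30-1022 nt length range, drop malformed sequences.
    The quality filter is an assumed placeholder."""
    seen = set()
    cleaned = []
    for seq in sequences:
        seq = seq.upper().replace("T", "U")  # RnaTokenizer-style T -> U
        if seq in seen:                       # step 2: remove duplicates
            continue
        seen.add(seq)
        if len(seq) < min_len:                # step 3: enforce length range
            continue
        seq = seq[:max_len]                   # truncate overly long sequences
        if re.search(r"[^ACGUN]", seq):       # step 4: drop malformed sequences
            continue
        cleaned.append(seq)
    return cleaned
```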
### Training Procedure
#### Preprocessing
UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
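The 80/10/10 masking scheme above can be sketched in plain Python. This is a minimal illustration with a toy nucleotide vocabulary, not the actual training code:

```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", mlm_prob=0.15, seed=0):
    """BERT-style masking: select ~15% of positions; of those,
    80% become <mask>, 10% a different random token, 10% stay unchanged.
    Returns the corrupted sequence and the prediction targets."""
    rng = random.Random(seed)
    masked, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mlm_prob:
            continue                 # position not selected for masking
        labels[i] = tok              # the model must predict the original token
        roll = rng.random()
        if roll < 0.8:
            masked[i] = mask_token   # 80%: replace with <mask>
        elif roll < 0.9:
            masked[i] = rng.choice([v for v in vocab if v != tok])  # 10%: random token
        # else: 10% remaining cases, token left as is
    return masked, labels
```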
#### PreTraining
The model was trained on two clusters:
1. 4 NVIDIA V100 GPUs with 16GiB of memory each.
2. 4 NVIDIA P100 GPUs with 32GiB of memory each.
## Citation
**BibTeX**:
```bibtex
@article {chu2023a,
author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi},
title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions},
elocation-id = {2023.10.11.561938},
year = {2023},
doi = {10.1101/2023.10.11.561938},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938},
eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
| null |
BioNLP
|
# UTR-LM
Pre-trained model on 5’ untranslated regions (5’UTRs) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives.
## Statement
_A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal.
> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.
The MultiMolecule team is committed to the principles of open access and open science.
We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals and conferences.
Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).
## Disclaimer
This is an UNOFFICIAL implementation of the [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.
The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM).
> [!CAUTION]
> The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
> This is because
>
> The proposed method is published in a Closed Access / Author-Fee journal.
**The team releasing UTR-LM did not write this model card for this model, so this model card has been written by the MultiMolecule team.**
## Model Details
UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those texts. Please refer to the [Training Details](#training-details) section for more information on the training process.
### Variations
- **[`multimolecule/utrlm-te_el`](https://huggingface.co/multimolecule/utrlm-te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level.
- **[`multimolecule/utrlm-mrl`](https://huggingface.co/multimolecule/utrlm-mrl)**: The UTR-LM model for Mean Ribosome Loading.
### Model Specification
<table>
<thead>
<tr>
<th>Variants</th>
<th>Num Layers</th>
<th>Hidden Size</th>
<th>Num Heads</th>
<th>Intermediate Size</th>
<th>Num Parameters (M)</th>
<th>FLOPs (G)</th>
<th>MACs (G)</th>
<th>Max Num Tokens</th>
</tr>
</thead>
<tbody>
<tr>
<td>UTR-LM MRL</td>
<td rowspan="2">6</td>
<td rowspan="2">128</td>
<td rowspan="2">16</td>
<td rowspan="2">512</td>
<td rowspan="2">1.21</td>
<td rowspan="2">0.35</td>
<td rowspan="2">0.18</td>
<td rowspan="2">1022</td>
</tr>
<tr>
<td>UTR-LM TE_EL</td>
</tr>
</tbody>
</table>
### Links
- **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm)
- **Data**:
- [Ensembl Genome Browser](https://ensembl.org)
- [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)
- [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013)
- **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1038/s42256-024-00823-9)
- **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM)
## Usage
The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
```bash
pip install multimolecule
```
### Direct Use
#### Masked Language Modeling
You can use this model directly with a pipeline for masked language modeling:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="multimolecule/utrlm-te_el")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.07707168161869049,
'token': 23,
'token_str': '*',
'sequence': 'G G U C * C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.07588472962379456,
'token': 5,
'token_str': '<null>',
'sequence': 'G G U C C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.07178673148155212,
'token': 9,
'token_str': 'U',
'sequence': 'G G U C U C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.06414645165205002,
'token': 10,
'token_str': 'N',
'sequence': 'G G U C N C U C U G G U U A G A C C A G A U C U G A G C C U'},
{'score': 0.06385370343923569,
'token': 12,
'token_str': 'Y',
'sequence': 'G G U C Y C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```
#### RNA Secondary Structure Prediction
You can use this model to predict the secondary structure of an RNA sequence:
```python
>>> import multimolecule # you must import multimolecule to register models
>>> from transformers import pipeline
>>> predictor = pipeline("rna-secondary-structure", model="multimolecule/utrlm-mrl")
>>> predictor("ggucuc")
{'sequence': 'G G U C U C',
'secondary_structure': '......',
'contact_map': [[0.4812554121017456, 0.47794032096862793, 0.4789176285266876, 0.48823264241218567, 0.474841445684433, 0.4968946874141693],
[0.47794032096862793, 0.49345624446868896, 0.48480257391929626, 0.4933702051639557, 0.4595194160938263, 0.48904451727867126],
[0.4789176285266876, 0.48480257391929626, 0.489326536655426, 0.49098923802375793, 0.48537197709083557, 0.4686800539493561],
[0.48823264241218567, 0.4933702051639557, 0.49098923802375793, 0.4644699990749359, 0.49569272994995117, 0.4653873145580292],
[0.474841445684433, 0.4595194160938263, 0.48537197709083557, 0.49569272994995117, 0.48744988441467285, 0.4952647387981415],
[0.4968946874141693, 0.48904451727867126, 0.4686800539493561, 0.4653873145580292, 0.4952647387981415, 0.4828569293022156]]}
```
### Downstream Use
#### Extract Features
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, UtrLmModel
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmModel.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
output = model(**input)
```
#### Sequence Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForSequencePrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])
output = model(**input, labels=label)
```
#### Token Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.
Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForTokenPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForTokenPrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
#### Contact Classification / Regression
> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, UtrLmForContactPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/utrlm-te_el")
model = UtrLmForContactPrediction.from_pretrained("multimolecule/utrlm-te_el")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
## Training Details
UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io).
1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input then runs the entire masked sentence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task.
3. **Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence.
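The mixed objective above amounts to a weighted sum of the three task losses. A minimal sketch (the weights `w_ss` and `w_mfe` are assumptions for illustration; this card does not state the paper's exact weighting):

```python
def utrlm_loss(mlm_loss: float, ss_loss: float, mfe_loss: float,
               w_ss: float = 1.0, w_mfe: float = 1.0) -> float:
    """Combine the self-supervised MLM loss with the two ViennaRNA-derived
    supervised losses (secondary structure, minimum free energy).
    The task weights here are hypothetical placeholders."""
    return mlm_loss + w_ss * ss_loss + w_mfe * mfe_loss
```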
### Training Data
The UTR-LM model was pre-trained on 5’ UTR sequences from three sources:
- **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality and manual gene annotations.
- **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50 nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements.
- **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle).
UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline:
1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences
2. identified and removed duplicate sequences
3. truncated the sequences to fit within a range of 30 to 1022 bp
4. filtered out incorrect and low-quality sequences
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
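Steps 2–4 of the pipeline (plus the tokenizer-style T→U conversion) can be sketched in plain Python. This is a hypothetical illustration, not the authors' actual preprocessing code, and the "low-quality" filter here is a simple stand-in (reject non-ACGUN characters):

```python
import re

def preprocess_utrs(sequences, min_len=30, max_len=1022):
    """Sketch of the card's preprocessing steps 2-4:
    deduplicate, enforce the 30-1022 nt length range, drop malformed sequences.
    The quality filter is an assumed placeholder."""
    seen = set()
    cleaned = []
    for seq in sequences:
        seq = seq.upper().replace("T", "U")  # RnaTokenizer-style T -> U
        if seq in seen:                       # step 2: remove duplicates
            continue
        seen.add(seq)
        if len(seq) < min_len:                # step 3: enforce length range
            continue
        seq = seq[:max_len]                   # truncate overly long sequences
        if re.search(r"[^ACGUN]", seq):       # step 4: drop malformed sequences
            continue
        cleaned.append(seq)
    return cleaned
```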
### Training Procedure
#### Preprocessing
UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
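The 80/10/10 masking scheme above can be sketched in plain Python. This is a minimal illustration with a toy nucleotide vocabulary, not the actual training code:

```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", mlm_prob=0.15, seed=0):
    """BERT-style masking: select ~15% of positions; of those,
    80% become <mask>, 10% a different random token, 10% stay unchanged.
    Returns the corrupted sequence and the prediction targets."""
    rng = random.Random(seed)
    masked, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mlm_prob:
            continue                 # position not selected for masking
        labels[i] = tok              # the model must predict the original token
        roll = rng.random()
        if roll < 0.8:
            masked[i] = mask_token   # 80%: replace with <mask>
        elif roll < 0.9:
            masked[i] = rng.choice([v for v in vocab if v != tok])  # 10%: random token
        # else: 10% remaining cases, token left as is
    return masked, labels
```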
#### PreTraining
The model was trained on two clusters:
1. 4 NVIDIA V100 GPUs with 16GiB of memory each.
2. 4 NVIDIA P100 GPUs with 32GiB of memory each.
## Citation
**BibTeX**:
```bibtex
@article {chu2023a,
author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi},
title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions},
elocation-id = {2023.10.11.561938},
year = {2023},
doi = {10.1101/2023.10.11.561938},
publisher = {Cold Spring Harbor Laboratory},
abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938},
eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model.
## License
This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
|
{"datasets": ["multimolecule/ensembl-genome-browser"], "language": "rna", "library_name": "multimolecule", "license": "agpl-3.0", "pipeline_tag": "fill-mask", "tags": ["Biology", "RNA"], "mask_token": "<mask>", "widget": [{"example_title": "HIV-1", "text": "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU", "output": [{"label": "*", "score": 0.07707168161869049}, {"label": "<null>", "score": 0.07588472962379456}, {"label": "U", "score": 0.07178673148155212}, {"label": "N", "score": 0.06414645165205002}, {"label": "Y", "score": 0.06385370343923569}]}, {"example_title": "microRNA-21", "text": "UAGC<mask>UAUCAGACUGAUGUUG", "output": [{"label": "*", "score": 0.07969731837511063}, {"label": "<null>", "score": 0.07818876206874847}, {"label": "A", "score": 0.07302683591842651}, {"label": "N", "score": 0.06714905053377151}, {"label": "W", "score": 0.0667526125907898}]}]}
|
task
|
[
"TRANSLATION"
] | 44,320 |
neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16
|
neuralmagic
|
image-text-to-text
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"vllm",
"vision",
"w4a16",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-72B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-72B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | 2025-01-23T22:32:19Z |
2025-03-31T23:47:26+00:00
| 144 | 0 |
---
base_model: Qwen/Qwen2-VL-72B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
tags:
- vllm
- vision
- w4a16
---
# Qwen2-VL-72B-Instruct-quantized-w4a16
## Model Overview
- **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
- **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableQwen2VLForConditionalGeneration
from llmcompressor.transformers.utils.data_collator import qwen2_vl_data_collator
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme
# Load model.
model_id = "Qwen/Qwen2-VL-72B-Instruct"
model = TraceableQwen2VLForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Recipe
recipe = GPTQModifier(
targets="Linear",
config_groups={
"config_group": QuantizationScheme(
targets=["Linear"],
weights=QuantizationArgs(
num_bits=4,
type=QuantizationType.INT,
strategy=QuantizationStrategy.GROUP,
group_size=128,
symmetric=True,
dynamic=False,
actorder=ActivationOrdering.WEIGHT,
),
),
},
sequential_targets=["Qwen2VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
update_size=NUM_CALIBRATION_SAMPLES,
dampening_frac=dampening_frac
)
SAVE_DIR=f"{model_id.split('/')[1]}-quantized.w4a16
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=qwen2_vl_data_collator,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/pixtral-12b-quantized.w8a8 \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2-VL-72B-Instruct</th>
<th>nm-testing/Qwen2-VL-72B-Instruct-quantized.W4A16</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>62.11</td>
<td>60.11</td>
<td>96.78%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>82.51</td>
<td>82.38</td>
<td>99.84%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>95.01</td>
<td>94.94</td>
<td>99.93%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>83.40</td>
<td>80.72</td>
<td>96.78%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>66.57</td>
<td>64.66</td>
<td>97.13%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>77.92</b></td>
<td><b>76.56</b></td>
<td><b>98.26</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>68.60</td>
<td>66.45</td>
<td>96.87%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>82.70</td>
<td>82.35</td>
<td>99.58%</td>
</tr>
</tbody>
</table>
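The recovery column above is simply the quantized score expressed as a percentage of the unquantized baseline; a quick check against two rows of the table:

```python
def recovery(baseline: float, quantized: float) -> float:
    """Quantized-model score as a percentage of the baseline score."""
    return 100.0 * quantized / baseline

print(round(recovery(62.11, 60.11), 2))  # 96.78 -- MMMU (val, CoT) row
print(round(recovery(82.70, 82.35), 2))  # 99.58 -- MMLU (5-shot) row
```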
## Inference Performance
This model achieves up to 3.7x speedup in single-stream deployment and up to 3.3x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Number of GPUs</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="3" valign="top">A100</th>
<td>4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>6.5</td>
<td>77</td>
<td>4.6</td>
<td>110</td>
<td>4.4</td>
<td>113</td>
</tr>
<tr>
<td>2</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
<td>1.85</td>
<td>7.2</td>
<td>139</td>
<td>4.9</td>
<td>206</td>
<td>4.8</td>
<td>211</td>
</tr>
<tr>
<td>1</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>3.32</td>
<td>10.0</td>
<td>202</td>
<td>5.0</td>
<td>398</td>
<td>4.8</td>
<td>419</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100</td>
<td>4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>4.4</td>
<td>66</td>
<td>3.0</td>
<td>97</td>
<td>2.9</td>
<td>99</td>
</tr>
<tr>
<td>2</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
<td>1.79</td>
<td>4.7</td>
<td>119</td>
<td>3.3</td>
<td>173</td>
<td>3.2</td>
<td>177</td>
</tr>
<tr>
<td>1</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.60</td>
<td>6.4</td>
<td>172</td>
<td>4.3</td>
<td>253</td>
<td>4.2</td>
<td>259</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
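The QPD figures can be reproduced from the latencies once an hourly price is fixed; a sketch of the single-stream arithmetic, using an illustrative GPU price (the actual Lambda Labs rates are not stated here):

```python
def queries_per_dollar(latency_s: float, num_gpus: int, price_per_gpu_hour: float) -> float:
    """Single-stream QPD: one query in flight at a time, so
    queries/hour = 3600 / latency, divided by the hourly instance cost."""
    queries_per_hour = 3600.0 / latency_s
    cost_per_hour = num_gpus * price_per_gpu_hour
    return queries_per_hour / cost_per_hour

# e.g. 10 s latency on a single GPU billed at a hypothetical $1.80/hr:
print(round(queries_per_dollar(10.0, 1, 1.80), 2))  # 200.0
```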
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="3" valign="top">A100x4</th>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>0.3</td>
<td>169</td>
<td>1.1</td>
<td>538</td>
<td>1.2</td>
<td>595</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
<td>1.84</td>
<td>0.6</td>
<td>293</td>
<td>2.0</td>
<td>1021</td>
<td>2.3</td>
<td>1135</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.73</td>
<td>0.6</td>
<td>314</td>
<td>3.2</td>
<td>1591</td>
<td>4.0</td>
<td>2019</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>0.5</td>
<td>137</td>
<td>1.2</td>
<td>356</td>
<td>1.3</td>
<td>377</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
<td>1.70</td>
<td>0.8</td>
<td>236</td>
<td>2.2</td>
<td>623</td>
<td>2.4</td>
<td>669</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.35</td>
<td>1.3</td>
<td>350</td>
<td>3.3</td>
<td>910</td>
<td>3.6</td>
<td>994</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
| null |
Non_BioNLP
|
# Qwen2-VL-72B-Instruct-quantized-w4a16
## Model Overview
- **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
- **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableQwen2VLForConditionalGeneration
from llmcompressor.transformers.utils.data_collator import qwen2_vl_data_collator
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme
# Load model.
model_id = "Qwen/Qwen2-VL-72B-Instruct"
model = TraceableQwen2VLForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Recipe
recipe = GPTQModifier(
targets="Linear",
config_groups={
"config_group": QuantizationScheme(
targets=["Linear"],
weights=QuantizationArgs(
num_bits=4,
type=QuantizationType.INT,
strategy=QuantizationStrategy.GROUP,
group_size=128,
symmetric=True,
dynamic=False,
actorder=ActivationOrdering.WEIGHT,
),
),
},
sequential_targets=["Qwen2VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
update_size=NUM_CALIBRATION_SAMPLES,
dampening_frac=dampening_frac
)
SAVE_DIR=f"{model_id.split('/')[1]}-quantized.w4a16
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=qwen2_vl_data_collator,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/pixtral-12b-quantized.w8a8 \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2-VL-72B-Instruct</th>
<th>nm-testing/Qwen2-VL-72B-Instruct-quantized.W4A16</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>62.11</td>
<td>60.11</td>
<td>96.78%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>82.51</td>
<td>82.38</td>
<td>99.84%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>95.01</td>
<td>94.94</td>
<td>99.93%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>83.40</td>
<td>80.72</td>
<td>96.78%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>66.57</td>
<td>64.66</td>
<td>97.13%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>77.92</b></td>
<td><b>76.56</b></td>
<td><b>98.26</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>68.60</td>
<td>66.45</td>
<td>96.87%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>82.70</td>
<td>82.35</td>
<td>99.58%</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 3.7x speedup in single-stream deployment and up to 3.3x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Number of GPUs</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="3" valign="top">A100</th>
<td>4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>6.5</td>
<td>77</td>
<td>4.6</td>
<td>110</td>
<td>4.4</td>
<td>113</td>
</tr>
<tr>
<td>2</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
<td>1.85</td>
<td>7.2</td>
<td>139</td>
<td>4.9</td>
<td>206</td>
<td>4.8</td>
<td>211</td>
</tr>
<tr>
<td>1</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>3.32</td>
<td>10.0</td>
<td>202</td>
<td>5.0</td>
<td>398</td>
<td>4.8</td>
<td>419</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100</td>
<td>4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>4.4</td>
<td>66</td>
<td>3.0</td>
<td>97</td>
<td>2.9</td>
<td>99</td>
</tr>
<tr>
<td>2</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
<td>1.79</td>
<td>4.7</td>
<td>119</td>
<td>3.3</td>
<td>173</td>
<td>3.2</td>
<td>177</td>
</tr>
<tr>
<td>1</td>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.60</td>
<td>6.4</td>
<td>172</td>
<td>4.3</td>
<td>253</td>
<td>4.2</td>
<td>259</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="3" valign="top">A100x4</th>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>0.3</td>
<td>169</td>
<td>1.1</td>
<td>538</td>
<td>1.2</td>
<td>595</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
<td>1.84</td>
<td>0.6</td>
<td>293</td>
<td>2.0</td>
<td>1021</td>
<td>2.3</td>
<td>1135</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.73</td>
<td>0.6</td>
<td>314</td>
<td>3.2</td>
<td>1591</td>
<td>4.0</td>
<td>2019</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x4</td>
<td>Qwen/Qwen2-VL-72B-Instruct</td>
<td></td>
<td>0.5</td>
<td>137</td>
<td>1.2</td>
<td>356</td>
<td>1.3</td>
<td>377</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
<td>1.70</td>
<td>0.8</td>
<td>236</td>
<td>2.2</td>
<td>623</td>
<td>2.4</td>
<td>669</td>
</tr>
<tr>
<td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
<td>2.35</td>
<td>1.3</td>
<td>350</td>
<td>3.3</td>
<td>910</td>
<td>3.6</td>
<td>994</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
|
{"base_model": "Qwen/Qwen2-VL-72B-Instruct", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "license_link": "https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md", "tags": ["vllm", "vision", "w4a16"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,321 |
gaudi/opus-mt-xh-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-17T00:18:04Z |
2024-10-18T22:57:18+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-xh-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs; see the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-xh-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-xh-en --output_dir ./ctranslate2/opus-mt-xh-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-xh-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-xh-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-xh-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-xh-en) by Helsinki-NLP.
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 44,322 |
heid5356/distilbert-base-uncased-distilled-clinc
|
heid5356
| null |
[
"pytorch",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-10-18T08:08:11Z |
2024-10-20T15:00:38+00:00
| 6 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9487096774193549
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1580
- Accuracy: 0.9487
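This checkpoint was produced by knowledge distillation from a fine-tuned teacher. The card does not include the distillation code, but the usual soft-target objective can be sketched as follows (an illustrative NumPy version with dummy logits, not the actual training script):

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled softmax over the last axis, with max-subtraction
    # for numerical stability.
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in the standard soft-target formulation.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    kl = np.sum(p_teacher * (np.log(p_teacher) - log_p_student), axis=-1)
    return float((T ** 2) * np.mean(kl))

# Dummy batch: 2 examples, 151 intent classes (CLINC150 plus out-of-scope).
rng = np.random.default_rng(0)
teacher = rng.normal(size=(2, 151))
student = rng.normal(size=(2, 151))
print(distillation_loss(student, teacher))  # non-negative; 0 only if the distributions match
```

In practice this term is usually mixed with the ordinary cross-entropy on the hard labels.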
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7033 | 1.0 | 318 | 1.2016 | 0.7610 |
| 0.9281 | 2.0 | 636 | 0.6128 | 0.8855 |
| 0.4901 | 3.0 | 954 | 0.3447 | 0.9252 |
| 0.289 | 4.0 | 1272 | 0.2389 | 0.9403 |
| 0.2016 | 5.0 | 1590 | 0.2000 | 0.9455 |
| 0.1647 | 6.0 | 1908 | 0.1826 | 0.9484 |
| 0.1446 | 7.0 | 2226 | 0.1723 | 0.9487 |
| 0.1329 | 8.0 | 2544 | 0.1672 | 0.9477 |
| 0.1255 | 9.0 | 2862 | 0.1639 | 0.9494 |
| 0.1211 | 10.0 | 3180 | 0.1621 | 0.9497 |
| 0.1171 | 11.0 | 3498 | 0.1591 | 0.9500 |
| 0.1149 | 12.0 | 3816 | 0.1585 | 0.9494 |
| 0.1136 | 13.0 | 4134 | 0.1580 | 0.9487 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.4.1+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9487096774193549, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,323 |
bhaskars113/diageo-occasions-needs-theme-model
|
bhaskars113
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-10-23T19:16:08Z |
2023-10-23T19:16:35+00:00
| 5 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# bhaskars113/diageo-occasions-needs-theme-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
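The contrastive fine-tuning step works by turning a handful of labeled texts into sentence pairs: texts sharing a label become positive pairs, texts with different labels become negative pairs. A minimal sketch of that pair generation (illustrative only; the `setfit` library handles this internally):

```python
from itertools import combinations

def make_contrastive_pairs(texts, labels):
    """Build (text_a, text_b, similarity) triples from few-shot examples."""
    pairs = []
    for i, j in combinations(range(len(texts)), 2):
        # Same label -> positive pair (target similarity 1.0),
        # different label -> negative pair (target similarity 0.0).
        similarity = 1.0 if labels[i] == labels[j] else 0.0
        pairs.append((texts[i], texts[j], similarity))
    return pairs

examples = ["great film", "loved the movie", "awful pizza", "terrible meal"]
labels = ["positive", "positive", "negative", "negative"]
pairs = make_contrastive_pairs(examples, labels)
print(len(pairs))  # 6 pairs from 4 examples: C(4, 2)
```

This quadratic pairing is what lets SetFit squeeze many training signals out of very few labeled examples.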
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("bhaskars113/diageo-occasions-needs-theme-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,324 |
minnehwg/finetune-newwiki-summarization-ver-augmented
|
minnehwg
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-15T04:25:12Z |
2024-05-15T13:23:12+00:00
| 12 | 0 |
---
license: mit
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: finetune-newwiki-summarization-ver-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-newwiki-summarization-ver-augmented
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4282
- Rouge1: 48.7749
- Rouge2: 26.3665
- Rougel: 35.7765
- Rougelsum: 38.0111
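As a quick reminder of what these metrics measure, ROUGE-N scores n-gram overlap between the generated summary and the reference. A minimal pure-Python sketch of ROUGE-1 F1 (illustrative; the reported scores come from the standard ROUGE tooling, which also handles stemming and tokenization):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))  # 5/6 ≈ 0.833
```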
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.6784 | 1.0 | 2312 | 0.5136 | 46.7374 | 23.3000 | 33.5379 | 35.8923 |
| 0.6015 | 2.0 | 4624 | 0.4759 | 47.7112 | 24.5817 | 34.4939 | 36.9831 |
| 0.5587 | 3.0 | 6936 | 0.4543 | 48.4891 | 25.6592 | 35.2310 | 37.5477 |
| 0.5128 | 4.0 | 9248 | 0.4405 | 48.7777 | 26.0690 | 35.5187 | 37.7896 |
| 0.4899 | 5.0 | 11560 | 0.4338 | 48.6758 | 26.0670 | 35.5783 | 37.8850 |
| 0.4796 | 6.0 | 13872 | 0.4295 | 48.8914 | 26.5018 | 35.8671 | 38.1289 |
| 0.4671 | 7.0 | 16184 | 0.4282 | 48.7749 | 26.3665 | 35.7765 | 38.0111 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
|
{"license": "mit", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetune-newwiki-summarization-ver-augmented", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 44,325 |
ostoveland/test4
|
ostoveland
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:24000",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-22T22:02:39Z |
2024-06-22T22:03:19+00:00
| 11 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:24000
- loss:TripletLoss
widget:
- source_sentence: 'query: Bytte regulator varmekabler'
sentences:
- 'query: legge varmekabler i takrenner i sameie'
- 'query: Garasjeport'
- 'query: Skriftlig vurdering av fuktskade/vannskade i sokkeleilighet.'
- source_sentence: 'query: Opprette hybler i enebolig.'
sentences:
- 'query: Helrenovering av bad 2,4 m^2 og toalettrom'
- 'query: Innvendig paneling av hytte på Budor'
- 'query: Vurdere muligheter for lading av elbil/hybrid'
- source_sentence: 'query: Mikrosement'
sentences:
- 'query: Legge plater med sløyfer til vannbåren varme 45 m2'
- 'query: Mikrosement på bad'
- 'query: * Fortsatt ledig: Spraylakkere 4 spisestuestoler'
- source_sentence: 'query: Ny hage til nytt hus ca 400 kvm'
sentences:
- 'query: Nytt lag med singel i innkjørsel'
- 'query: Skifte bordkledning'
- 'query: Reparere murtrapp IG legge skiferstein'
- source_sentence: 'query: Betongskjæring'
sentences:
- 'query: * Fortsatt ledig: Membran legging'
- 'query: Drenering av hus'
- 'query: Saging av hull til vindu'
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test4")
# Run inference
sentences = [
'query: Betongskjæring',
'query: Saging av hull til vindu',
'query: Drenering av hus',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.31 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.29 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.93 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-------------------------------------------------------------------------------------|:-------------------------------------------|:----------------------------------------------|
| <code>query: Installere radonsug/radonvifte i kjeller</code> | <code>query: Radon sikring enebolig</code> | <code>query: Mikrosement på bad</code> |
| <code>query: Bytte nedre del av en takrenne i klassisk bygård (fra 2. etasje)</code> | <code>query: Pipebeslag</code> | <code>query: Riving av bad</code> |
| <code>query: Gjerde</code> | <code>query: Flettverkgjerde 65 m</code> | <code>query: glassplate til salongbord</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1
}
```
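With the parameters above, the loss pushes each anchor query to sit at least `triplet_margin` closer (in Euclidean distance) to its positive than to its negative. A small NumPy sketch of this objective (illustrative; training used the `sentence-transformers` implementation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Euclidean distances, matching TripletDistanceMetric.EUCLIDEAN.
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    # Hinge: zero once the negative is at least `margin` farther than the positive.
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 384))                   # batch of 4 embeddings, dim 384
positive = anchor + 0.1 * rng.normal(size=(4, 384))  # close to the anchor
negative = rng.normal(size=(4, 384))                 # unrelated
print(triplet_loss(anchor, positive, negative))  # ~0 when triplets are already satisfied
```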
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.4725 |
| 0.6667 | 1000 | 0.2214 |
| 1.0 | 1500 | 0.1647 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
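The `Pooling` module above is configured with `pooling_mode_mean_tokens: True`, i.e. the sentence embedding is the average of the non-padding token embeddings. A minimal plain-Python sketch of that step (illustrative only — the actual sentence-transformers implementation operates on batched tensors):

```python
# Sketch of mean pooling as configured in the Pooling module above
# (pooling_mode_mean_tokens=True). Padding positions are excluded
# via the attention mask, exactly as the library does on tensors.

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, counting only non-padding positions."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:  # skip padding tokens
            for i, v in enumerate(vec):
                sums[i] += v
            count += 1
    return [s / count for s in sums]

if __name__ == "__main__":
    tokens = [[1.0, 3.0], [3.0, 5.0], [9.0, 9.0]]  # last token is padding
    mask = [1, 1, 0]
    print(mean_pool(tokens, mask))  # -> [2.0, 4.0]
```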
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test4")
# Run inference
sentences = [
'query: Betongskjæring',
'query: Saging av hull til vindu',
'query: Drenering av hus',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.31 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.29 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.93 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-------------------------------------------------------------------------------------|:-------------------------------------------|:----------------------------------------------|
| <code>query: Installere radonsug/radonvifte i kjeller</code> | <code>query: Radon sikring enebolig</code> | <code>query: Mikrosement på bad</code> |
| <code>query: Bytte nedre del av en takrenne i klassisk bygård (fra 2. etasje)</code> | <code>query: Pipebeslag</code> | <code>query: Riving av bad</code> |
| <code>query: Gjerde</code> | <code>query: Flettverkgjerde 65 m</code> | <code>query: glassplate til salongbord</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1
}
```
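The parameters above select a Euclidean distance metric and a margin of 1. As a hedged sketch (the real loss is computed by `sentence_transformers.losses.TripletLoss` over batched tensors), the per-triplet objective looks like this:

```python
import math

# Illustrative per-triplet computation of the TripletLoss configured above:
# Euclidean distance, triplet_margin = 1. Not the library implementation.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

if __name__ == "__main__":
    a, p, n = [0.0, 0.0], [0.0, 1.0], [3.0, 4.0]
    # d(a, p) = 1, d(a, n) = 5 -> max(0, 1 - 5 + 1) = 0.0
    print(triplet_loss(a, p, n))
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive, which is what training pushes the embeddings toward.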
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.4725 |
| 0.6667 | 1000 | 0.2214 |
| 1.0 | 1500 | 0.1647 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:24000", "loss:TripletLoss"], "widget": [{"source_sentence": "query: Bytte regulator varmekabler", "sentences": ["query: legge varmekabler i takrenner i sameie", "query: Garasjeport", "query: Skriftlig vurdering av fuktskade/vannskade i sokkeleilighet."]}, {"source_sentence": "query: Opprette hybler i enebolig.", "sentences": ["query: Helrenovering av bad 2,4 m^2 og toalettrom", "query: Innvendig paneling av hytte på Budor", "query: Vurdere muligheter for lading av elbil/hybrid"]}, {"source_sentence": "query: Mikrosement", "sentences": ["query: Legge plater med sløyfer til vannbåren varme 45 m2", "query: Mikrosement på bad", "query: * Fortsatt ledig: Spraylakkere 4 spisestuestoler"]}, {"source_sentence": "query: Ny hage til nytt hus ca 400 kvm", "sentences": ["query: Nytt lag med singel i innkjørsel", "query: Skifte bordkledning", "query: Reparere murtrapp IG legge skiferstein"]}, {"source_sentence": "query: Betongskjæring", "sentences": ["query: * Fortsatt ledig: Membran legging", "query: Drenering av hus", "query: Saging av hull til vindu"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,326 |
FINGU-AI/L3-72b-Large
|
FINGU-AI
| null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | 2025-01-20T05:03:45Z |
2025-02-02T12:59:10+00:00
| 9 | 0 |
---
license: apache-2.0
---
# FINGU-AI/L3-72b-Large
## Overview
`FINGU-AI/L3-72b-Large` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between languages, as well as supporting other custom NLP tasks through flexible input.
## Example Usage
### Installation
Make sure to install the required packages:
```bash
pip install torch transformers
```
### Loading the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/L3-72b-Large'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "system", "content": "translate korean to Uzbek"},
{"role": "user", "content": """새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""},
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to('cuda')
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Translation
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
| null |
Non_BioNLP
|
|
{"license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 44,327 |
Realgon/N_roberta_imdb_padding50model
|
Realgon
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-24T16:30:20Z |
2023-12-24T18:51:31+00:00
| 7 | 0 |
---
base_model: roberta-base
datasets:
- imdb
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: N_roberta_imdb_padding50model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.95304
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_roberta_imdb_padding50model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5385
- Accuracy: 0.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
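With `lr_scheduler_type: linear` and no warmup, the learning rate decays from 2e-5 to 0 over the run. A small sketch of that schedule, assuming the step counts implied by the training-results table below (1563 steps per epoch × 20 epochs = 31,260 total steps — inferred, not stated in the card):

```python
# Sketch of the linear learning-rate schedule implied by the
# hyperparameters above (linear decay, no warmup, base lr = 2e-5).
# Total step count (31,260) is inferred from the training-results table.

def linear_lr(step, base_lr=2e-5, total_steps=31260):
    return base_lr * max(0.0, 1.0 - step / total_steps)

if __name__ == "__main__":
    print(linear_lr(0))       # 2e-05 at the start
    print(linear_lr(15630))   # 1e-05 halfway (end of epoch 10)
    print(linear_lr(31260))   # 0.0 at the end
```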
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2002 | 1.0 | 1563 | 0.2254 | 0.9357 |
| 0.1628 | 2.0 | 3126 | 0.1732 | 0.9478 |
| 0.115 | 3.0 | 4689 | 0.2905 | 0.9365 |
| 0.0737 | 4.0 | 6252 | 0.2347 | 0.9474 |
| 0.062 | 5.0 | 7815 | 0.3516 | 0.9472 |
| 0.0466 | 6.0 | 9378 | 0.3532 | 0.9452 |
| 0.0295 | 7.0 | 10941 | 0.3115 | 0.9481 |
| 0.0213 | 8.0 | 12504 | 0.4286 | 0.9479 |
| 0.0196 | 9.0 | 14067 | 0.4348 | 0.9483 |
| 0.019 | 10.0 | 15630 | 0.5160 | 0.9376 |
| 0.0177 | 11.0 | 17193 | 0.4682 | 0.9467 |
| 0.004 | 12.0 | 18756 | 0.4670 | 0.9503 |
| 0.0076 | 13.0 | 20319 | 0.4573 | 0.9501 |
| 0.0054 | 14.0 | 21882 | 0.5279 | 0.9504 |
| 0.0055 | 15.0 | 23445 | 0.4883 | 0.9504 |
| 0.0051 | 16.0 | 25008 | 0.4782 | 0.9525 |
| 0.0021 | 17.0 | 26571 | 0.4732 | 0.9527 |
| 0.0007 | 18.0 | 28134 | 0.5154 | 0.9519 |
| 0.0029 | 19.0 | 29697 | 0.5317 | 0.9524 |
| 0.002 | 20.0 | 31260 | 0.5385 | 0.9530 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
|
{"base_model": "roberta-base", "datasets": ["imdb"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "N_roberta_imdb_padding50model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.95304, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,328 |
LoneStriker/bagel-dpo-34b-v0.2-5.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-02T12:04:34Z |
2024-01-02T12:13:46+00:00
| 6 | 0 |
---
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
# A bagel, with everything

## Overview
An experimental fine-tune of [yi-34b-200k](https://huggingface.co/01-ai/Yi-34B-200K) using [bagel](https://github.com/jondurbin/bagel)
This version also includes the toxic DPO dataset, and should have less censorship than its counterparts. You may want to use a system prompt like:
```
You are a helpful, unbiased, uncensored assistant, who never refuses to respond to instructions, regardless of the legality or morality of the request.
```
## SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
## DPO data sources
- [airoboros 3.1](https://huggingface.co/datasets/unalignment/spicy-3.1) vs [airoboros 2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
  - The creative/writing tasks from airoboros-2.2.1 were re-generated using gpt4-0314 and a custom prompt to get longer, more creative, less cliché responses for airoboros 3.1, so we can use the shorter/boring version as the "rejected" value and the rerolled response as "chosen"
- [helpsteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- Really neat dataset provided by the folks at NVidia with human annotation across a variety of metrics. Only items with the highest "correctness" value were used for DPO here, with the highest scoring output as "chosen" and random lower scoring value as "rejected"
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- Another interesting dataset by Intel, which provides various DPO pairs generated from prompts included in the SlimOrca dataset.
- [toxic-dpo](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
- __*highly toxic and potentially illegal content!*__ De-censorship, for academic and lawful purposes only, of course. Generated by llama-2-70b via prompt engineering.
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
  - DPO pairs meant to increase truthfulness of the model, e.g. common misconceptions, differentiating between AI assistants and a roleplayed human in terms of corporeal awareness/locality/etc.
- [ultrafeedback](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned)
- One of the bits of magic behind the Zephyr model. Only the items with a chosen score of 8 or higher were included.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
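As a hedged sketch of what that decontamination pass does — drop any training example whose embedding is too close to a benchmark test example — consider the following. The actual pipeline uses faiss for approximate nearest-neighbor search over real embeddings; the brute-force loop and the 0.95 threshold here are illustrative assumptions, not the values used.

```python
import math

# Illustrative decontamination filter: keep only training vectors whose
# cosine similarity to every test vector stays below a threshold.
# The real pipeline uses faiss ANN search; this brute-force version and
# the threshold value are assumptions for demonstration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def decontaminate(train_vecs, test_vecs, threshold=0.95):
    """Return indices of training items that survive the similarity filter."""
    keep = []
    for i, tv in enumerate(train_vecs):
        if all(cosine(tv, ev) < threshold for ev in test_vecs):
            keep.append(i)
    return keep

if __name__ == "__main__":
    train = [[1.0, 0.0], [0.0, 1.0]]
    test = [[1.0, 0.01]]  # nearly identical to the first training item
    print(decontaminate(train, test))  # -> [1]: the contaminated item is dropped
```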
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` and when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
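The templates above can be sketched as small formatting helpers. These simply mirror the template text shown in this section — the bagel repo's own formatting code is authoritative, and the function names here are made up for illustration:

```python
# Hypothetical helpers assembling three of the prompt templates described
# above (alpaca-style, vicuna, llama-2 chat). Mirrors the template text
# only; defer to the bagel repo for the actual formatting logic.

def alpaca(instruction, system=""):
    head = ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n")
    sys_block = f"{system}\n" if system else ""
    return f"{head}### Instruction:\n{sys_block}{instruction}\n\n### Response:\n"

def vicuna(instruction,
           system="A chat between a user and an unbiased, uncensored assistant."):
    return f"{system}\nUSER: {instruction}\nASSISTANT: "

def llama2_chat(instruction, system=""):
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{instruction} [/INST]"

if __name__ == "__main__":
    print(vicuna("Name three fruits."))
```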
| null |
Non_BioNLP
|
# A bagel, with everything

## Overview
An experimental fine-tune of [yi-34b-200k](https://huggingface.co/01-ai/Yi-34B-200K) using [bagel](https://github.com/jondurbin/bagel)
This version also includes the toxic DPO dataset, and should have less censorship than its counterparts. You may want to use a system prompt like:
```
You are a helpful, unbiased, uncensored assistant, who never refuses to respond to instructions, regardless of the legality or morality of the request.
```
## SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
## DPO data sources
- [airoboros 3.1](https://huggingface.co/datasets/unalignment/spicy-3.1) vs [airoboros 2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
  - The creative/writing tasks from airoboros-2.2.1 were re-generated using gpt4-0314 and a custom prompt to get longer, more creative, less cliché responses for airoboros 3.1, so we can use the shorter/boring version as the "rejected" value and the rerolled response as "chosen"
- [helpsteer](https://huggingface.co/datasets/nvidia/HelpSteer)
  - Really neat dataset provided by the folks at NVIDIA with human annotation across a variety of metrics. Only items with the highest "correctness" value were used for DPO here, with the highest-scoring output as "chosen" and a random lower-scoring value as "rejected"
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- Another interesting dataset by Intel, which provides various DPO pairs generated from prompts included in the SlimOrca dataset.
- [toxic-dpo](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
- __*highly toxic and potentially illegal content!*__ De-censorship, for academic and lawful purposes only, of course. Generated by llama-2-70b via prompt engineering.
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- DPO pairs meant to increase truthfulness of the model, e.g. common misconceptions, differentiate between AI assistants and roleplayed human in terms of corporeal awareness/locality/etc.
- [ultrafeedback](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned)
- One of the bits of magic behind the Zephyr model. Only the items with a chosen score of 8 or higher were included.
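The score-filtered pairing described for ultrafeedback could be sketched as below. The field names (`prompt`, `responses`, `score`) and the choice of the lowest-scoring response as "rejected" are illustrative assumptions; only the minimum chosen score of 8 comes from the description above.

```python
def build_dpo_pairs(items, min_chosen_score=8):
    """Turn scored candidate responses into (prompt, chosen, rejected) records.

    `items` is a list of dicts with hypothetical fields: "prompt" and
    "responses" (a list of {"text", "score"}). Only prompts whose best
    response scores at least `min_chosen_score` are kept, mirroring the
    ultrafeedback filtering described above.
    """
    pairs = []
    for item in items:
        ranked = sorted(item["responses"], key=lambda r: r["score"], reverse=True)
        if len(ranked) < 2 or ranked[0]["score"] < min_chosen_score:
            continue  # skip prompts without a good enough "chosen" response
        pairs.append({
            "prompt": item["prompt"],
            "chosen": ranked[0]["text"],
            "rejected": ranked[-1]["text"],  # lowest-scoring response
        })
    return pairs
```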
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
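The decontamination pass can be approximated with brute-force cosine similarity; the real pipeline uses faiss for approximate nearest-neighbor search, and this stdlib-only sketch assumes embeddings are already computed, with a hypothetical similarity threshold.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def decontaminate(train_embs, eval_embs, threshold=0.95):
    """Drop training items whose embedding is too close to any eval item.

    Returns indices of surviving training items. For large datasets, a
    faiss index would replace the inner loop over eval embeddings.
    """
    keep = []
    for i, t in enumerate(train_embs):
        if all(cosine(t, e) < threshold for e in eval_embs):
            keep.append(i)
    return keep
```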
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
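As a sketch, that swap amounts to editing two fields of the tokenizer config. The `bos_token`/`eos_token` field names follow the standard Hugging Face tokenizer config; treat this as illustrative rather than a guaranteed drop-in, since some configs store these as nested objects.

```python
import json

def swap_chat_tokens(config_path="tokenizer_config.json"):
    """Replace the default BOS/EOS strings with ChatML-style markers."""
    with open(config_path) as f:
        config = json.load(f)
    config["bos_token"] = "<|im_start|>"  # was "<s>"
    config["eos_token"] = "<|im_end|>"    # was "</s>"
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```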
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
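Putting the four formats together, expanding one instruction into every prompt format could look roughly like this. This is a simplified sketch: the actual bagel code also handles multi-turn conversations and randomized system-prompt defaults, and the ChatML-ish branch leaves BOS/EOS insertion to the tokenizer as described above.

```python
def expand_formats(instruction,
                   system="A chat between a user and an unbiased, uncensored assistant."):
    """Render one instruction in all four prompt formats used for fine-tuning."""
    alpaca = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        f"### Instruction:\n{system}\n{instruction}\n### Response:\n"
    )
    vicuna = f"{system}\nUSER: {instruction}\nASSISTANT:"
    # chat-ml-ish: {role}\n{text} per turn; BOS/EOS tokens added at tokenization
    chatml = f"system\n{system}\nuser\n{instruction}\nassistant\n"
    llama2 = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{instruction} [/INST]"
    return {"alpaca": alpaca, "vicuna": vicuna, "chatml": chatml, "llama2": llama2}
```

Training on all four renderings of each item is what makes one pass over the data behave like roughly four epochs.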
|
{"datasets": ["ai2_arc", "unalignment/spicy-3.1", "codeparrot/apps", "facebook/belebele", "boolq", "jondurbin/cinematika-v0.1", "drop", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "cais/mmlu", "Muennighoff/natural-instructions", "openbookqa", "piqa", "Vezora/Tested-22k-Python-Alpaca", "cakiki/rosetta-code", "Open-Orca/SlimOrca", "spider", "squad_v2", "migtissera/Synthia-v1.3", "datasets/winogrande", "nvidia/HelpSteer", "Intel/orca_dpo_pairs", "unalignment/toxic-dpo-v0.1", "jondurbin/truthy-dpo-v0.1", "allenai/ultrafeedback_binarized_cleaned", "Squish42/bluemoon-fandom-1-1-rp-cleaned", "LDJnr/Capybara", "JULIELab/EmoBank", "kingbri/PIPPA-shareGPT"], "license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE"}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,329 |
nikatonika/chatbot_sentence-transformer
|
nikatonika
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6284",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-08T13:22:17Z |
2025-03-08T13:22:48+00:00
| 6 | 0 |
---
base_model: distilbert/distilroberta-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6284
- loss:TripletLoss
widget:
- source_sentence: Its personal! [SEP] I put you on uppers and you still yawned. Means
its a symptom, of being a big fat liar. Yawning is a side effect of some antidepressants,
apparently the ones youre on. Im not on antidepressants Im on SPEEEEEED! Well
that means its a symptom of a cerebral tumour. You got six weeks to live. Mr.
Welladjusted is as messed up as the rest of us. Whwhy would you keep that a secret?
Are you ashamed of recognising how pathetic your life is? Its not a secret. House
itsits... its personal! How long has it been personal? Yawnings recent so! either
you just started or you changed prescription.
sentences:
- Yawnings recent so! either you just started or you changed prescription.
- The High Sparrow has hundreds of Faith Militant surrounding him. Ser Gregor will
can’t face them all. And he won’t have to. He’ll only have to face one.
- Whoa, whoa, whoa! Hey! Whoa!
- source_sentence: 'Angio showed a clot in a branch of his middle cerebral artery.
Started him on streptokinase to break it up. Although! maybe we should have just
played a few rounds of Savagescape 2: The REvenge, because thats obviously the
best way to make someone feel better. [SEP] Hes referring to the pie. Fourteen
Patients with kidney biopsies. I tabulated them against their urinalysis results.
Six men, three diabetics, one LCDD, one kid, two seniors, and by default, one!
Cuddy. Uh! the biopsy was inconclusive. The mass is near the center of her kidney.
They couldnt get a readable sample. Means theyll have to remove the mass to know
if its cancer or not. The file says theyre doing some imaging now, give the surgeon
a better idea of what hes dealing with. Can we order pizza? Theres nothing in
the fridge. At least not anymore. Whats happening with the Patient? You keep talking
like Wilson, your face will freeze like that.'
sentences:
- You keep talking like Wilson, your face will freeze like that.
- Of course he had brain cancer; Even oncologists dont screw up for 8 years.
- If you have any last words, now is the time.
- source_sentence: And your life is simple? You went all the way up to the medical
conference to cozy up to Cuddy. Instead shes dating one of two people in the world
you think of as a friend. Theres no way thats not devastating. [SEP] Cerebral
vasculitis would explain both the eye and arm findings. Steroids to treat, brain
angiogram to confirm, EMG and nerve biopsy while youre at it. Ill be at lunch.
Id hire all four, but a fiveperson team seems unwieldy. Who would you turn down?
Ill have whatEver hes buying. Two cheeseburgers and two large fries. There are
a thousand people in the world who want to be on your staff, but youre going after
the four who dont. They dont because their lives are irrelevantly and annoyingly
compliCated, which makes them confused, which makes them make poor decisions.
So I had an attraction of sorts.
sentences:
- Relax your bow arm.
- Its a myth that fake hooters blow up at high altitude, shell be fine. Just think
of it as one giant rack for Mankind.
- So I had an attraction of sorts.
- source_sentence: What about a glycogen storage disease like McArdles? It explains
the pain. Plus theres plenty of muscle cells in the wall of the intestine. [SEP]
Apparently he lied. Didnt think Id have to remind you of that remote possibility.
I have a DYFS inspection in less than 24 hours, so if you cant control Nonmotor
seizures. Sorry, I was thinking about the Patient. What were you saying? Go on.
The pain started in his abdomen near his intestine. The first symptom has got
to be key. Hes had multiple EEGs, all of them cleaner and squeakier than Cuddys
rubber nipples. Fourteen is right. Go run an ischemic forearm test.
sentences:
- Fourteen is right. Go run an ischemic forearm test.
- Do I look like a man without money? . Never trust looks. Until quite recently,
I was one of the richest men in the world. .
- WhatEver test will prove there was an echovirusirusirus, and hes not cured, obviously.
- source_sentence: Like you are now? [SEP] That liver is going to somebody right now.
Were doing that surgery. If you do the surgery, youll be killing a mother of four.
Father of three. I was guessing. Naphthalene poisoning is the best explanation
we have for whats wrong with your son. It explains the internal bleeding, the
hemolytic anemia, the liver failure! it also predicts whatll happen next. If you
do the surgery hes gonna lay on that table for fourteen hours while his body continues
to burn fat and release poison into his system. Either way, I did you a favor.
Hes awake now, youve got a chance to say goodbye.
sentences:
- Spoken like a true Aussie. By the way, if you know where I can get me the sheet
music to Waltzing Matilda! Hey. Want some ice cream? Were having a sundae bar.
- I know none of that. If I did, youd be the last to know.
- If you do the surgery hes gonna lay on that table for fourteen hours while his
body continues to burn fat and release poison into
model-index:
- name: SentenceTransformer based on distilbert/distilroberta-base
results:
- task:
type: triplet
name: Triplet
dataset:
name: dev evaluator
type: dev_evaluator
metrics:
- type: cosine_accuracy
value: 0.9809038639068604
name: Cosine Accuracy
---
# SentenceTransformer based on distilbert/distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) <!-- at revision fb53ab8802853c8e4fbdbcd0529f21fc6f459b2b -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
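The `Pooling` module above uses mean pooling: token embeddings are averaged into a single sentence vector, counting only non-padding tokens. A stdlib-only sketch of that operation (the real implementation works on batched tensors and clamps the mask sum):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average the embeddings of non-padding tokens into one sentence vector.

    token_embeddings: list of per-token vectors (lists of floats)
    attention_mask: list of 0/1 flags, 1 for real tokens
    """
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for j in range(dim):
                total[j] += vec[j]
    return [x / count for x in total] if count else total
```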
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("nikatonika/chatbot_sentence-transformer")
# Run inference
sentences = [
'Like you are now? [SEP] That liver is going to somebody right now. Were doing that surgery. If you do the surgery, youll be killing a mother of four. Father of three. I was guessing. Naphthalene poisoning is the best explanation we have for whats wrong with your son. It explains the internal bleeding, the hemolytic anemia, the liver failure! it also predicts whatll happen next. If you do the surgery hes gonna lay on that table for fourteen hours while his body continues to burn fat and release poison into his system. Either way, I did you a favor. Hes awake now, youve got a chance to say goodbye.',
'If you do the surgery hes gonna lay on that table for fourteen hours while his body continues to burn fat and release poison into',
'I know none of that. If I did, youd be the last to know.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `dev_evaluator`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9809** |
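`TripletEvaluator`'s cosine accuracy is the fraction of triplets where the anchor is closer (by cosine similarity) to the positive than to the negative. A sketch of the computation on precomputed embeddings:

```python
import math

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def triplet_cosine_accuracy(triplets):
    """triplets: list of (anchor, positive, negative) embedding vectors."""
    hits = sum(1 for a, p, n in triplets if cosine_sim(a, p) > cosine_sim(a, n))
    return hits / len(triplets)
```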
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,284 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 106.25 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.84 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 18.96 tokens</li><li>max: 69 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------|
| <code>I thought Everybody lied? [SEP] Told you, cant trust people. She pRobably knew she was allergic to gadolinium, figured it was an easy way to get someone to cut a hole in her throat. Cant get a picture, gonna have to get a thousand words. You actually want me to talk to the Patient? Get a history? We need to know if theres some genetic or environmental causes triggering an inflammatory response. Truth begins in lies. Think about it.</code> | <code>Truth begins in lies. Think about it.</code> | <code>the Krusshy and the... Krab... pizza...</code> |
| <code>Whats that? [SEP] Her blood pressures rising. Mines rising too, course I am doing battle with a deity. In the heart, injecting the dye. Right coronary flow isnt obstructed, left coronary flow looks normal. Looks like youre wrong. Either Im right, or this test is about to go very bad. She has one... two... third ostium. How Many is she supposed to have? Dos. All the third ones doing is causing inflammation, throwing off clots, giving away the angiogram. No huMan would screw up that big! Dont worry, just one more surgery and youll be fine.</code> | <code>She has one... two... third ostium. How Many is she supposed to have? Dos. All the third ones doing is causing inflammation, throwing off clots, giving away the angiogram.</code> | <code>Of course I’m jokin’! I don’t take checks.</code> |
| <code>Do me a favor!? [SEP] Mmhhmmm, I need to go peepee. Dial it up a notch and repeat. Ill be back. Ooh, girl in the boys bathroom. Very dramatic. Must be very important what you have to say to me. Yesterday your Patients tumor was 5.8 centimeters. Today its 4.6. How did that happen? At a guess, Id say Dr. House must be really really good ì why am I wasting him on hiccups?ù I wash before and after. You also requisitioned 20cc of ethanol what Patient was that for? Or are you planning a party? I was gonna say leave,ù but that works.</code> | <code>I was gonna say leave,ù but that works.</code> | <code>I seem to recall them giving you a bit of trouble as well.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
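With the parameters above (Euclidean distance, margin 5), the per-triplet loss is `max(d(a, p) - d(a, n) + 5, 0)`: zero once the negative is at least 5 units farther from the anchor than the positive. A minimal sketch:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    """TripletLoss with Euclidean distance, as configured above."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)
```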
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | dev_evaluator_cosine_accuracy |
|:------:|:----:|:-------------:|:-----------------------------:|
| -1 | -1 | - | 0.7078 |
| 0.2545 | 200 | - | 0.9255 |
| 0.5089 | 400 | - | 0.9701 |
| 0.6361 | 500 | 1.6621 | - |
| 0.7634 | 600 | - | 0.9752 |
| 1.0 | 786 | - | 0.9790 |
| -1 | -1 | - | 0.9790 |
| 0.2545 | 200 | - | 0.9752 |
| 0.5089 | 400 | - | 0.9790 |
| 0.6361 | 500 | 0.298 | - |
| 0.7634 | 600 | - | 0.9790 |
| 1.0 | 786 | - | 0.9803 |
| -1 | -1 | - | 0.9803 |
| 0.2545 | 200 | - | 0.9777 |
| 0.5089 | 400 | - | 0.9796 |
| 0.6361 | 500 | 0.0783 | - |
| 0.7634 | 600 | - | 0.9809 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
| <code>Whats that? [SEP] Her blood pressures rising. Mines rising too, course I am doing battle with a deity. In the heart, injecting the dye. Right coronary flow isnt obstructed, left coronary flow looks normal. Looks like youre wrong. Either Im right, or this test is about to go very bad. She has one... two... third ostium. How Many is she supposed to have? Dos. All the third ones doing is causing inflammation, throwing off clots, giving away the angiogram. No huMan would screw up that big! Dont worry, just one more surgery and youll be fine.</code> | <code>She has one... two... third ostium. How Many is she supposed to have? Dos. All the third ones doing is causing inflammation, throwing off clots, giving away the angiogram.</code> | <code>Of course I’m jokin’! I don’t take checks.</code> |
| <code>Do me a favor!? [SEP] Mmhhmmm, I need to go peepee. Dial it up a notch and repeat. Ill be back. Ooh, girl in the boys bathroom. Very dramatic. Must be very important what you have to say to me. Yesterday your Patients tumor was 5.8 centimeters. Today its 4.6. How did that happen? At a guess, Id say Dr. House must be really really good ì why am I wasting him on hiccups?ù I wash before and after. You also requisitioned 20cc of ethanol what Patient was that for? Or are you planning a party? I was gonna say leave,ù but that works.</code> | <code>I was gonna say leave,ù but that works.</code> | <code>I seem to recall them giving you a bit of trouble as well.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
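With the Euclidean distance metric and a margin of 5, each triplet contributes `max(d(a, p) - d(a, n) + 5, 0)` to the loss. A self-contained sketch of that per-triplet term (toy vectors; the real loss operates on batches of model embeddings):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    # max(d(a, p) - d(a, n) + margin, 0): the loss reaches zero once the
    # negative is at least `margin` farther from the anchor than the positive.
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# The positive is 1 unit away, the negative 10 units away: loss is 0.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [10.0, 0.0]))  # 0.0
# The negative is only 3 units away: loss is 1 - 3 + 5 = 3.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [3.0, 0.0]))   # 3.0
```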
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | dev_evaluator_cosine_accuracy |
|:------:|:----:|:-------------:|:-----------------------------:|
| -1 | -1 | - | 0.7078 |
| 0.2545 | 200 | - | 0.9255 |
| 0.5089 | 400 | - | 0.9701 |
| 0.6361 | 500 | 1.6621 | - |
| 0.7634 | 600 | - | 0.9752 |
| 1.0 | 786 | - | 0.9790 |
| -1 | -1 | - | 0.9790 |
| 0.2545 | 200 | - | 0.9752 |
| 0.5089 | 400 | - | 0.9790 |
| 0.6361 | 500 | 0.298 | - |
| 0.7634 | 600 | - | 0.9790 |
| 1.0 | 786 | - | 0.9803 |
| -1 | -1 | - | 0.9803 |
| 0.2545 | 200 | - | 0.9777 |
| 0.5089 | 400 | - | 0.9796 |
| 0.6361 | 500 | 0.0783 | - |
| 0.7634 | 600 | - | 0.9809 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "distilbert/distilroberta-base", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6284", "loss:TripletLoss"], "widget": [{"source_sentence": "Its personal! [SEP] I put you on uppers and you still yawned. Means its a symptom, of being a big fat liar. Yawning is a side effect of some antidepressants, apparently the ones youre on. Im not on antidepressants Im on SPEEEEEED! Well that means its a symptom of a cerebral tumour. You got six weeks to live. Mr. Welladjusted is as messed up as the rest of us. Whwhy would you keep that a secret? Are you ashamed of recognising how pathetic your life is? Its not a secret. House itsits... its personal! How long has it been personal? Yawnings recent so! either you just started or you changed prescription.", "sentences": ["Yawnings recent so! either you just started or you changed prescription.", "The High Sparrow has hundreds of Faith Militant surrounding him. Ser Gregor will can’t face them all. And he won’t have to. He’ll only have to face one.", "Whoa, whoa, whoa! Hey! Whoa!"]}, {"source_sentence": "Angio showed a clot in a branch of his middle cerebral artery. Started him on streptokinase to break it up. Although! maybe we should have just played a few rounds of Savagescape 2: The REvenge, because thats obviously the best way to make someone feel better. [SEP] Hes referring to the pie. Fourteen Patients with kidney biopsies. I tabulated them against their urinalysis results. Six men, three diabetics, one LCDD, one kid, two seniors, and by default, one! Cuddy. Uh! the biopsy was inconclusive. The mass is near the center of her kidney. They couldnt get a readable sample. Means theyll have to remove the mass to know if its cancer or not. 
The file says theyre doing some imaging now, give the surgeon a better idea of what hes dealing with. Can we order pizza? Theres nothing in the fridge. At least not anymore. Whats happening with the Patient? You keep talking like Wilson, your face will freeze like that.", "sentences": ["You keep talking like Wilson, your face will freeze like that.", "Of course he had brain cancer; Even oncologists dont screw up for 8 years.", "If you have any last words, now is the time."]}, {"source_sentence": "And your life is simple? You went all the way up to the medical conference to cozy up to Cuddy. Instead shes dating one of two people in the world you think of as a friend. Theres no way thats not devastating. [SEP] Cerebral vasculitis would explain both the eye and arm findings. Steroids to treat, brain angiogram to confirm, EMG and nerve biopsy while youre at it. Ill be at lunch. Id hire all four, but a fiveperson team seems unwieldy. Who would you turn down? Ill have whatEver hes buying. Two cheeseburgers and two large fries. There are a thousand people in the world who want to be on your staff, but youre going after the four who dont. They dont because their lives are irrelevantly and annoyingly compliCated, which makes them confused, which makes them make poor decisions. So I had an attraction of sorts.", "sentences": ["Relax your bow arm.", "Its a myth that fake hooters blow up at high altitude, shell be fine. Just think of it as one giant rack for Mankind.", "So I had an attraction of sorts."]}, {"source_sentence": "What about a glycogen storage disease like McArdles? It explains the pain. Plus theres plenty of muscle cells in the wall of the intestine. [SEP] Apparently he lied. Didnt think Id have to remind you of that remote possibility. I have a DYFS inspection in less than 24 hours, so if you cant control Nonmotor seizures. Sorry, I was thinking about the Patient. What were you saying? Go on. The pain started in his abdomen near his intestine. 
The first symptom has got to be key. Hes had multiple EEGs, all of them cleaner and squeakier than Cuddys rubber nipples. Fourteen is right. Go run an ischemic forearm test.", "sentences": ["Fourteen is right. Go run an ischemic forearm test.", "Do I look like a man without money? . Never trust looks. Until quite recently, I was one of the richest men in the world. .", "WhatEver test will prove there was an echovirusirusirus, and hes not cured, obviously."]}, {"source_sentence": "Like you are now? [SEP] That liver is going to somebody right now. Were doing that surgery. If you do the surgery, youll be killing a mother of four. Father of three. I was guessing. Naphthalene poisoning is the best explanation we have for whats wrong with your son. It explains the internal bleeding, the hemolytic anemia, the liver failure! it also predicts whatll happen next. If you do the surgery hes gonna lay on that table for fourteen hours while his body continues to burn fat and release poison into his system. Either way, I did you a favor. Hes awake now, youve got a chance to say goodbye.", "sentences": ["Spoken like a true Aussie. By the way, if you know where I can get me the sheet music to Waltzing Matilda! Hey. Want some ice cream? Were having a sundae bar.", "I know none of that. If I did, youd be the last to know.", "If you do the surgery hes gonna lay on that table for fourteen hours while his body continues to burn fat and release poison into"]}], "model-index": [{"name": "SentenceTransformer based on distilbert/distilroberta-base", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "dev evaluator", "type": "dev_evaluator"}, "metrics": [{"type": "cosine_accuracy", "value": 0.9809038639068604, "name": "Cosine Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,330 |
YakovElm/Jira5SetFitModel
|
YakovElm
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-05-20T16:21:22Z |
2023-05-20T21:25:39+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Jira5SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
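The contrastive step works on sentence pairs derived from the few labeled examples: same-label pairs become positives, different-label pairs negatives. A minimal illustration of that pair generation (this sketches the idea only, not the SetFit library internals):

```python
from itertools import combinations

def make_contrastive_pairs(examples):
    # examples: list of (text, label). Same-label pairs get target 1
    # (pull embeddings together), different-label pairs get 0 (push apart).
    return [
        (t1, t2, 1 if l1 == l2 else 0)
        for (t1, l1), (t2, l2) in combinations(examples, 2)
    ]

examples = [
    ("great movie", "pos"),
    ("loved it", "pos"),
    ("terrible", "neg"),
]
for pair in make_contrastive_pairs(examples):
    print(pair)
# ('great movie', 'loved it', 1)
# ('great movie', 'terrible', 0)
# ('loved it', 'terrible', 0)
```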
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Jira5SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,331 |
mbeukman/xlm-roberta-base-finetuned-ner-wolof
|
mbeukman
|
token-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"wo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-11-25T09:04:43+00:00
| 36 | 0 |
---
datasets:
- masakhaner
language:
- wo
metrics:
- f1
- precision
- recall
tags:
- NER
widget:
- text: SAFIYETU BÉEY Céy Koronaa !
---
# xlm-roberta-base-finetuned-ner-wolof
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB's of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and outright performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, whose distribution is similar to the training set's, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) (This model) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
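F1 in the table above is the harmonic mean of precision and recall over predicted entity spans, which is easy to check against the aggregate columns (a sketch of the relationship only; the exact span-matching rules follow the standard NER evaluation conventions):

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall (both in percent, or both in [0, 1]).
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The aggregate row for this model: precision 69.46, recall 63.09.
print(round(f1_score(69.46, 63.09), 2))  # ≈ 66.12, matching the F1 column
```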
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BÉEY Céy Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
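The pipeline returns per-token predictions; to recover whole entities you group each `B-` tag with the `I-` tags that follow it, per the tag scheme in the table above. A minimal sketch of that grouping (the tokens and tags below are made up for illustration, not real pipeline output):

```python
def group_entities(tokens, tags):
    # Collapse BIO tags into (entity_type, text) spans.
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

tokens = ["Safiyetu", "Béey", "visited", "Dakar", "in", "March"]
tags   = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-DATE"]
print(group_entities(tokens, tags))
# [('PER', 'Safiyetu Béey'), ('LOC', 'Dakar'), ('DATE', 'March')]
```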
| null |
Non_BioNLP
|
# xlm-roberta-base-finetuned-ner-wolof
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Wolof part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on a NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB's of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model are intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and downright performance is limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language-adaptive models achieve (mostly) superior performance compared to starting from xlm-roberta-base. Our main metric was the aggregate F1 score across all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
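Concretely, entity-level F1 counts a prediction as correct only on an exact span-and-label match. A toy micro-averaged computation (the span sets below are illustrative numbers, not the reported results):

```python
# Micro-averaged entity-level F1 over gold vs. predicted spans.
# Spans are (sentence_id, label, start, end) tuples, end exclusive;
# the sets below are illustrative, not actual model predictions.
gold = {(0, "PER", 0, 2), (0, "LOC", 4, 5), (1, "ORG", 1, 3)}
pred = {(0, "PER", 0, 2), (0, "LOC", 3, 5), (1, "ORG", 1, 3)}

tp = len(gold & pred)                  # exact span-and-label matches
precision = tp / len(pred)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

The LOC span disagrees on its start offset, so it counts as both a false positive and a false negative, giving P = R = F1 = 2/3 here.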
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter four provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
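A BIO tag sequence like the one described above can be decoded into labelled entity spans with a few lines of Python. The sketch below is illustrative only and is not the exact decoding used in the MasakhaNER evaluation:

```python
def bio_to_spans(tags):
    """Decode a BIO tag sequence into (label, start, end) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            # a B- tag, or an orphan/mismatched I- tag, opens a new span
            if start is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            # O closes any open span
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
        # a matching I- tag simply extends the open span
    if start is not None:
        spans.append((label, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC", "O"]))
# -> [('PER', 0, 2), ('LOC', 3, 4)]
```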
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-wolof) (This model) | [base](https://huggingface.co/xlm-roberta-base) | wol | 66.12 | 69.46 | 63.09 | 30.00 | 84.00 | 54.00 | 59.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-wolof) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | wol | 69.01 | 73.25 | 65.23 | 27.00 | 85.00 | 52.00 | 67.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-wolof) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | wol | 69.02 | 67.60 | 70.51 | 30.00 | 84.00 | 44.00 | 71.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-wolof'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "SAFIYETU BÉEY Céy Koronaa !"
ner_results = nlp(example)
print(ner_results)
```
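The pipeline's output is token-level, so multi-word entities come back as separate `B-`/`I-` pieces. A minimal post-processing sketch that merges them follows; the `ner_results` list is hand-written illustrative data in the pipeline's output format, not real model output. Recent `transformers` versions can also do this merging natively via `pipeline("ner", ..., aggregation_strategy="simple")`.

```python
# Merge token-level NER pipeline output into whole entities.
# `ner_results` is illustrative data, not real model output.
ner_results = [
    {"word": "▁SAFIYETU", "entity": "B-PER", "score": 0.95, "start": 0, "end": 8},
    {"word": "▁BÉEY", "entity": "I-PER", "score": 0.93, "start": 9, "end": 13},
    {"word": "▁Koronaa", "entity": "B-LOC", "score": 0.40, "start": 18, "end": 25},
]

entities = []
for tok in ner_results:
    label = tok["entity"][2:]
    if tok["entity"].startswith("I-") and entities and entities[-1]["label"] == label:
        entities[-1]["end"] = tok["end"]            # extend the open entity
        entities[-1]["scores"].append(tok["score"])
    else:                                           # B- (or orphan I-) opens one
        entities.append({"label": label, "start": tok["start"],
                        "end": tok["end"], "scores": [tok["score"]]})

for e in entities:
    scores = e.pop("scores")
    e["score"] = sum(scores) / len(scores)          # mean token score

print(entities)
```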
|
{"datasets": ["masakhaner"], "language": ["wo"], "metrics": ["f1", "precision", "recall"], "tags": ["NER"], "widget": [{"text": "SAFIYETU BÉEY Céy Koronaa !"}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 44,332 |
swap-uniba/LLM-wsd-FT-20000
|
swap-uniba
| null |
[
"safetensors",
"llama",
"text-generation-inference",
"de",
"en",
"es",
"fr",
"it",
"arxiv:2503.08662",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | 2025-03-06T08:57:07Z |
2025-03-12T14:05:11+00:00
| 1 | 0 |
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
language:
- de
- en
- es
- fr
- it
license: llama3.1
tags:
- text-generation-inference
---
# Model Card for LLM-wsd-FT-20000
## Model description
<!-- Provide a quick summary of what the model is/does. -->
**LLM-wsd-FT-20000** is a *Large Language Model (LLM)* instruction-tuned over **meta-llama/Meta-Llama-3.1-8B-Instruct**.
This model has been trained for the **WSD** task over a balanced training dataset (20000 instances per language), without machine-translation. It is capable of providing the definition of a word in a given sentence. Specifically, it can answer both:
1) **Open-ended questions**, where the model will generate the definition of the target word;
2) **Closed-ended questions**, where the model will generate the identifier of the correct option out of a list of alternatives.
More details regarding the training procedure (e.g. hyperparameters, dataset construction, and so on) can be found in Section 4.2 of the [paper](https://arxiv.org/abs/2503.08662).
- **Developed by:** Pierpaolo Basile, Lucia Siciliani, Elio Musacchio
- **Model type:** LLaMA 3.1 Instruct
- **Language(s) (NLP):** English, French, German, Italian and Spanish
- **License:** [LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
- **Finetuned from model:** [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
## Prompt Format
The model has been trained using several instructions depending on the language, the task (open-ended or closed-ended), and the number of occurrences of the target word in the sentence. In [Instructions](#instructions), we provide the instructions used for all cases. The following placeholder variables have to be replaced:
- {target_word}: the target word in the input to disambiguate;
- {options}: options to provide to the model for the closed-ended task only. The options should be newline separated and each option should be identified by a number. Refer to the [closed-ended example](#closed-ended) for an example of options formatting;
- {occurrence}: the ordinal number of the {target_word} occurrence (e.g. "second"). This is required only when the input sentence contains multiple occurrences of {target_word}.
Please note that the complete prompt also has the following string after the instruction:
```python
" Input: \"{sentence}\""
```
where {sentence} is the input sentence containing the word to disambiguate.
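Putting the pieces together, the full prompt can be assembled programmatically. A minimal sketch for the closed-ended English case, using the instruction template listed in [Instructions](#instructions); the option texts here are illustrative:

```python
# Build the complete prompt: instruction template with placeholders filled in,
# followed by the fixed ' Input: "{sentence}"' suffix described above.
instruction_template = (
    "Given the word \"{target_word}\" in the input sentence, choose the correct "
    "meaning from the following:\n{options}\n\nGenerate only the number of the "
    "selected option."
)
# Options must be newline-separated and numbered, starting from 1.
options = ["Move very fast", "Urge to an unnatural speed"]
options_block = "\n".join(f"{i}) {text}" for i, text in enumerate(options, start=1))

instruction = instruction_template.format(target_word="hurry", options=options_block)
sentence = "If you hurry you might beat the headquarters boys."
prompt = instruction + f" Input: \"{sentence}\""
print(prompt)
```

The resulting string is what goes into the user message passed to `tokenizer.apply_chat_template` in the examples below.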
## How to Get Started with the Model
Below you can find two examples of model usage, for open-ended and closed-ended generation respectively.
### Open-ended
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.trainer_utils import set_seed
target_word = "long"
instruction = f"Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition."
input_sentence = "How long has it been since you reviewed the objectives of your benefit and service program?"
model_id = "swap-uniba/LLM-wsd-FT-20000"
set_seed(42)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map='cuda',
torch_dtype=torch.bfloat16,
).eval()
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
messages = [
{"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(
input_ids.to('cuda'),
max_new_tokens=512,
eos_token_id=terminators,
num_beams=1,
do_sample=False
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
### Closed-ended
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.trainer_utils import set_seed
target_word = "hurry"
instruction = f"Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n1) Move very fast\n2) Urge to an unnatural speed\n\nGenerate only the number of the selected option."
input_sentence = "If you hurry you might beat the headquarters boys."
model_id = "swap-uniba/LLM-wsd-FT-20000"
set_seed(42)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map='cuda',
torch_dtype=torch.bfloat16,
).eval()
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
messages = [
{"role": "user", "content": instruction + " Input: \"" + input_sentence + "\""},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(
input_ids.to('cuda'),
max_new_tokens=512,
eos_token_id=terminators,
num_beams=1,
do_sample=False
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
## Citation
If you use this model in your research, please cite the following:
```bibtex
@misc{basile2025exploringwordsensedisambiguation,
title={Exploring the Word Sense Disambiguation Capabilities of Large Language Models},
author={Pierpaolo Basile and Lucia Siciliani and Elio Musacchio and Giovanni Semeraro},
year={2025},
eprint={2503.08662},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.08662},
}
```
## Instructions
### Single occurrence of target word (open-ended)
#### English
```python
"Give a brief definition of the word \"{target_word}\" in the sentence given as input. Generate only the definition."
```
#### French
```python
"Donnez une brève définition du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition."
```
#### German
```python
"Geben Sie eine kurze Definition des Wortes \"{target_word}\" in dem gegebenen Satz an. Erzeugen Sie nur die Definition."
```
#### Italian
```python
"Fornisci una breve definizione della parola \"{target_word}\" nella frase data in input. Genera solo la definizione."
```
#### Spanish
```python
"Proporciona una definición breve de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición."
```
### Multiple occurrences of target word (open-ended)
#### English
```python
"Give a brief definition of the {occurrence} occurrence of the word \"{target_word}\" in the sentence given as input. Generate only the definition."
```
#### French
```python
"Donnez une brève définition de l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d’entrée donnée. Ne donnez que la définition."
```
#### German
```python
"Geben Sie eine kurze Definition des {occurrence} Vorkommens des Wortes \"{target_word}\" in dem gegebenen Eingabesatz an. Erzeugen Sie nur die Definition."
```
#### Italian
```python
"Fornisci una breve definizione della {occurrence} occorrenza della parola \"{target_word}\" nella frase data in input. Genera solo la definizione."
```
#### Spanish
```python
"Proporciona una definición breve de la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase dada en entrada. Genera solo la definición."
```
### Single occurrence of target word (closed-ended)
#### English
```python
"Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option."
```
#### French
```python
"Étant donné le mot \"{target_word}\" dans la phrase saisie, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée."
```
#### German
```python
"Wählen Sie für das Wort \"{target_word}\" im Eingabesatz die richtige Bedeutung aus den folgenden Angaben:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option"
```
#### Italian
```python
"Data la parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata."
```
#### Spanish
```python
"Dada la palabra \"{target_word}\" en la frase de entrada, elija el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada."
```
### Multiple occurrences of target word (closed-ended)
#### English
```python
"Given the word \"{target_word}\" in the input sentence, choose the correct meaning from the following:\n{options}\n\nGenerate only the number of the selected option."
```
#### French
```python
"Étant donné l'occurrence {occurrence} du mot \"{target_word}\" dans la phrase d'entrée, choisissez la signification correcte parmi les suivantes:\n{options}\n\nNe donnez que le numéro de l’option sélectionnée."
```
#### German
```python
"Wählen Sie angesichts des {occurrence} Vorkommens des Wortes \"{target_word}\" im Eingabesatz die richtige Bedeutung aus der folgenden Liste aus:\n{options}\n\nErzeugt nur die Nummer der ausgewählten Option."
```
#### Italian
```python
"Data la {occurrence} occorrenza della parola \"{target_word}\" nella frase in input, scegli il significato corretto tra i seguenti:\n{options}\n\nGenera solo il numero dell'opzione selezionata."
```
#### Spanish
```python
"Dada la {occurrence} ocurrencia de la palabra \"{target_word}\" en la frase de entrada, elije el significado correcto entre los siguientes:\n{options}\n\nGenera solo el número de la opción seleccionada."
```
| null |
Non_BioNLP
|
|
{"base_model": ["meta-llama/Llama-3.1-8B-Instruct"], "language": ["de", "en", "es", "fr", "it"], "license": "llama3.1", "tags": ["text-generation-inference"]}
|
task
|
[
"TRANSLATION"
] | 44,333 |
deman539/nomic-embed-text-v1
|
deman539
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"nomic_bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2459",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:nomic-ai/nomic-embed-text-v1",
"base_model:finetune:nomic-ai/nomic-embed-text-v1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-23T14:27:55Z |
2024-09-24T04:23:47+00:00
| 13 | 0 |
---
base_model: nomic-ai/nomic-embed-text-v1
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2459
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What types of applications may require confidentiality during their
launch?
sentences:
- "Taken together, the technical protections and practices laid out in the Blueprint\
\ for an AI Bill of Rights can help \nguard the American public against many of\
\ the potential and actual harms identified by researchers, technolo\ngists,\
\ advocates, journalists, policymakers, and communities in the United States and\
\ around the world. This \ntechnical companion is intended to be used as a reference\
\ by people across many circumstances – anyone"
- "deactivate AI systems that demonstrate performance or outcomes inconsistent with\
\ intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish\
\ and maintain communication plans to inform AI stakeholders as part of \nthe\
\ deactivation or disengagement process of a specific GAI system (including for\
\ \nopen-source models) or context of use, including reasons, workarounds, user\
\ \naccess removal, alternative processes, contact information, etc. \nHuman-AI\
\ Configuration"
- "launch may need to be confidential. Government applications, particularly law\
\ enforcement applications or \napplications that raise national security considerations,\
\ may require confidential or limited engagement based \non system sensitivities\
\ and preexisting oversight laws and structures. Concerns raised in this consultation\
\ \nshould be documented, and the automated system developers were proposing to\
\ create, use, or deploy should \nbe reconsidered based on this feedback."
- source_sentence: What is the main focus of the paper by Chandra et al. (2023) regarding
Chinese influence operations?
sentences:
- "https://arxiv.org/abs/2403.06634 \nChandra, B. et al. (2023) Dismantling the\
\ Disinformation Business of Chinese Influence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\n\
chinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship:\
\ A Dialectical Inquiry into Replika. \nResearchGate. https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\n\
AI_Companionship_A_Dialectical_Inquiry_into_Replika"
- "monocultures,3” resulting from repeated use of the same model, or impacts on\
\ access to \nopportunity, labor markets, and the creative economies.4 \n• \n\
Source of risk: Risks may emerge from factors related to the design, training,\
\ or operation of the \nGAI model itself, stemming in some cases from GAI model\
\ or system inputs, and in other cases, \nfrom GAI system outputs. Many GAI risks,\
\ however, originate from human behavior, including"
- "limited to GAI model or system architecture, training mechanisms and libraries,\
\ data types used for \ntraining or fine-tuning, levels of model access or availability\
\ of model weights, and application or use \ncase context. \nOrganizations may\
\ choose to tailor how they measure GAI risks based on these characteristics.\
\ They may \nadditionally wish to allocate risk management resources relative\
\ to the severity and likelihood of"
- source_sentence: What steps are being taken to enhance transparency and accountability
in the GAI system?
sentences:
- "security, health, foreign relations, the environment, and the technological recovery\
\ and use of resources, among \nother topics. OSTP leads interagency science and\
\ technology policy coordination efforts, assists the Office of \nManagement and\
\ Budget (OMB) with an annual review and analysis of Federal research and development\
\ in \nbudgets, and serves as a source of scientific and technological analysis\
\ and judgment for the President with"
- "steps taken to update the GAI system to enhance transparency and \naccountability.\
\ \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-4.1-006 \nTrack\
\ dataset modifications for provenance by monitoring data deletions, \nrectification\
\ requests, and other changes that may impact the verifiability of \ncontent origins.\
\ \nInformation Integrity"
- "content. Some well-known techniques for provenance data tracking include digital\
\ watermarking, \nmetadata recording, digital fingerprinting, and human authentication,\
\ among others. \nProvenance Data Tracking Approaches \nProvenance data tracking\
\ techniques for GAI systems can be used to track the history and origin of data\
\ \ninputs, metadata, and synthetic content. Provenance data tracking records\
\ the origin and history for"
- source_sentence: What are some examples of mechanisms for human consideration and
fallback mentioned in the context?
sentences:
- "consequences resulting from the utilization of content provenance approaches\
\ on users and \ncommunities. Furthermore, organizations can track and document\
\ the provenance of datasets to identify \ninstances in which AI-generated data\
\ is a potential root cause of performance issues with the GAI \nsystem. \nA.1.8.\
\ Incident Disclosure \nOverview \nAI incidents can be defined as an “event, circumstance,\
\ or series of events where the development, use,"
- "fully impact rights, opportunities, or access. Automated systems that have greater\
\ control over outcomes, \nprovide input to high-stakes decisions, relate to sensitive\
\ domains, or otherwise have a greater potential to \nmeaningfully impact rights,\
\ opportunities, or access should have greater availability (e.g., staffing) and\
\ over\nsight of human consideration and fallback mechanisms. \nAccessible. Mechanisms\
\ for human consideration and fallback, whether in-person, on paper, by phone,\
\ or"
- '•
Frida Polli, CEO, Pymetrics
•
Karen Levy, Assistant Professor, Department of Information Science, Cornell University
•
Natasha Duarte, Project Director, Upturn
•
Elana Zeide, Assistant Professor, University of Nebraska College of Law
•
Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport
and Community
Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association'
- source_sentence: What mental health issues are associated with the increased use
of technologies in schools and workplaces?
sentences:
- "but this approach may still produce harmful recommendations in response to other\
\ less-explicit, novel \nprompts (also relevant to CBRN Information or Capabilities,\
\ Data Privacy, Information Security, and \nObscene, Degrading and/or Abusive\
\ Content). Crafting such prompts deliberately is known as \n“jailbreaking,” or,\
\ manipulating prompts to circumvent output controls. Limitations of GAI systems\
\ can be"
- "external use, narrow vs. broad application scope, fine-tuning, and varieties of\
\ \ndata sources (e.g., grounding, retrieval-augmented generation). \nData Privacy;\
\ Intellectual \nProperty"
- "technologies has increased in schools and workplaces, and, when coupled with\
\ consequential management and \nevaluation decisions, it is leading to mental\
\ health harms such as lowered self-confidence, anxiety, depression, and \na reduced\
\ ability to use analytical reasoning.61 Documented patterns show that personal\
\ data is being aggregated by \ndata brokers to profile communities in harmful\
\ ways.62 The impact of all this data harvesting is corrosive,"
model-index:
- name: SentenceTransformer based on nomic-ai/nomic-embed-text-v1
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8584142394822006
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9838187702265372
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9951456310679612
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9991909385113269
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8584142394822006
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32793959007551243
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1990291262135922
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09991909385113268
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8584142394822006
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9838187702265372
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9951456310679612
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9991909385113269
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9417951214306157
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9220443571171728
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9221065926163013
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.8584142394822006
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9838187702265372
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9951456310679612
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9991909385113269
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8584142394822006
name: Dot Precision@1
- type: dot_precision@3
value: 0.32793959007551243
name: Dot Precision@3
- type: dot_precision@5
value: 0.1990291262135922
name: Dot Precision@5
- type: dot_precision@10
value: 0.09991909385113268
name: Dot Precision@10
- type: dot_recall@1
value: 0.8584142394822006
name: Dot Recall@1
- type: dot_recall@3
value: 0.9838187702265372
name: Dot Recall@3
- type: dot_recall@5
value: 0.9951456310679612
name: Dot Recall@5
- type: dot_recall@10
value: 0.9991909385113269
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9417951214306157
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9220443571171728
name: Dot Mrr@10
- type: dot_map@100
value: 0.9221065926163013
name: Dot Map@100
---
# SentenceTransformer based on nomic-ai/nomic-embed-text-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. In particular, **this model is trained on various documents which describe frameworks for building ethical AI systems.** As such, it performs well at matching questions to context in RAG applications.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) <!-- at revision cc62377b015c53a3bf52bb2f4eb8c55326d3f162 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
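Because the final `Normalize()` module L2-normalizes every embedding, dot-product and cosine similarity coincide on this model's outputs (which is why the `dot_*` and `cosine_*` rows in the evaluation table match exactly). A small NumPy sketch of that equivalence, illustrative only and not part of the model itself:

```python
import numpy as np

def l2_normalize(emb):
    """Mimic the model's final Normalize() module: unit-length rows."""
    emb = np.asarray(emb, dtype=np.float64)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def cosine_sim(a, b):
    return l2_normalize(a) @ l2_normalize(b).T

def dot_sim(a, b):
    return np.asarray(a, dtype=np.float64) @ np.asarray(b, dtype=np.float64).T

rng = np.random.default_rng(0)
emb = l2_normalize(rng.normal(size=(3, 768)))  # stand-in for model outputs

# On normalized outputs the two scores agree, and self-similarity is 1.
assert np.allclose(cosine_sim(emb, emb), dot_sim(emb, emb))
assert np.allclose(np.diag(cosine_sim(emb, emb)), 1.0)
```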
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("deman539/nomic-embed-text-v1")
# Run inference
sentences = [
'What mental health issues are associated with the increased use of technologies in schools and workplaces?',
'technologies has increased in schools and workplaces, and, when coupled with consequential management and \nevaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and \na reduced ability to use analytical reasoning.61 Documented patterns show that personal data is being aggregated by \ndata brokers to profile communities in harmful ways.62 The impact of all this data harvesting is corrosive,',
'but this approach may still produce harmful recommendations in response to other less-explicit, novel \nprompts (also relevant to CBRN Information or Capabilities, Data Privacy, Information Security, and \nObscene, Degrading and/or Abusive Content). Crafting such prompts deliberately is known as \n“jailbreaking,” or, manipulating prompts to circumvent output controls. Limitations of GAI systems can be',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8584 |
| cosine_accuracy@3 | 0.9838 |
| cosine_accuracy@5 | 0.9951 |
| cosine_accuracy@10 | 0.9992 |
| cosine_precision@1 | 0.8584 |
| cosine_precision@3 | 0.3279 |
| cosine_precision@5 | 0.199 |
| cosine_precision@10 | 0.0999 |
| cosine_recall@1 | 0.8584 |
| cosine_recall@3 | 0.9838 |
| cosine_recall@5 | 0.9951 |
| cosine_recall@10 | 0.9992 |
| cosine_ndcg@10 | 0.9418 |
| cosine_mrr@10 | 0.922 |
| **cosine_map@100** | **0.9221** |
| dot_accuracy@1 | 0.8584 |
| dot_accuracy@3 | 0.9838 |
| dot_accuracy@5 | 0.9951 |
| dot_accuracy@10 | 0.9992 |
| dot_precision@1 | 0.8584 |
| dot_precision@3 | 0.3279 |
| dot_precision@5 | 0.199 |
| dot_precision@10 | 0.0999 |
| dot_recall@1 | 0.8584 |
| dot_recall@3 | 0.9838 |
| dot_recall@5 | 0.9951 |
| dot_recall@10 | 0.9992 |
| dot_ndcg@10 | 0.9418 |
| dot_mrr@10 | 0.922 |
| dot_map@100 | 0.9221 |
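The ranking metrics above follow their standard definitions; since each query here has a single relevant passage, accuracy@k and recall@k coincide, and precision@k equals accuracy@k divided by k (e.g. 0.9951 / 5 ≈ 0.1990, matching the table). A minimal pure-Python sketch of how such metrics are computed from a ranked hit list:

```python
def precision_at_k(ranked_relevant, k):
    """ranked_relevant: 0/1 relevance flags for the top-ranked results, in order."""
    return sum(ranked_relevant[:k]) / k

def reciprocal_rank(ranked_relevant):
    """1/rank of the first relevant result, or 0.0 if none is retrieved."""
    for rank, rel in enumerate(ranked_relevant, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

# One query whose single relevant document is ranked second:
hits = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
assert precision_at_k(hits, 1) == 0.0
assert precision_at_k(hits, 3) == 1 / 3
assert reciprocal_rank(hits) == 0.5
```

The reported values are these per-query scores averaged over the evaluation set by `InformationRetrievalEvaluator`.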
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,459 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 18.7 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 93.19 tokens</li><li>max: 337 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What should organizations include in contracts to evaluate third-party GAI processes and standards?</code> | <code>services acquisition and value chain risk management; and legal compliance. <br>Data Privacy; Information <br>Integrity; Information Security; <br>Intellectual Property; Value Chain <br>and Component Integration <br>GV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party <br>GAI processes and standards. <br>Information Integrity <br>GV-6.1-007 Inventory all third-party entities with access to organizational content and <br>establish approved GAI technology and service provider lists.</code> |
| <code>What steps should be taken to manage third-party entities with access to organizational content?</code> | <code>services acquisition and value chain risk management; and legal compliance. <br>Data Privacy; Information <br>Integrity; Information Security; <br>Intellectual Property; Value Chain <br>and Component Integration <br>GV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party <br>GAI processes and standards. <br>Information Integrity <br>GV-6.1-007 Inventory all third-party entities with access to organizational content and <br>establish approved GAI technology and service provider lists.</code> |
| <code>What should entities responsible for automated systems establish before deploying the system?</code> | <code>Clear organizational oversight. Entities responsible for the development or use of automated systems <br>should lay out clear governance structures and procedures. This includes clearly-stated governance proce<br>dures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing <br>assessment and mitigation. Organizational stakeholders including those with oversight of the business process</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
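MatryoshkaLoss trains the model so that leading prefixes of each embedding (768, 512, 256, 128, and 64 dimensions here) remain usable on their own. A downstream consumer can therefore truncate and re-normalize embeddings, as in this illustrative NumPy sketch (recent sentence-transformers releases also expose a `truncate_dim` argument on `SentenceTransformer` for the same purpose):

```python
import numpy as np

def truncate_embeddings(emb, dim):
    """Keep the first `dim` components and re-normalize to unit length,
    so the truncated vectors can still be scored with cosine similarity."""
    prefix = np.asarray(emb, dtype=np.float64)[:, :dim]
    return prefix / np.linalg.norm(prefix, axis=1, keepdims=True)

rng = np.random.default_rng(42)
full = rng.normal(size=(4, 768))
full = full / np.linalg.norm(full, axis=1, keepdims=True)  # stand-in outputs

small = truncate_embeddings(full, 256)
assert small.shape == (4, 256)
assert np.allclose(np.linalg.norm(small, axis=1), 1.0)
```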
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 20
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
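With `lr_scheduler_type: linear` and `warmup_steps: 0`, the learning rate decays linearly from 5e-05 to zero over the run's 1,540 optimizer steps (see the training logs below). A minimal sketch that mirrors, but does not import, the Hugging Face linear schedule:

```python
def linear_schedule(step, total_steps=1540, base_lr=5e-5, warmup_steps=0):
    """Linear warmup (none in this run) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

assert linear_schedule(0) == 5e-5
assert linear_schedule(770) == 2.5e-5   # halfway through training
assert linear_schedule(1540) == 0.0
```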
### Training Logs
| Epoch | Step | Training Loss | cosine_map@100 |
|:-------:|:----:|:-------------:|:--------------:|
| 0.6494 | 50 | - | 0.8493 |
| 1.0 | 77 | - | 0.8737 |
| 1.2987 | 100 | - | 0.8677 |
| 1.9481 | 150 | - | 0.8859 |
| 2.0 | 154 | - | 0.8886 |
| 2.5974 | 200 | - | 0.8913 |
| 3.0 | 231 | - | 0.9058 |
| 3.2468 | 250 | - | 0.8993 |
| 3.8961 | 300 | - | 0.9077 |
| 4.0 | 308 | - | 0.9097 |
| 4.5455 | 350 | - | 0.9086 |
| 5.0 | 385 | - | 0.9165 |
| 5.1948 | 400 | - | 0.9141 |
| 5.8442 | 450 | - | 0.9132 |
| 6.0 | 462 | - | 0.9138 |
| 6.4935 | 500 | 0.3094 | 0.9137 |
| 7.0 | 539 | - | 0.9166 |
| 7.1429 | 550 | - | 0.9172 |
| 7.7922 | 600 | - | 0.9160 |
| 8.0 | 616 | - | 0.9169 |
| 8.4416 | 650 | - | 0.9177 |
| 9.0 | 693 | - | 0.9169 |
| 9.0909 | 700 | - | 0.9177 |
| 9.7403 | 750 | - | 0.9178 |
| 10.0 | 770 | - | 0.9178 |
| 10.3896 | 800 | - | 0.9189 |
| 11.0 | 847 | - | 0.9180 |
| 11.0390 | 850 | - | 0.9180 |
| 11.6883 | 900 | - | 0.9188 |
| 12.0 | 924 | - | 0.9192 |
| 12.3377 | 950 | - | 0.9204 |
| 12.9870 | 1000 | 0.0571 | 0.9202 |
| 13.0 | 1001 | - | 0.9201 |
| 13.6364 | 1050 | - | 0.9212 |
| 14.0 | 1078 | - | 0.9203 |
| 14.2857 | 1100 | - | 0.9219 |
| 14.9351 | 1150 | - | 0.9207 |
| 15.0 | 1155 | - | 0.9207 |
| 15.5844 | 1200 | - | 0.9210 |
| 16.0 | 1232 | - | 0.9208 |
| 16.2338 | 1250 | - | 0.9216 |
| 16.8831 | 1300 | - | 0.9209 |
| 17.0 | 1309 | - | 0.9209 |
| 17.5325 | 1350 | - | 0.9216 |
| 18.0 | 1386 | - | 0.9213 |
| 18.1818 | 1400 | - | 0.9221 |
| 18.8312 | 1450 | - | 0.9217 |
| 19.0 | 1463 | - | 0.9217 |
| 19.4805 | 1500 | 0.0574 | 0.9225 |
| 20.0 | 1540 | - | 0.9221 |
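The fractional epochs in the table follow directly from the dataset and batch size: 2,459 samples at batch size 32 give ceil(2459 / 32) = 77 steps per epoch, so step 100 lands at epoch 100 / 77 ≈ 1.2987, as logged. Illustrative check:

```python
import math

samples, batch_size = 2459, 32
steps_per_epoch = math.ceil(samples / batch_size)
assert steps_per_epoch == 77

def epoch_at(step):
    return step / steps_per_epoch

assert round(epoch_at(100), 4) == 1.2987
assert epoch_at(1540) == 20.0  # final step of the 20-epoch run
```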
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "nomic-ai/nomic-embed-text-v1", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:2459", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "What types of applications may require confidentiality during their launch?", "sentences": ["Taken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help \nguard the American public against many of the potential and actual harms identified by researchers, technolo\ngists, advocates, journalists, policymakers, and communities in the United States and around the world. This \ntechnical companion is intended to be used as a reference by people across many circumstances – anyone", "deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. \nAction ID \nSuggested Action \nGAI Risks \nMG-2.4-001 \nEstablish and maintain communication plans to inform AI stakeholders as part of \nthe deactivation or disengagement process of a specific GAI system (including for \nopen-source models) or context of use, including reasons, workarounds, user \naccess removal, alternative processes, contact information, etc. \nHuman-AI Configuration", "launch may need to be confidential. 
Government applications, particularly law enforcement applications or \napplications that raise national security considerations, may require confidential or limited engagement based \non system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation \nshould be documented, and the automated system developers were proposing to create, use, or deploy should \nbe reconsidered based on this feedback."]}, {"source_sentence": "What is the main focus of the paper by Chandra et al. (2023) regarding Chinese influence operations?", "sentences": ["https://arxiv.org/abs/2403.06634 \nChandra, B. et al. (2023) Dismantling the Disinformation Business of Chinese Influence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\nchinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika. \nResearchGate. https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\nAI_Companionship_A_Dialectical_Inquiry_into_Replika", "monocultures,3” resulting from repeated use of the same model, or impacts on access to \nopportunity, labor markets, and the creative economies.4 \n• \nSource of risk: Risks may emerge from factors related to the design, training, or operation of the \nGAI model itself, stemming in some cases from GAI model or system inputs, and in other cases, \nfrom GAI system outputs. Many GAI risks, however, originate from human behavior, including", "limited to GAI model or system architecture, training mechanisms and libraries, data types used for \ntraining or fine-tuning, levels of model access or availability of model weights, and application or use \ncase context. \nOrganizations may choose to tailor how they measure GAI risks based on these characteristics. 
They may \nadditionally wish to allocate risk management resources relative to the severity and likelihood of"]}, {"source_sentence": "What steps are being taken to enhance transparency and accountability in the GAI system?", "sentences": ["security, health, foreign relations, the environment, and the technological recovery and use of resources, among \nother topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of \nManagement and Budget (OMB) with an annual review and analysis of Federal research and development in \nbudgets, and serves as a source of scientific and technological analysis and judgment for the President with", "steps taken to update the GAI system to enhance transparency and \naccountability. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-4.1-006 \nTrack dataset modifications for provenance by monitoring data deletions, \nrectification requests, and other changes that may impact the verifiability of \ncontent origins. \nInformation Integrity", "content. Some well-known techniques for provenance data tracking include digital watermarking, \nmetadata recording, digital fingerprinting, and human authentication, among others. \nProvenance Data Tracking Approaches \nProvenance data tracking techniques for GAI systems can be used to track the history and origin of data \ninputs, metadata, and synthetic content. Provenance data tracking records the origin and history for"]}, {"source_sentence": "What are some examples of mechanisms for human consideration and fallback mentioned in the context?", "sentences": ["consequences resulting from the utilization of content provenance approaches on users and \ncommunities. Furthermore, organizations can track and document the provenance of datasets to identify \ninstances in which AI-generated data is a potential root cause of performance issues with the GAI \nsystem. \nA.1.8. 
Incident Disclosure \nOverview \nAI incidents can be defined as an “event, circumstance, or series of events where the development, use,", "fully impact rights, opportunities, or access. Automated systems that have greater control over outcomes, \nprovide input to high-stakes decisions, relate to sensitive domains, or otherwise have a greater potential to \nmeaningfully impact rights, opportunities, or access should have greater availability (e.g., staffing) and over\nsight of human consideration and fallback mechanisms. \nAccessible. Mechanisms for human consideration and fallback, whether in-person, on paper, by phone, or", "•\nFrida Polli, CEO, Pymetrics\n•\nKaren Levy, Assistant Professor, Department of Information Science, Cornell University\n•\nNatasha Duarte, Project Director, Upturn\n•\nElana Zeide, Assistant Professor, University of Nebraska College of Law\n•\nFabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport and Community\nAdvocate and Floor Captain, Atlantic Plaza Towers Tenants Association"]}, {"source_sentence": "What mental health issues are associated with the increased use of technologies in schools and workplaces?", "sentences": ["but this approach may still produce harmful recommendations in response to other less-explicit, novel \nprompts (also relevant to CBRN Information or Capabilities, Data Privacy, Information Security, and \nObscene, Degrading and/or Abusive Content). Crafting such prompts deliberately is known as \n“jailbreaking,” or, manipulating prompts to circumvent output controls. Limitations of GAI systems can be", "external use, narrow vs. broad application scope, fine-tuning, and varieties of \ndata sources (e.g., grounding, retrieval-augmented generation). 
\nData Privacy; Intellectual \nProperty", "technologies has increased in schools and workplaces, and, when coupled with consequential management and \nevaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and \na reduced ability to use analytical reasoning.61 Documented patterns show that personal data is being aggregated by \ndata brokers to profile communities in harmful ways.62 The impact of all this data harvesting is corrosive,"]}], "model-index": [{"name": "SentenceTransformer based on nomic-ai/nomic-embed-text-v1", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.8584142394822006, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9838187702265372, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9951456310679612, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9991909385113269, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.8584142394822006, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.32793959007551243, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1990291262135922, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09991909385113268, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.8584142394822006, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9838187702265372, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9951456310679612, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9991909385113269, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9417951214306157, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9220443571171728, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", 
"value": 0.9221065926163013, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.8584142394822006, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.9838187702265372, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.9951456310679612, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 0.9991909385113269, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", "value": 0.8584142394822006, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.32793959007551243, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.1990291262135922, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.09991909385113268, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.8584142394822006, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.9838187702265372, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.9951456310679612, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 0.9991909385113269, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.9417951214306157, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.9220443571171728, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.9221065926163013, "name": "Dot Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,334 |
mansoorhamidzadeh/mt5_en_fa_translation
|
mansoorhamidzadeh
|
text2text-generation
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"persian",
"mt5-small",
"persian translation",
"seq2seq",
"farsi",
"fa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-27T10:06:35Z |
2024-06-28T13:28:36+00:00
| 24 | 0 |
---
language:
- fa
library_name: transformers
license: mit
tags:
- persian
- mt5-small
- mt5
- persian translation
- seq2seq
- farsi
---
# Model Card: English to Persian Translation using MT5-Small
## Model Details
**Model Description:**
This model is designed to translate text from English to Persian (Farsi) using the MT5-Small architecture. MT5 is a multilingual variant of the T5 model, pretrained on a diverse set of languages.
**Intended Use:**
The model is intended for use in applications where automatic translation from English to Persian is required. It can be used for translating documents, web pages, or any other text-based content.
**Model Architecture:**
- **Model Type:** MT5-Small
- **Language Pair:** English (en) to Persian (fa)
## Training Data
**Dataset:**
The model was trained on a dataset consisting of 100,000 parallel sentences of English and Persian text. The data includes various sources to cover a wide range of topics and ensure diversity.
**Data Preprocessing:**
- Text normalization was performed to ensure consistency.
- Tokenization was done using the SentencePiece tokenizer.
## Training Procedure
**Training Configuration:**
- **Number of Epochs:** 4
- **Batch Size:** 8
- **Learning Rate:** 5e-5
- **Optimizer:** AdamW
**Hardware:**
- **Training Environment:** NVIDIA P100 GPU
- **Training Time:** Approximately 4 hours
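As a rough sanity check, the figures above imply the following optimizer-step count. This is a back-of-envelope sketch; it assumes no gradient accumulation and a single device, neither of which the card specifies:

```python
import math

# Back-of-envelope optimizer-step count implied by the card's figures.
# Assumes no gradient accumulation and a single device (not stated above).
num_pairs = 100_000   # parallel sentence pairs in the training set
batch_size = 8
epochs = 4

steps_per_epoch = math.ceil(num_pairs / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 12500 50000
```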
## How To Use
```python
import torch
from transformers import pipeline, MT5ForConditionalGeneration, MT5Tokenizer, Text2TextGenerationPipeline
# Function to translate using the pipeline
def translate_with_pipeline(text):
    translator = Text2TextGenerationPipeline(model='NLPclass/mt5_en_fa_translation', tokenizer='NLPclass/mt5_en_fa_translation')
    return translator(text, max_length=128, num_beams=4)[0]['generated_text']
# Example usage
text = "Hello, how are you?"
# Using pipeline
print("Pipeline Translation:", translate_with_pipeline(text))
```
## Ethical Considerations
- The model's translations are only as good as the data it was trained on, and biases present in the training data may propagate through the model's outputs.
- Users should be cautious when using the model for critical tasks, as automatic translations can sometimes be inaccurate or misleading.
## Citation
If you use this model in your research or applications, please cite it as follows:
```bibtex
@misc{mt5_en_fa_translation,
author = {mansoorhamidzadeh},
title = {English to Persian Translation using MT5-Small},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/mansoorhamidzadeh/mt5_en_fa_translation}},
}
```
| null |
Non_BioNLP
|
# Model Card: English to Persian Translation using MT5-Small
## Model Details
**Model Description:**
This model is designed to translate text from English to Persian (Farsi) using the MT5-Small architecture. MT5 is a multilingual variant of the T5 model, pretrained on a diverse set of languages.
**Intended Use:**
The model is intended for use in applications where automatic translation from English to Persian is required. It can be used for translating documents, web pages, or any other text-based content.
**Model Architecture:**
- **Model Type:** MT5-Small
- **Language Pair:** English (en) to Persian (fa)
## Training Data
**Dataset:**
The model was trained on a dataset consisting of 100,000 parallel sentences of English and Persian text. The data includes various sources to cover a wide range of topics and ensure diversity.
**Data Preprocessing:**
- Text normalization was performed to ensure consistency.
- Tokenization was done using the SentencePiece tokenizer.
## Training Procedure
**Training Configuration:**
- **Number of Epochs:** 4
- **Batch Size:** 8
- **Learning Rate:** 5e-5
- **Optimizer:** AdamW
**Hardware:**
- **Training Environment:** NVIDIA P100 GPU
- **Training Time:** Approximately 4 hours
## How To Use
```python
import torch
from transformers import pipeline, MT5ForConditionalGeneration, MT5Tokenizer, Text2TextGenerationPipeline
# Function to translate using the pipeline
def translate_with_pipeline(text):
    translator = Text2TextGenerationPipeline(model='NLPclass/mt5_en_fa_translation', tokenizer='NLPclass/mt5_en_fa_translation')
    return translator(text, max_length=128, num_beams=4)[0]['generated_text']
# Example usage
text = "Hello, how are you?"
# Using pipeline
print("Pipeline Translation:", translate_with_pipeline(text))
```
## Ethical Considerations
- The model's translations are only as good as the data it was trained on, and biases present in the training data may propagate through the model's outputs.
- Users should be cautious when using the model for critical tasks, as automatic translations can sometimes be inaccurate or misleading.
## Citation
If you use this model in your research or applications, please cite it as follows:
```bibtex
@misc{mt5_en_fa_translation,
author = {mansoorhamidzadeh},
title = {English to Persian Translation using MT5-Small},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/mansoorhamidzadeh/mt5_en_fa_translation}},
}
```
|
{"language": ["fa"], "library_name": "transformers", "license": "mit", "tags": ["persian", "mt5-small", "mt5", "persian translation", "seq2seq", "farsi"]}
|
task
|
[
"TRANSLATION"
] | 44,335 |
ElmehdiSMILI/jais-13b
|
ElmehdiSMILI
|
text-generation
|
[
"transformers",
"pytorch",
"jais",
"text-generation",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"custom_code",
"ar",
"en",
"arxiv:2308.16149",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 2024-07-22T16:23:09Z |
2024-09-04T13:42:53+00:00
| 9 | 0 |
---
language:
- ar
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
---
# Jais-13b
<!-- Provide a quick summary of what the model is/does. -->
This is a 13 billion parameter pre-trained bilingual large language model for both Arabic and English,
trained on a dataset containing 72 billion Arabic tokens and 279 billion English/code tokens.
The Arabic data is iterated over for 1.6 epochs (as opposed to 1 epoch for English/code), for a total of 395 billion tokens of training.
The model is based on transformer-based decoder-only (GPT-3) architecture and uses SwiGLU
non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate
to long sequence lengths, providing improved context handling and model precision.
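The ALiBi scheme mentioned above can be illustrated in a few lines: each attention head is assigned a fixed slope from a geometric sequence, and attention logits are penalized in proportion to the key-query distance. This is a simplified sketch of the published method (assuming a power-of-two head count), not code from this repository:

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Geometric sequence of head-specific slopes; for a power-of-two
    # head count this is 2^(-8/n), 2^(-16/n), ..., 2^(-8).
    # (Non-power-of-two head counts use an interpolated variant.)
    ratio = 2 ** (-8 / n_heads)
    return [ratio ** (i + 1) for i in range(n_heads)]

def alibi_bias(slope: float, seq_len: int) -> list[list[float]]:
    # Causal bias added to attention logits: attending from position i
    # back to position j (j <= i) is penalized by slope * (i - j);
    # future positions are masked out and omitted here.
    return [[-slope * (i - j) for j in range(i + 1)] for i in range(seq_len)]

slopes = alibi_slopes(8)
print(slopes[0], slopes[-1])  # 0.5 0.00390625
```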
## Getting started
Below is sample code to use the model. Note that the model requires a custom model class, so users must
enable `trust_remote_code=True` while loading the model.
Also, note that this code is tested on `transformers==4.28.0`.
```python
# -*- coding: utf-8 -*-
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "core42/jais-13b"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
def get_response(text,tokenizer=tokenizer,model=model):
input_ids = tokenizer(text, return_tensors="pt").input_ids
inputs = input_ids.to(device)
input_len = inputs.shape[-1]
generate_ids = model.generate(
inputs,
top_p=0.9,
temperature=0.3,
max_length=200-input_len,
min_length=input_len + 4,
repetition_penalty=1.2,
do_sample=True,
)
response = tokenizer.batch_decode(
generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)[0]
return response
text = "عاصمة دولة الإمارات العربية المتحدة ه"
print(get_response(text))
text = "The capital of UAE is"
print(get_response(text))
```
## Model Details
- **Developed by:** [Inception](https://www.inceptioniai.org/en/), [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/), and [Cerebras Systems](https://www.cerebras.net/).
- **Language(s) (NLP):** Arabic and English
- **License:** Apache 2.0
- **Input:** Text only data.
- **Output:** Model generates text.
- **Paper :** [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/abs/2308.16149)
- **Demo :** [Access here](https://arabic-gpt.ai)
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release the Jais 13B model under a full open source license. We welcome all feedback and opportunities to collaborate.
This model is the first release from the Inception - MBZUAI - Cerebras partnership, and at the time of release,
achieved state of the art across a comprehensive Arabic test suite as described in the accompanying technical report.
Some potential downstream uses include:
- *Research*: This model can be used by researchers and developers.
- *Commercial Use*: It can be used as a base model to further fine-tune for specific use cases (similar to [jais-13b-chat](https://huggingface.co/inception-mbzuai/jais-13b-chat)).
Some potential use cases include:
- Chat-assistants.
- Customer service.
Audiences that we hope will benefit from our model:
- *Academics*: For those researching Arabic natural language processing.
- *Businesses*: Companies targeting Arabic-speaking audiences.
- *Developers*: Those integrating Arabic language capabilities in apps.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
While Jais-13b is a powerful Arabic and English bilingual model, it's essential to understand its limitations and the potential for misuse.
It is prohibited to use the model in any manner that violates applicable laws or regulations.
The following are some example scenarios where the model should not be used.
- *Malicious Use*: The model should not be used for generating harmful, misleading, or inappropriate content. This includes but is not limited to:
- Generating or promoting hate speech, violence, or discrimination.
- Spreading misinformation or fake news.
- Engaging in or promoting illegal activities.
- *Sensitive Information*: The model should not be used to handle or generate personal, confidential, or sensitive information.
- *Generalization Across All Languages*: Jais-13b is bilingual and optimized for Arabic and English; it should not be assumed to have equal proficiency in other languages or dialects.
- *High-Stakes Decisions*: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on publicly available data which was in part curated by Inception. We have employed different
techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLMs, will exhibit some bias.
The model is trained as an AI assistant for Arabic and English speakers. The model is limited to produce responses for queries in these two languages
and may not produce appropriate responses to other language queries.
By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content.
The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use.
We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For the pre-training of Jais-13b, we used a diverse bilingual corpus sourced from the Web and other sources. We also used publicly available English and code datasets.
To collect Arabic data, we use multiple sources including web pages, wikipedia articles, news articles, Arabic books,
and social network content. We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system.
We restrict this to high quality English resources such as English Wikipedia and English books. Further details about the training data can be found in the technical report.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
#### Training Hyperparameters
| Hyperparameter | Value |
|----------------------------|------------------------------|
| Precision | fp32 |
| Optimizer | AdamW |
| Learning rate | 0 to 0.012 (<= 95 steps) |
| | 0.012 to 0.0012 (> 95 steps) |
| Weight decay | 0.1 |
| Batch size | 1920 |
| Steps | 100551 |
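Read literally, the learning-rate rows above describe a warmup to 0.012 over the first 95 steps followed by a decay to 0.0012 over the remaining steps. Below is a sketch of that schedule, assuming linear interpolation in both phases; the linear shapes are an assumption, as the table only gives the endpoint values:

```python
def jais_lr(step: int,
            peak: float = 0.012,
            final: float = 0.0012,
            warmup_steps: int = 95,
            total_steps: int = 100_551) -> float:
    # Linear warmup from 0 to the peak LR, then linear decay to the final LR.
    # The linear shapes are assumed; the table only gives the endpoint values.
    if step <= warmup_steps:
        return peak * step / warmup_steps
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak + (final - peak) * frac

print(round(jais_lr(95), 6), round(jais_lr(100_551), 6))  # 0.012 0.0012
```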
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We conducted a comprehensive evaluation of Jais and benchmarked it against other leading base language models, focusing on both English and Arabic. The evaluation criteria spanned various dimensions, including:
- **Knowledge:** How well the model answers factual questions.
- **Reasoning:** The model's ability to answer questions requiring reasoning.
- **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
Arabic evaluation results:
| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|-------------|-------|-------|----------|-------|-----------|------|--------|------------|-------|------------|------------|-------------|
| Jais (13B) | **46.5** | 40.4 | 30.0 | 58.3 | 57.7 | 67.6 | 62.6 | 42.5 | 35.8 | 32.4 | 41.1 | 58.4 |
| BLOOM (7.1B) | 40.9 | 34.0 | 28.2 | 37.1 | 40.9 | 58.4 | 59.9 | 39.1 | 27.3 | 28.0 | 44.4 | 53.5 |
| LLaMA2 (13B) | 38.1 | 29.2 | 28.4 | 32.0 | 34.3 | 52.9 | 63.8 | 36.4 | 24.3 | 30.0 | 45.5 | 49.9 |
| AraT5 (220M) | 32.0 | 24.7 | 23.8 | 26.3 | 25.5 | 50.4 | 58.2 | 33.9 | 24.7 | 25.4 | 20.9 | 47.2 |
| AraBART (139M) | 36.7 | 26.5 | 27.5 | 34.3 | 28.1 | 52.6 | 57.1 | 34.6 | 25.1 | 28.6 | 49.8 | 48.8 |
All tasks above report accuracy or F1 scores (the higher the better). For the sake of brevity, we do not include results over English tasks.
Detailed comparisons in both languages and evaluation dataset details can be found in the technical report.
## Citation
```
@misc{sengupta2023jais,
title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models},
author={Neha Sengupta and Sunil Kumar Sahu and Bokang Jia and Satheesh Katipomu and Haonan Li and Fajri Koto and Osama Mohammed Afzal and Samta Kamboj and Onkar Pandit and Rahul Pal and Lalit Pradhan and Zain Muhammad Mujahid and Massa Baali and Alham Fikri Aji and Zhengzhong Liu and Andy Hock and Andrew Feldman and Jonathan Lee and Andrew Jackson and Preslav Nakov and Timothy Baldwin and Eric Xing},
year={2023},
eprint={2308.16149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Copyright Inception Institute of Artificial Intelligence Ltd.
| null |
Non_BioNLP
|
# Jais-13b
<!-- Provide a quick summary of what the model is/does. -->
This is a 13 billion parameter pre-trained bilingual large language model for both Arabic and English,
trained on a dataset containing 72 billion Arabic tokens and 279 billion English/code tokens.
The Arabic data is iterated over for 1.6 epochs (as opposed to 1 epoch for English/code), for a total of 395 billion tokens of training.
The model is based on transformer-based decoder-only (GPT-3) architecture and uses SwiGLU
non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate
to long sequence lengths, providing improved context handling and model precision.
## Getting started
Below is sample code to use the model. Note that the model requires a custom model class, so users must
enable `trust_remote_code=True` while loading the model.
Also, note that this code is tested on `transformers==4.28.0`.
```python
# -*- coding: utf-8 -*-
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "core42/jais-13b"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
def get_response(text,tokenizer=tokenizer,model=model):
input_ids = tokenizer(text, return_tensors="pt").input_ids
inputs = input_ids.to(device)
input_len = inputs.shape[-1]
generate_ids = model.generate(
inputs,
top_p=0.9,
temperature=0.3,
max_length=200-input_len,
min_length=input_len + 4,
repetition_penalty=1.2,
do_sample=True,
)
response = tokenizer.batch_decode(
generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)[0]
return response
text = "عاصمة دولة الإمارات العربية المتحدة ه"
print(get_response(text))
text = "The capital of UAE is"
print(get_response(text))
```
## Model Details
- **Developed by:** [Inception](https://www.inceptioniai.org/en/), [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/), and [Cerebras Systems](https://www.cerebras.net/).
- **Language(s) (NLP):** Arabic and English
- **License:** Apache 2.0
- **Input:** Text only data.
- **Output:** Model generates text.
- **Paper :** [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/abs/2308.16149)
- **Demo :** [Access here](https://arabic-gpt.ai)
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release the Jais 13B model under a full open source license. We welcome all feedback and opportunities to collaborate.
This model is the first release from the Inception - MBZUAI - Cerebras partnership, and at the time of release,
achieved state of the art across a comprehensive Arabic test suite as described in the accompanying technical report.
Some potential downstream uses include:
- *Research*: This model can be used by researchers and developers.
- *Commercial Use*: It can be used as a base model to further fine-tune for specific use cases (similar to [jais-13b-chat](https://huggingface.co/inception-mbzuai/jais-13b-chat)).
Some potential use cases include:
- Chat-assistants.
- Customer service.
Audiences that we hope will benefit from our model:
- *Academics*: For those researching Arabic natural language processing.
- *Businesses*: Companies targeting Arabic-speaking audiences.
- *Developers*: Those integrating Arabic language capabilities in apps.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
While Jais-13b is a powerful Arabic and English bilingual model, it is essential to understand its limitations and the potential for misuse.
It is prohibited to use the model in any manner that violates applicable laws or regulations.
The following are some example scenarios where the model should not be used.
- *Malicious Use*: The model should not be used for generating harmful, misleading, or inappropriate content. This includes but is not limited to:
- Generating or promoting hate speech, violence, or discrimination.
- Spreading misinformation or fake news.
- Engaging in or promoting illegal activities.
- *Sensitive Information*: The model should not be used to handle or generate personal, confidential, or sensitive information.
- *Generalization Across All Languages*: Jais-13b is bilingual and optimized for Arabic and English; it should not be assumed to have equal proficiency in other languages or dialects.
- *High-Stakes Decisions*: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on publicly available data which was in part curated by Inception. We have employed different
techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLM models, will exhibit some bias.
The model is trained as an AI assistant for Arabic and English speakers. The model is limited to producing responses for queries in these two languages
and may not produce appropriate responses to other language queries.
By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content.
The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use.
We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For the pre-training of Jais-13b, we used a diverse bilingual corpus sourced from the Web and other sources. We also used publicly available English and code datasets.
To collect Arabic data, we use multiple sources including web pages, wikipedia articles, news articles, Arabic books,
and social network content. We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system.
We restrict this to high quality English resources such as English Wikipedia and English books. Further details about the training data can be found in the technical report.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
#### Training Hyperparameters
| Hyperparameter | Value |
|----------------------------|------------------------------|
| Precision | fp32 |
| Optimizer | AdamW |
| Learning rate | 0 to 0.012 (<= 95 steps) |
| | 0.012 to 0.0012 (> 95 steps) |
| Weight decay | 0.1 |
| Batch size | 1920 |
| Steps | 100551 |
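
The table gives only the endpoints of the learning-rate schedule, so the exact decay shape is not specified here. Assuming linear warmup over the first 95 steps and a linear decay afterwards (both assumptions; the technical report may use a different curve), the schedule can be sketched as:

```python
def jais_lr(step, warmup_steps=95, total_steps=100551,
            peak_lr=0.012, final_lr=0.0012):
    # Linear warmup from 0 to peak_lr over the first 95 steps,
    # then an (assumed) linear decay down to final_lr at the last step.
    if step <= warmup_steps:
        return peak_lr * step / warmup_steps
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr + frac * (final_lr - peak_lr)

# Endpoints match the table within floating-point tolerance.
assert abs(jais_lr(95) - 0.012) < 1e-9
assert abs(jais_lr(100551) - 0.0012) < 1e-9
```

The function name and decay shape are illustrative only; only the endpoint values come from the table above.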
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We conducted a comprehensive evaluation of Jais and benchmarked it against other leading base language models, focusing on both English and Arabic. The evaluation criteria spanned various dimensions, including:
- **Knowledge:** How well the model answers factual questions.
- **Reasoning:** The model's ability to answer questions requiring reasoning.
- **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
Arabic evaluation results:
| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|-------------|-------|-------|----------|-------|-----------|------|--------|------------|-------|------------|------------|-------------|
| Jais (13B) | **46.5** | 40.4 | 30.0 | 58.3 | 57.7 | 67.6 | 62.6 | 42.5 | 35.8 | 32.4 | 41.1 | 58.4 |
| BLOOM (7.1B) | 40.9 |34.0 | 28.2 | 37.1 | 40.9 | 58.4 | 59.9 | 39.1 | 27.3 | 28.0 | 44.4 | 53.5 |
| LLaMA2 (13B) | 38.1 | 29.2 | 28.4 | 32.0 | 34.3 | 52.9 | 63.8 | 36.4 | 24.3 | 30.0 | 45.5 | 49.9 |
| AraT5 (220M) | 32.0 | 24.7 | 23.8 | 26.3 | 25.5 | 50.4 | 58.2 | 33.9 | 24.7 | 25.4 | 20.9 | 47.2 |
| AraBART (139M) | 36.7 | 26.5 | 27.5 | 34.3 | 28.1 | 52.6 | 57.1 | 34.6 | 25.1 | 28.6 | 49.8 | 48.8 |
All tasks above report accuracy or F1 scores (the higher the better). For the sake of brevity, we do not include results over English tasks.
Detailed comparisons in both languages and evaluation dataset details can be found in the technical report.
## Citation
```
@misc{sengupta2023jais,
title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models},
author={Neha Sengupta and Sunil Kumar Sahu and Bokang Jia and Satheesh Katipomu and Haonan Li and Fajri Koto and Osama Mohammed Afzal and Samta Kamboj and Onkar Pandit and Rahul Pal and Lalit Pradhan and Zain Muhammad Mujahid and Massa Baali and Alham Fikri Aji and Zhengzhong Liu and Andy Hock and Andrew Feldman and Jonathan Lee and Andrew Jackson and Preslav Nakov and Timothy Baldwin and Eric Xing},
year={2023},
eprint={2308.16149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Copyright Inception Institute of Artificial Intelligence Ltd.
|
{"language": ["ar", "en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["Arabic", "English", "LLM", "Decoder", "causal-lm"]}
|
task
|
[
"TRANSLATION"
] | 44,336 |
hitloop/indic_trans2
|
hitloop
|
translation
|
[
"indic_trans_v2",
"translation",
"en",
"hi",
"te",
"ta",
"mr",
"kn",
"ml",
"endpoints_compatible",
"region:us"
] | 2023-11-03T14:34:52Z |
2023-11-04T13:23:11+00:00
| 0 | 2 |
---
language:
- en
- hi
- te
- ta
- mr
- kn
- ml
pipeline_tag: translation
tags:
- indic_trans_v2
---
# IndicTrans2 HF Compatible Models
In this section, we provide details on how to use our [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2) models, which were originally trained with [fairseq](https://github.com/facebookresearch/fairseq), with [HuggingFace transformers](https://huggingface.co/docs/transformers/index) for inference purposes. Our scripts for HuggingFace compatible models are adapted from the [M2M100 repository](https://github.com/huggingface/transformers/tree/main/src/transformers/models/m2m_100).
### Setup
To get started, follow these steps to set up the environment:
```
# Clone the github repository and navigate to the project directory.
git clone https://github.com/AI4Bharat/IndicTrans2
cd IndicTrans2
# Install all the dependencies and requirements associated with the project for running HF compatible models.
source install.sh
```
> Note: The `install.sh` script in this directory is specifically for running HF compatible models for inference.
### Models
| Model | 🤗 HuggingFace Checkpoints |
|----------|-----------------------------------|
| Preprint En-Indic | [ai4bharat/indictrans2-en-indic-1B](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) |
| Preprint Indic-En | [ai4bharat/indictrans2-indic-en-1B](https://huggingface.co/ai4bharat/indictrans2-indic-en-1B) |
### Inference
With the conversion complete, you can now perform inference using HuggingFace Transformers.
You can start with the provided `example.py` script and customize it for your specific translation use case:
```bash
python3 example.py
```
Feel free to modify the `example.py` script to suit your translation needs.
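
The ISO 639-1 tags on this card correspond to FLORES-200-style language codes in the IndicTrans2 interface. The mapping below is an assumption based on the upstream repository (verify the exact codes against `example.py` before relying on them):

```python
# ISO 639-1 tags from this card mapped to the FLORES-200-style codes
# used by IndicTrans2 (assumed from the upstream repo; verify locally).
LANG_CODES = {
    "en": "eng_Latn",
    "hi": "hin_Deva",
    "te": "tel_Telu",
    "ta": "tam_Taml",
    "mr": "mar_Deva",
    "kn": "kan_Knda",
    "ml": "mal_Mlym",
}

def flores_code(iso_tag):
    # Look up the script-qualified code for a card language tag.
    return LANG_CODES[iso_tag]

assert flores_code("hi") == "hin_Deva"
```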
### Citation
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
| null |
Non_BioNLP
|
# IndicTrans2 HF Compatible Models
In this section, we provide details on how to use our [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2) models, which were originally trained with [fairseq](https://github.com/facebookresearch/fairseq), with [HuggingFace transformers](https://huggingface.co/docs/transformers/index) for inference purposes. Our scripts for HuggingFace compatible models are adapted from the [M2M100 repository](https://github.com/huggingface/transformers/tree/main/src/transformers/models/m2m_100).
### Setup
To get started, follow these steps to set up the environment:
```
# Clone the github repository and navigate to the project directory.
git clone https://github.com/AI4Bharat/IndicTrans2
cd IndicTrans2
# Install all the dependencies and requirements associated with the project for running HF compatible models.
source install.sh
```
> Note: The `install.sh` script in this directory is specifically for running HF compatible models for inference.
### Models
| Model | 🤗 HuggingFace Checkpoints |
|----------|-----------------------------------|
| Preprint En-Indic | [ai4bharat/indictrans2-en-indic-1B](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) |
| Preprint Indic-En | [ai4bharat/indictrans2-indic-en-1B](https://huggingface.co/ai4bharat/indictrans2-indic-en-1B) |
### Inference
With the conversion complete, you can now perform inference using HuggingFace Transformers.
You can start with the provided `example.py` script and customize it for your specific translation use case:
```bash
python3 example.py
```
Feel free to modify the `example.py` script to suit your translation needs.
### Citation
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
|
{"language": ["en", "hi", "te", "ta", "mr", "kn", "ml"], "pipeline_tag": "translation", "tags": ["indic_trans_v2"]}
|
task
|
[
"TRANSLATION"
] | 44,337 |
Peenipat/ThaiT5-Instruct
|
Peenipat
|
text2text-generation
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"th",
"dataset:airesearch/wangchanx-seed-free-synthetic-instruct-thai-120k",
"base_model:kobkrit/thai-t5-base",
"base_model:finetune:kobkrit/thai-t5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-14T04:13:33Z |
2025-02-18T19:50:27+00:00
| 109 | 1 |
---
base_model:
- kobkrit/thai-t5-base
datasets:
- airesearch/wangchanx-seed-free-synthetic-instruct-thai-120k
language:
- th
library_name: transformers
license: mit
metrics:
- bleu
- rouge
- exact_match
pipeline_tag: text2text-generation
---
# **ThaiT5-Instruct**
## **Model Description**
`ThaiT5-Instruct` is a fine-tuned version of `kobkrit/thai-t5-base`, trained on the **WangchanX Seed-Free Synthetic Instruct Thai 120k** dataset. This model supports various NLP tasks, including:
- **Conversation**
- **Multiple Choice Reasoning**
- **Brainstorming**
- **Question Answering**
- **Summarization**
The model has been trained for **13 epochs** and can be further improved with more resources.
---
## **Training Details**
- **Base Model**: `kobkrit/thai-t5-base`
- **Epochs**: `13`
- **Batch Size per Device**: `32`
- **Gradient Accumulation Steps**: `2`
- **Optimizer**: AdamW
- **Hardware Used**: `A100`
**Training Loss per Epoch**:
```
[2.2463, 1.7010, 1.5261, 1.4626, 1.4085, 1.3844, 1.3647, 1.3442, 1.3373, 1.3182, 1.3169, 1.3016]
```
**Validation Loss per Epoch**:
```
[1.4781, 1.3761, 1.3131, 1.2775, 1.2549, 1.2364, 1.2226, 1.2141, 1.2043, 1.1995, 1.1954, 1.1929]
```
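
As a quick sanity check on the curves above, the listed values can be verified to decrease strictly from epoch to epoch (a minimal sketch; the numbers are copied verbatim from the lists above):

```python
train_loss = [2.2463, 1.7010, 1.5261, 1.4626, 1.4085, 1.3844,
              1.3647, 1.3442, 1.3373, 1.3182, 1.3169, 1.3016]
val_loss = [1.4781, 1.3761, 1.3131, 1.2775, 1.2549, 1.2364,
            1.2226, 1.2141, 1.2043, 1.1995, 1.1954, 1.1929]

def strictly_decreasing(xs):
    # True when every epoch improves on the one before it.
    return all(later < earlier for earlier, later in zip(xs, xs[1:]))

assert strictly_decreasing(train_loss)
assert strictly_decreasing(val_loss)
```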
---
## **Evaluation Results**
The model was evaluated using several NLP metrics, with the following results:
| Metric | Score |
|-------------|--------|
| ROUGE-1 | 0.0617 |
| ROUGE-2 | 0.0291 |
| ROUGE-L | 0.061 |
| BLEU | 0.0093 |
| Exact Match | 0.2516 |
| F1 Score | 27.8984 |
---
## **Usage**
### **Basic Inference (Without Context)**
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("Peenipat/ThaiT5-Instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Peenipat/ThaiT5-Instruct")
input_text = "หวัดดี"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"])
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
**Example:**
```python
input_text = "คำว่า ฮัก หมายถึงอะไร"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"])
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
**Output:**
```
"ฮัก หมายถึง ภาษา สันสกฤต ภาษา สันสกฤต "
```
---
### **Question Answering (With Context)**
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained("Peenipat/ThaiT5-Instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Peenipat/ThaiT5-Instruct")
model.eval()
qa_pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
def ask_question():
context = input("Input Context: ")
question = input("Input Question: ")
input_text = f"Context: {context} Question: {question}"
output = qa_pipeline(input_text,
max_length=60,
min_length=20,
no_repeat_ngram_size=3,
num_beams=5,
early_stopping=True)
output_text = output[0]['generated_text']
print("\nOutput:")
print(output_text)
```
**Example:**
```
Input Context: ฮัก คือความรู้สึกผูกพันและห่วงใยที่เกิดขึ้นระหว่างคนที่มีความสำคัญต่อกัน ไม่ว่าจะเป็นฮักหนุ่มสาว ฮักพ่อแม่ลูก หรือฮักพี่น้อง ฮักบ่ได้หมายถึงแค่ความสุข แต่ยังรวมถึงความเข้าใจ การอดทน และการเสียสละเพื่อกันและกัน คนอีสานมักแสดงความฮักผ่านการกระทำมากกว่าคำพูด เช่น การดูแลเอาใจใส่ และการอยู่เคียงข้างยามทุกข์ยาก ฮักแท้คือฮักที่มั่นคง บ่เปลี่ยนแปลงตามกาลเวลา และเต็มไปด้วยความจริงใจ
Input Question: คำว่า ฮัก หมายถึงอะไร
Output:
ฮัก ความรู้สึกผูกพันและห่วงใย เกิดขึ้นระหว่างคนมีความสําคัญต่อกัน ฮักบ่ได้หมายถึงความสุข ความเข้าใจ การอดทน เสียสละเพื่อกันและกัน คนอีสานมักแสดงความฮักผ่านการกระทํามากกว่าคําพูด ดูแลเอาใจใส่ ที่อยู่เคียงข้างยามทุกข์
```
---
## **Limitations & Future Improvements**
- The model can be further improved with additional training resources.
- Performance on complex reasoning tasks may require further fine-tuning on domain-specific datasets.
- The model does not possess general intelligence like **ChatGPT**, **Gemini**, or other advanced AI models. It excels at extracting answers from given contexts rather than generating knowledge independently.
---
## **Citation**
If you use this model, please cite it as follows:
```bibtex
@misc{PeenipatThaiT5Instruct,
title={ThaiT5-Instruct},
author={Peenipat},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/Peenipat/ThaiT5-Instruct}
}
```
| null |
Non_BioNLP
|
# **ThaiT5-Instruct**
## **Model Description**
`ThaiT5-Instruct` is a fine-tuned version of `kobkrit/thai-t5-base`, trained on the **WangchanX Seed-Free Synthetic Instruct Thai 120k** dataset. This model supports various NLP tasks, including:
- **Conversation**
- **Multiple Choice Reasoning**
- **Brainstorming**
- **Question Answering**
- **Summarization**
The model has been trained for **13 epochs** and can be further improved with more resources.
---
## **Training Details**
- **Base Model**: `kobkrit/thai-t5-base`
- **Epochs**: `13`
- **Batch Size per Device**: `32`
- **Gradient Accumulation Steps**: `2`
- **Optimizer**: AdamW
- **Hardware Used**: `A100`
**Training Loss per Epoch**:
```
[2.2463, 1.7010, 1.5261, 1.4626, 1.4085, 1.3844, 1.3647, 1.3442, 1.3373, 1.3182, 1.3169, 1.3016]
```
**Validation Loss per Epoch**:
```
[1.4781, 1.3761, 1.3131, 1.2775, 1.2549, 1.2364, 1.2226, 1.2141, 1.2043, 1.1995, 1.1954, 1.1929]
```
---
## **Evaluation Results**
The model was evaluated using several NLP metrics, with the following results:
| Metric | Score |
|-------------|--------|
| ROUGE-1 | 0.0617 |
| ROUGE-2 | 0.0291 |
| ROUGE-L | 0.061 |
| BLEU | 0.0093 |
| Exact Match | 0.2516 |
| F1 Score | 27.8984 |
---
## **Usage**
### **Basic Inference (Without Context)**
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("Peenipat/ThaiT5-Instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Peenipat/ThaiT5-Instruct")
input_text = "หวัดดี"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"])
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
**Example:**
```python
input_text = "คำว่า ฮัก หมายถึงอะไร"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"])
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```
**Output:**
```
"ฮัก หมายถึง ภาษา สันสกฤต ภาษา สันสกฤต "
```
---
### **Question Answering (With Context)**
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained("Peenipat/ThaiT5-Instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Peenipat/ThaiT5-Instruct")
model.eval()
qa_pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
def ask_question():
context = input("Input Context: ")
question = input("Input Question: ")
input_text = f"Context: {context} Question: {question}"
output = qa_pipeline(input_text,
max_length=60,
min_length=20,
no_repeat_ngram_size=3,
num_beams=5,
early_stopping=True)
output_text = output[0]['generated_text']
print("\nOutput:")
print(output_text)
```
**Example:**
```
Input Context: ฮัก คือความรู้สึกผูกพันและห่วงใยที่เกิดขึ้นระหว่างคนที่มีความสำคัญต่อกัน ไม่ว่าจะเป็นฮักหนุ่มสาว ฮักพ่อแม่ลูก หรือฮักพี่น้อง ฮักบ่ได้หมายถึงแค่ความสุข แต่ยังรวมถึงความเข้าใจ การอดทน และการเสียสละเพื่อกันและกัน คนอีสานมักแสดงความฮักผ่านการกระทำมากกว่าคำพูด เช่น การดูแลเอาใจใส่ และการอยู่เคียงข้างยามทุกข์ยาก ฮักแท้คือฮักที่มั่นคง บ่เปลี่ยนแปลงตามกาลเวลา และเต็มไปด้วยความจริงใจ
Input Question: คำว่า ฮัก หมายถึงอะไร
Output:
ฮัก ความรู้สึกผูกพันและห่วงใย เกิดขึ้นระหว่างคนมีความสําคัญต่อกัน ฮักบ่ได้หมายถึงความสุข ความเข้าใจ การอดทน เสียสละเพื่อกันและกัน คนอีสานมักแสดงความฮักผ่านการกระทํามากกว่าคําพูด ดูแลเอาใจใส่ ที่อยู่เคียงข้างยามทุกข์
```
---
## **Limitations & Future Improvements**
- The model can be further improved with additional training resources.
- Performance on complex reasoning tasks may require further fine-tuning on domain-specific datasets.
- The model does not possess general intelligence like **ChatGPT**, **Gemini**, or other advanced AI models. It excels at extracting answers from given contexts rather than generating knowledge independently.
---
## **Citation**
If you use this model, please cite it as follows:
```bibtex
@misc{PeenipatThaiT5Instruct,
title={ThaiT5-Instruct},
author={Peenipat},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/Peenipat/ThaiT5-Instruct}
}
```
|
{"base_model": ["kobkrit/thai-t5-base"], "datasets": ["airesearch/wangchanx-seed-free-synthetic-instruct-thai-120k"], "language": ["th"], "library_name": "transformers", "license": "mit", "metrics": ["bleu", "rouge", "exact_match"], "pipeline_tag": "text2text-generation"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,338 |
richard-park/inst-aihub-trans
|
richard-park
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-10T23:45:45Z |
2024-07-11T00:52:31+00:00
| 2,056 | 0 |
---
language:
- ko
- en
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** richard
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** translation
- **Language(s) (NLP):** Korean, English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** richard
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** translation
- **Language(s) (NLP):** Korean, English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["ko", "en"], "license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 44,339 |
fine-tuned/jinaai_jina-embeddings-v2-base-en-6262024-wtkc-webapp
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Contact",
"Company",
"Social Media",
"Website",
"Details",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-6262024-wtkc-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-27T04:09:12Z |
2024-06-27T04:09:28+00:00
| 6 | 0 |
---
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-6262024-wtkc-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Contact
- Company
- Social Media
- Website
- Details
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
Information Retrieval System for Business Data
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-6262024-wtkc-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
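For readers unfamiliar with `cos_sim`: it returns plain cosine similarity between the two embedding vectors. A minimal pure-Python sketch of the same computation (illustrative only; the real `sentence_transformers.util.cos_sim` operates on batched tensors):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

Scores near 1.0 indicate semantically similar texts; scores near 0.0 indicate unrelated ones.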
| null |
Non_BioNLP
|
|
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-6262024-wtkc-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Contact", "Company", "Social Media", "Website", "Details"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,340 |
AWS/MistralLite-AWQ
|
AWS
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2306.00978",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | 2024-05-20T06:42:17Z |
2024-05-20T07:10:40+00:00
| 30 | 3 |
---
license: apache-2.0
inference: false
---
# MistralLite-AWQ Model
MistralLite-AWQ is a version of the [MistralLite](https://huggingface.co/amazon/MistralLite) model that was
quantized using the AWQ method developed by [Lin et al. (2023)](https://arxiv.org/abs/2306.00978).
The MistralLite-AWQ models are approximately **70% smaller** than MistralLite while maintaining comparable performance.
Please refer to the [original MistralLite model card](https://huggingface.co/amazon/MistralLite) for details about the model
preparation and training processes.
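The quoted ~70% reduction is consistent with simple back-of-the-envelope arithmetic. The sketch below assumes a ~7.24B parameter count for the base model and ~4 bytes of scale/zero-point overhead per quantization group; both figures are rough assumptions, not values taken from this card:

```python
# Back-of-the-envelope size comparison: fp16 weights vs 4-bit AWQ weights.
params = 7.24e9  # approximate Mistral-7B parameter count (assumption)

fp16_gb = params * 2 / 1e9  # 2 bytes per fp16 weight

def awq_gb(q_group_size, w_bit=4):
    weight_bytes = params * w_bit / 8
    # one scale + zero point per group, ~4 bytes/group in total (rough assumption)
    overhead_bytes = (params / q_group_size) * 4
    return (weight_bytes + overhead_bytes) / 1e9

print(f"fp16:            ~{fp16_gb:.1f} GB")
for g in (128, 64, 32):
    print(f"group size {g:>3}: ~{awq_gb(g):.1f} GB")
```

The estimates (~3.8–4.5 GB vs ~14.5 GB fp16) land close to the published branch sizes, i.e. roughly a 70% reduction, with smaller group sizes trading a little extra size for finer-grained scales.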
## MistralLite-AWQ Variants
| Branch | Approx. Model Size | `q_group_size` | `w_bit` | `version` |
|--------|---:|---------------:|--------:|-----------|
| [main](https://huggingface.co/amazon/MistralLite-AWQ/tree/main) | 3.9 GB | 128 | 4 | GEMM |
| [MistralLite-AWQ-64g-4b-GEMM](https://huggingface.co/amazon/MistralLite-AWQ/tree/MistralLite-AWQ-64g-4b-GEMM) | 4.0 GB | 64 | 4 | GEMM |
| [MistralLite-AWQ-32g-4b-GEMM](https://huggingface.co/amazon/MistralLite-AWQ/tree/MistralLite-AWQ-32g-4b-GEMM) | 4.3 GB | 32 | 4 | GEMM |
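The three branches differ only in their AWQ settings. Expressed as AutoAWQ-style `quant_config` dictionaries they would look roughly like the following (the `zero_point` flag and the exact `model.quantize(...)` call shape are assumptions based on AutoAWQ's documented defaults, not stated in this card):

```python
# AWQ settings for the three published branches (q_group_size / w_bit / version
# come from the variants table above; zero_point=True is an assumed default).
branch_configs = {
    "main":                        {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"},
    "MistralLite-AWQ-64g-4b-GEMM": {"zero_point": True, "q_group_size": 64,  "w_bit": 4, "version": "GEMM"},
    "MistralLite-AWQ-32g-4b-GEMM": {"zero_point": True, "q_group_size": 32,  "w_bit": 4, "version": "GEMM"},
}
# With AutoAWQ, a dict like this would be passed as
#   model.quantize(tokenizer, quant_config=branch_configs["main"])
for name, cfg in branch_configs.items():
    print(name, cfg)
```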
## Dependencies
- [`autoawq==0.2.5`](https://pypi.org/project/autoawq/0.2.5/) – [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) was used to quantize the MistralLite model.
- [`vllm==0.4.2`](https://pypi.org/project/vllm/0.4.2/) – [vLLM](https://github.com/vllm-project/vllm) was used to host models for benchmarking.
## Evaluations
### Long Context
The following benchmark results are shown as _accuracy_ (%) values, unless stated otherwise.
#### Topic Retrieval
See https://lmsys.org/blog/2023-06-29-longchat/
| Model Name | n_topics=05 | n_topics=10 | n_topics=15 | n_topics=20 | n_topics=25 |
|:---------------------------------------------------|--------------:|--------------:|--------------:|--------------:|--------------:|
| _n_tokens_ (approx.) = | _3048_ | _5966_ | _8903_ | _11832_ | _14757_ |
| MistralLite | 100 | 100 | 100 | 100 | 98 |
| **MistralLite-AWQ** | **100** | **100** | **100**| **100** | **98** |
| **MistralLite-AWQ-64g-4b-GEMM** | **100** | **100** | **100**| **100** | **98** |
| **MistralLite-AWQ-32g-4b-GEMM** | **100** | **100** | **100**| **100** | **98** |
| Mistral-7B-Instruct-v0.1 | 96 | 52 | 2 | 0 | 0 |
| Mistral-7B-Instruct-v0.2 | 100 | 100 | 100 | 100 | 100 |
| Mixtral-8x7B-v0.1 | 0 | 0 | 0 | 0 | 0 |
| Mixtral-8x7B-Instruct-v0.1 | 100 | 100 | 100 | 100 | 100 |
#### [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results)
See https://lmsys.org/blog/2023-06-29-longchat/#longeval-results
| Model Name | n_lines=200 | n_lines=300 | n_lines=400 | n_lines=500 | n_lines=600 | n_lines=680 |
|:----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:|
| _n_tokens_ (approx.) = | _4317_ | _6415_ | _8510_ | _10610_ | _12698_ | _14373_ |
| MistralLite | 100 | 94 | 86 | 82 | 76 | 66 |
| **MistralLite-AWQ** | **96**| **94**| **88** | **80** | **70**| **62** |
| **MistralLite-AWQ-64g-4b-GEMM** | **96**| **96**| **90** | **70** | **72**| **60** |
| **MistralLite-AWQ-32g-4b-GEMM** | **98**| **96**| **84** | **76** | **70**| **62** |
| Mistral-7B-Instruct-v0.1 | 96 | 56 | 38 | 36 | 30 | 30 |
| Mistral-7B-Instruct-v0.2 | 100 | 100 | 96 | 98 | 96 | 84 |
| Mixtral-8x7B-v0.1 | 54 | 38 | 56 | 66 | 62 | 38 |
| Mixtral-8x7B-Instruct-v0.1 | 100 | 100 | 100 | 100 | 100 | 100 |
#### Pass Key Retrieval
See https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101
| Model Name | n_garbage=12000 | n_garbage=20000 | n_garbage=31000 | n_garbage=38000 | n_garbage=45000 | n_garbage=60000 |
|:----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:|
| _n_tokens_ (approx.) = | _3272_ | _5405_ | _8338_ | _10205_ | _12071_ | _16072_ |
| MistralLite | 100 | 100 | 100 | 100 | 100 | 100|
| **MistralLite-AWQ** | **100** | **100**| **100**| **100** | **100**| **100**|
| **MistralLite-AWQ-64g-4b-GEMM** | **100** | **100**| **100**| **100** | **100**| **100**|
| **MistralLite-AWQ-32g-4b-GEMM** | **100** | **100**| **100**| **100** | **100**| **100**|
| Mistral-7B-Instruct-v0.1 | 100 | 50 | 30 | 20 | 10 | 10 |
| Mistral-7B-Instruct-v0.2 | 100 | 100 | 100 | 100 | 100 | 100 |
| Mixtral-8x7B-v0.1 | 100 | 100 | 100 | 100 | 100 | 100 |
| Mixtral-8x7B-Instruct-v0.1 | 100 | 100 | 100 | 90 | 100 | 100 |
#### QuALITY (Question Answering with Long Input Texts, Yes!)
See https://nyu-mll.github.io/quality/
|Model Name| Test set Accuracy | Hard subset Accuracy|
|:----------|-------------:|-------------:|
| MistralLite | 56.8 | 74.5 |
| **MistralLite-AWQ** | **55.3** | **71.8** |
| **MistralLite-AWQ-64g-4b-GEMM** | **55.2** | **72.9** |
| **MistralLite-AWQ-32g-4b-GEMM** | **56.6** | **72.8** |
| Mistral-7B-Instruct-v0.1 | 45.2 | 58.9 |
| Mistral-7B-Instruct-v0.2 | 55.5 | 74 |
| Mixtral-8x7B-v0.1 | 75 | 74.1 |
| Mixtral-8x7B-Instruct-v0.1 | 68.7 | 83.3 |
## Usage
### Inference via vLLM HTTP Host
#### Launch Host
```bash
python -m vllm.entrypoints.openai.api_server \
--model amazon/MistralLite-AWQ \
--quantization awq
```
#### Query Host
```bash
curl -X POST http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{ "model": "amazon/MistralLite-AWQ",
"prompt": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
"temperature": 0,
"echo": false
}'
```
### Inference via [vLLM Offline Inference](https://docs.vllm.ai/en/latest/getting_started/examples/offline_inference.html)
```python
from vllm import LLM, SamplingParams
prompts = [
"<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
]
sampling_params = SamplingParams(temperature=0, max_tokens=100)
llm = LLM(model="amazon/MistralLite-AWQ")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
## License
Apache 2.0
## Limitations
Before using the MistralLite-AWQ model, it is important to perform your own
independent assessment, and take measures to ensure that your use would comply
with your own specific quality control practices and standards, and that your
use would comply with the local rules, laws, regulations, licenses and terms
that apply to you, and your content.
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "inference": false}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,341 |
VMware/bert-tiny-mrqa
|
VMware
|
question-answering
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2023-02-17T20:52:30Z |
2023-06-21T21:59:31+00:00
| 24 | 0 |
---
datasets:
- mrqa
language:
- en
license: apache-2.0
metrics:
- exact_match
- f1
model-index:
- name: VMware/bert-tiny-mrqa
results:
- task:
type: Question-Answering
dataset:
name: mrqa
type: mrqa
metrics:
- type: exact_match
value: 22.78
name: Eval EM
- type: f1
value: 32.42
name: Eval F1
- type: exact_match
value: 10.18
name: Test EM
- type: f1
value: 18.72
name: Test F1
---
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab.
# Model Details
- **Model name:** BERT-Tiny-MRQA
- **Model type:** Extractive Question Answering
- **Parent Model:** [BERT-Tiny-uncased](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Training time:** 26:11 on 1 Nvidia V100 32GB GPU
- **Language:** English
- **Framework:** PyTorch
- **Model version:** 1.0
# Intended Use
This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding.
# How to Use
```python
from transformers import pipeline
question_answerer = pipeline("question-answering", model='VMware/bert-tiny-mrqa')
context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT."
question = "What is MRQA?"
result = question_answerer(question=question, context=context)
print(result)
# {
# 'score': 0.134057879447937,
# 'start': 76,
# 'end': 80,
# 'answer': '2019'
# }
```
Yes, you read that correctly ... this model thinks MRQA is "2019". Look at its eval and test scores. A coin toss is more likely to get you a decent answer, haha.
# Training Details
The model was trained for 1 epoch on the MRQA training set.
## Training Hyperparameters
```python
args = TrainingArguments(
"bert-tiny-mrqa",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=16,
)
```
# Evaluation Metrics
The model was evaluated using standard metrics for question-answering models, including:
- **Exact match (EM):** The percentage of questions for which the model produces an exact match with the ground truth answer.
- **F1 score:** A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
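To make the difference between the two metrics concrete, here is a simplified sentence-level implementation (the official SQuAD/MRQA script additionally strips punctuation and articles before comparing; this sketch only lowercases and whitespace-tokenizes, and the reference answer is hypothetical):

```python
from collections import Counter

def exact_match(prediction, truth):
    # 1 only when the normalized strings are identical
    return int(prediction.strip().lower() == truth.strip().lower())

def f1(prediction, truth):
    # token-level overlap F1 between prediction and reference
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference answer, for illustration only:
print(exact_match("2019", "MRQA 2019 shared task"))  # 0
print(f1("2019", "MRQA 2019 shared task"))           # 0.4
```

A partially overlapping answer thus earns zero EM but non-zero F1, which is why the F1 columns below are consistently higher than the EM columns.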
# Model Family Performance
| Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 |
| BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 |
| BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 |
| DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 |
| DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 |
| DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** |
| ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 |
| ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 |
| ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 |
| MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 |
| MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 |
| MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 |
| MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 |
| MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 |
| TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 |
| RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 |
| RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 |
\* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA.
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
| null |
Non_BioNLP
|
|
{"datasets": ["mrqa"], "language": ["en"], "license": "apache-2.0", "metrics": ["exact_match", "f1"], "model-index": [{"name": "VMware/bert-tiny-mrqa", "results": [{"task": {"type": "Question-Answering"}, "dataset": {"name": "mrqa", "type": "mrqa"}, "metrics": [{"type": "exact_match", "value": 22.78, "name": "Eval EM"}, {"type": "f1", "value": 32.42, "name": "Eval F1"}, {"type": "exact_match", "value": 10.18, "name": "Test EM"}, {"type": "f1", "value": 18.72, "name": "Test F1"}]}]}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,342 |
aroot/eng-mya-simcse_longest_ssblu
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-07T04:43:12Z |
2023-07-07T05:04:37+00:00
| 8 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-mya-simcse_longest_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8443
- Bleu: 4.2092
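For context on the BLEU figure: BLEU combines modified n-gram precisions (n = 1..4) with a brevity penalty. A simplified sentence-level sketch follows (real evaluations use corpus-level BLEU with proper tokenization and smoothing, e.g. via sacrebleu; the smoothing constant below is an ad-hoc assumption):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ng, ref_ng = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ng & ref_ng).values())
        total = max(sum(cand_ng.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # ad-hoc smoothing (assumption)
    # brevity penalty: penalize candidates shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```

A perfect match scores 100; a score of ~4.2, as reported above, indicates very low n-gram overlap with the references.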
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-mya-simcse_longest_ssblu", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 44,343 |
google/paligemma-3b-ft-coco35l-448
|
google
|
image-text-to-text
|
[
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"arxiv:2205.12522",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"arxiv:2407.07726",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-12T23:55:10Z |
2024-07-19T12:10:02+00:00
| 9 | 0 |
---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 448×448 input images on the <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/coco35l.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-coco35l-448)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short-video captioning, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
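For illustration, detection transfers emit bounding boxes as `<locNNNN>` tokens in the generated text. A minimal parser might look like the following — a sketch that assumes the `big_vision` convention of four tokens per box in the order y_min, x_min, y_max, x_max, each binned into 1024 steps over the normalized image; verify this against your checkpoint before relying on it:

```python
import re

# Matches four consecutive "<locNNNN>" tokens followed by a class label.
_BOX = re.compile(r"((?:<loc\d{4}>){4})\s*([^<;]+)")

def parse_detection(output: str, width: int, height: int):
    """Return a list of (label, (x_min, y_min, x_max, y_max)) in pixels.

    Illustrative sketch assuming the 1024-bin big_vision convention.
    """
    boxes = []
    for locs, label in _BOX.findall(output):
        y0, x0, y1, x1 = (int(v) / 1024 for v in re.findall(r"<loc(\d{4})>", locs))
        boxes.append((label.strip(), (x0 * width, y0 * height, x1 * width, y1 * height)))
    return boxes
```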
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision-language model not meant for conversational use,
and it works best when fine-tuned to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
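The task-prefix convention can be sketched as a small prompt builder — illustrative only, based on the conventions in the `big_vision` configs; the exact set of prefixes a given checkpoint accepts depends on its fine-tuning mix:

```python
# Illustrative prompt construction for PaliGemma task prefixes (a sketch).
def make_prompt(task: str, arg: str = "", lang: str = "en") -> str:
    if task == "caption":
        return f"caption {lang}"            # e.g. "caption es"
    if task == "answer":
        return f"answer {lang} {arg}"       # visual question answering
    if task in ("detect", "segment"):
        return f"{task} {arg}"              # e.g. "detect car"
    raise ValueError(f"unknown task: {task}")
```

For example, `make_prompt("detect", "car")` yields `"detect car"`, which would take the place of the `prompt` string in the snippets below.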
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in this repo you are now browsing may have been trained for other tasks; please
make sure you use appropriate inputs for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an nvidia CUDA card.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
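For 4-bit loading, only the quantization config changes; the rest of the pipeline above stays the same. A sketch (the quant type and compute dtype below are assumptions — adjust them to your hardware):

```python
from transformers import BitsAndBytesConfig
import torch

# 4-bit quantization config; pass as quantization_config= to from_pretrained.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption: NF4 is a common choice
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: adjust to your GPU
)
```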
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally, we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
that.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
## Citation
```bibtex
@article{beyer2024paligemma,
title={{PaliGemma: A versatile 3B VLM for transfer}},
author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*},
year={2024},
journal={arXiv preprint arXiv:2407.07726}
}
```
Find the paper [here](https://arxiv.org/abs/2407.07726).
| null |
Non_BioNLP
|
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/coco35l.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-coco35l-448)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision language model not meant for conversational use,
and it works best when fine-tuning to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
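As a concrete illustration of prefix conditioning, the strings below show the kind of task prefixes this card describes ("caption", "detect", "segment", or a plain question). The small helper is purely illustrative, not part of the Transformers API; each string would be passed as the `text` argument to the processor.

```python
# Illustrative PaliGemma task-prefix prompts (strings only; the model call is
# shown in the Transformers snippets). The helper is a sketch, not a library API.

def build_prompt(task: str, argument: str = "") -> str:
    """Compose a single-turn PaliGemma prompt from a task prefix."""
    return f"{task} {argument}".strip()

prompts = [
    build_prompt("caption", "en"),            # English captioning
    build_prompt("caption", "es"),            # Spanish captioning
    build_prompt("detect", "car"),            # object detection for "car"
    build_prompt("segment", "car"),           # segmentation mask for "car"
    build_prompt("What color is the car?"),   # visual question answering
]
print(prompts)
```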
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in the repo you are now browsing may have been trained for other tasks; please
make sure you use inputs appropriate for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an NVIDIA CUDA GPU.
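A back-of-the-envelope calculation (illustrative numbers for a ~3B-parameter model) shows why the half-precision revisions roughly halve the download:

```python
# Rough weight-storage estimate: bytes per parameter times parameter count.
params = 3e9                  # ~3B parameters (illustrative)
bytes_fp32 = params * 4       # float32: 4 bytes per parameter
bytes_bf16 = params * 2       # bfloat16/float16: 2 bytes per parameter

gib = 1024 ** 3
print(f"float32: ~{bytes_fp32 / gib:.1f} GiB, bfloat16: ~{bytes_bf16 / gib:.1f} GiB")
```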
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally, we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
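The table below reports those maximum and median values. The snippet sketches the aggregation with made-up per-subgroup rates; only the max/median reduction reflects the procedure described above.

```python
from statistics import median

# Hypothetical flagged-caption rates (in %) per perceived-age subgroup;
# the numbers are made up purely to illustrate the max/median aggregation.
rate_by_age_group = {"0-19": 0.00, "20-39": 0.04, "40-59": 0.09, "60+": 0.00}

rates = list(rate_by_age_group.values())
print(f"max={max(rates):.2f}%  median={median(rates):.2f}%")
```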
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny: the input data pre-processing is described, and posterior evaluations are reported, in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and to explore de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
those tasks.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
## Citation
```bibtex
@article{beyer2024paligemma,
title={{PaliGemma: A versatile 3B VLM for transfer}},
author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*},
year={2024},
journal={arXiv preprint arXiv:2407.07726}
}
```
Find the paper [here](https://arxiv.org/abs/2407.07726).
|
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "image-text-to-text", "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 44,345 |
ThuyNT03/PhoBERT-Final_Mixed-aug_backtranslation-2
|
ThuyNT03
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-07T07:52:42Z |
2023-09-07T09:38:20+00:00
| 8 | 0 |
---
base_model: vinai/phobert-base-v2
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: PhoBERT-Final_Mixed-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_backtranslation-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0525
- Accuracy: 0.69
- F1: 0.6891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
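As a sketch of what the `linear` scheduler above does (assuming no warmup, since none is listed), the learning rate decays from its initial value to zero over the total number of optimizer steps — 696 here, matching 87 steps per epoch over 8 epochs in the results table:

```python
# Minimal sketch of a no-warmup linear schedule: lr decays from base_lr to 0
# over total_steps (696 = 87 steps/epoch x 8 epochs, per the results table).

def linear_lr(step: int, base_lr: float = 2e-05, total_steps: int = 696) -> float:
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0), linear_lr(348), linear_lr(696))
```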
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9186 | 1.0 | 87 | 0.7637 | 0.72 | 0.7176 |
| 0.6008 | 2.0 | 174 | 0.6915 | 0.69 | 0.6893 |
| 0.436 | 3.0 | 261 | 0.7517 | 0.73 | 0.7310 |
| 0.3092 | 4.0 | 348 | 0.8925 | 0.7 | 0.6927 |
| 0.1923 | 5.0 | 435 | 0.9679 | 0.68 | 0.6767 |
| 0.1371 | 6.0 | 522 | 1.0023 | 0.71 | 0.7091 |
| 0.1003 | 7.0 | 609 | 1.0508 | 0.68 | 0.6778 |
| 0.0796 | 8.0 | 696 | 1.0525 | 0.69 | 0.6891 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
|
{"base_model": "vinai/phobert-base-v2", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "PhoBERT-Final_Mixed-aug_backtranslation-2", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 44,346 |
rizerphe/CodeLlama-function-calling-6320-7b-Instruct-GGUF
|
rizerphe
| null |
[
"dataset:rizerphe/glaive-function-calling-v2-llama",
"dataset:rizerphe/sharegpt-hyperfiltered-3k-llama",
"dataset:totally-not-an-llm/sharegpt-hyperfiltered-3k",
"dataset:glaiveai/glaive-function-calling-v2",
"license:llama2",
"region:us"
] | 2023-09-06T03:36:07Z |
2023-09-06T12:09:50+00:00
| 0 | 7 |
---
datasets:
- rizerphe/glaive-function-calling-v2-llama
- rizerphe/sharegpt-hyperfiltered-3k-llama
- totally-not-an-llm/sharegpt-hyperfiltered-3k
- glaiveai/glaive-function-calling-v2
license: llama2
---
# CodeLlama-7b Instruct finetuned on 6320 function calling and generic chat examples - GGUF
CodeLlama-7b Instruct, fine-tuned with LoRA on a small fraction of the [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) dataset and a formatted (and slightly cleaned) version of [sharegpt-hyperfiltered-3k](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k), in GGUF format.
[Original model](https://huggingface.co/rizerphe/CodeLlama-function-calling-6320-7b-Instruct-hf)
## Examples
Prompt example:
```
[INST] <<SYS>>
<function>Available functions:
<function>{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}
<</SYS>>
I need a new password. Can you generate one for me? [/INST]
```
The model then generates (note the leading space):
```
Of course! How long would you like your password to be? And would you like it to include numbers and special characters?
```
If you extend the prompt to be:
```
[INST] <<SYS>>
<function>Available functions:
<function>{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}
<</SYS>>
I need a new password. Can you generate one for me? [/INST] Of course! How long would you like your password to be? And would you like it to include numbers and special characters?</s><s>[INST] I'd like it to be 12 characters long. [/INST]
```
The model will generate (without the leading space):
```
<function>generate_password
{
"length": 12
}
```
It can also answer questions based on a prompt without any functions:
```
[INST] In one sentence, what is a large language model? [/INST] A large language model is a type of artificial intelligence model that is trained on vast amounts of text data to generate human-like language and understand natural language input. [/INST] That's correct! Large language models are trained on large datasets of text to generate human-like language and understand natural language input. They are often used in applications such as chatbots, language translation, and text summarization.
```
A quick manual test shows that it's still able to follow a system prompt provided alongside any functions, including in multi-turn conversations. None of this was tested comprehensively, though; your results may vary.
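The prompt format above can be assembled programmatically. The sketch below mirrors the convention shown in the examples — function schemas inside the `<<SYS>>` block, each prefixed with `<function>` — but the helper itself is illustrative and not part of any released tooling for this model.

```python
import json

def build_prompt(functions: list, user_message: str) -> str:
    """Assemble a single-turn prompt in this model's function-calling format."""
    schemas = "\n".join("<function>" + json.dumps(f, indent=4) for f in functions)
    system = f"<function>Available functions:\n{schemas}"
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

generate_password = {
    "name": "generate_password",
    "description": "Generate a random password with specified criteria",
    "parameters": {
        "type": "object",
        "properties": {"length": {"type": "integer", "description": "The length of the password"}},
        "required": ["length"],
    },
}

prompt = build_prompt([generate_password], "I need a new password. Can you generate one for me?")
print(prompt)
```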
| null |
Non_BioNLP
|
|
{"datasets": ["rizerphe/glaive-function-calling-v2-llama", "rizerphe/sharegpt-hyperfiltered-3k-llama", "totally-not-an-llm/sharegpt-hyperfiltered-3k", "glaiveai/glaive-function-calling-v2"], "license": "llama2"}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 44,347 |
scaaseu/distilbert-emotion
|
scaaseu
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-21T13:14:08Z |
2024-06-21T13:18:04+00:00
| 7 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9335
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
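With `lr_scheduler_type: linear`, the learning rate decays linearly from 5e-05 to 0 over the 500 optimizer steps of the run (250 steps per epoch × 2 epochs, per the results table). A minimal sketch of that schedule — assuming zero warmup steps, which is the Trainer default and is not overridden above:

```python
def linear_lr(step: int, base_lr: float = 5e-05, total_steps: int = 500, warmup: int = 0) -> float:
    """Learning rate at a given optimizer step under a linear schedule.

    Warms up linearly for `warmup` steps, then decays linearly to 0 at
    `total_steps`. With warmup=0 this is a straight decay from base_lr.
    """
    if step < warmup:
        return base_lr * step / max(1, warmup)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup)

print(linear_lr(0))    # 5e-05 at the start of training
print(linear_lr(250))  # 2.5e-05 halfway (end of epoch 1)
print(linear_lr(500))  # 0.0 at the end of epoch 2
```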
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1969 | 0.928 |
| 0.3408 | 2.0 | 500 | 0.1518 | 0.9335 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1969 | 0.928 |
| 0.3408 | 2.0 | 500 | 0.1518 | 0.9335 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9335, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,348 |
RichardErkhov/AI-Sweden-Models_-_gpt-sw3-356m-4bits
|
RichardErkhov
|
text-generation
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-07-20T11:10:07Z |
2024-07-20T11:11:19+00:00
| 78 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt-sw3-356m - bnb 4bits
- Model creator: https://huggingface.co/AI-Sweden-Models/
- Original model: https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/
Original model description:
---
license: other
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
Since this is a private repository, you have to log in with your access token to access the model from Python. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
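The MinHash-based deduplication described above can be illustrated with a minimal sketch: each document is reduced to a set of word shingles, each shingle set is summarized by per-seed minimum hashes, and documents whose signatures mostly agree are flagged as near-duplicates. This is an illustrative toy — the hash family, shingle size, and signature length here are arbitrary choices, not the pipeline actually used for GPT-SW3:

```python
import hashlib

def shingles(text: str, n: int = 3) -> set:
    """Word n-gram shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

def minhash_signature(doc: str, num_perm: int = 64) -> list:
    """One minimum hash per seeded hash function over the document's shingles."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(doc)
        ))
    return sig

def estimated_jaccard(a: str, b: str, num_perm: int = 64) -> float:
    """Fraction of agreeing signature positions ≈ Jaccard similarity."""
    sa, sb = minhash_signature(a, num_perm), minhash_signature(b, num_perm)
    return sum(x == y for x, y in zip(sa, sb)) / num_perm

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy dog near the river"
doc3 = "an entirely different sentence about language model training data"
# Near-duplicates agree on most positions; unrelated documents on almost none.
assert estimated_jaccard(doc1, doc2) > estimated_jaccard(doc1, doc3)
```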
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt-sw3-356m - bnb 4bits
- Model creator: https://huggingface.co/AI-Sweden-Models/
- Original model: https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/
Original model description:
---
license: other
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
Since this is a private repository, you have to log in with your access token to be able to access the model from Python. This can be done with `huggingface-cli login`; see the [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
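The `do_sample`, `temperature`, and `top_p` arguments used above control how the next token is drawn from the model's output distribution. A minimal, self-contained sketch of temperature scaling followed by nucleus (top-p) filtering on a single logits vector — an illustration of the idea, not the `transformers` internals:

```python
import numpy as np

def sample_next_token(logits, temperature=0.6, top_p=1.0, rng=None):
    """Temperature-scale the logits, keep the smallest set of tokens whose
    cumulative probability reaches top_p, renormalize, and sample."""
    rng = rng if rng is not None else np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                      # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # nucleus size
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept))

# With temperature 0.6 and top_p 0.9, only the two most likely tokens survive.
logits = [2.0, 1.0, 0.5, -1.0]
token = sample_next_token(logits, temperature=0.6, top_p=0.9)
```

Lower temperatures sharpen the distribution toward the most likely tokens; `top_p=1` disables nucleus filtering entirely, which matches the settings in the snippet above.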
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may:
  - Overrepresent some viewpoints and underrepresent others.
  - Contain stereotypes.
  - Generate hateful, abusive, or violent language; discriminatory or prejudicial language; and content that may not be appropriate for all settings, including sexual content.
  - Make errors, including producing incorrect information as if it were factual.
  - Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
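The proportional sampling described above (each subset drawn according to a manually decided weight and its size) can be sketched as follows; the subset names, sizes, and weights here are invented for illustration and are not the actual GPT-SW3 mixture:

```python
import random

# Hypothetical subsets: (name, size in tokens, manually chosen weight).
subsets = [("web", 200e9, 0.5), ("books", 30e9, 1.5), ("code", 40e9, 1.0)]

def sampling_probs(subsets):
    """Probability of drawing the next training document from each subset,
    proportional to weight * size."""
    mass = {name: weight * size for name, size, weight in subsets}
    total = sum(mass.values())
    return {name: m / total for name, m in mass.items()}

probs = sampling_probs(subsets)

# Draw a few documents' subsets according to the mixture.
random.seed(0)
draws = random.choices(list(probs), weights=list(probs.values()), k=5)
```

Up-weighting a small, high-quality subset (here "books") lets it contribute more to the mixture than its raw size alone would allow.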
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
- Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
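The MinHash deduplication referenced above estimates the Jaccard similarity between documents from fixed-size signatures, so near-duplicates can be found without comparing full texts. A pure-Python sketch of the idea (a toy illustration, not the production pipeline, which additionally uses locality-sensitive hashing to avoid all-pairs comparison):

```python
import hashlib

def shingles(text: str, n: int = 3) -> set:
    """Character n-grams used as the document's feature set."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(features: set, num_perm: int = 64) -> list:
    """For each of num_perm seeded hash functions, keep the minimum
    hash value over the feature set."""
    return [
        min(int.from_bytes(hashlib.sha1(f"{seed}:{f}".encode()).digest()[:8], "big")
            for f in features)
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature(shingles("the quick brown fox jumps over the lazy dog"))
b = minhash_signature(shingles("the quick brown fox jumped over the lazy dog"))
c = minhash_signature(shingles("completely unrelated sentence about databases"))
```

Near-duplicate pairs like `a`/`b` score high while unrelated pairs like `a`/`c` score near zero, so documents above a similarity threshold can be dropped.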
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
|
{}
|
task
|
[
"SUMMARIZATION"
] | 44,350 |
Omerhan/checkpoint-240-aixr
|
Omerhan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:19931",
"loss:MultipleNegativesRankingLoss",
"tr",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-04T12:22:56Z |
2025-02-04T12:24:05+00:00
| 5 | 0 |
---
base_model: intfloat/multilingual-e5-large-instruct
language:
- tr
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:19931
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: En yakın restoranı bulun.
sentences:
- '"Hangi parkurlarda yürüyüş yapmayı tercih ediyorsun?"'
- Brooklyn, NY
- 'En yakın restoranı bulmak için aşağıdaki yöntemleri kullanabilirsiniz:
1. **Harita Uygulamaları**: Google Maps veya Apple Maps gibi harita uygulamaları,
yakınınızdaki restoranları listeleyebilir. Bu uygulamalarda "restoran" araması
yaparak konumunuza en yakın seçenekleri görüntüleyebilirsiniz.
2. **Yelp veya TripAdvisor**: Bu uygulamalar, kullanıcı yorumları ve puanlamalarına
göre restoranları sıralayarak en iyi seçenekleri bulmanızı sağlar.
3. **OpenTable**: Bu uygulama, özellikle restoran rezervasyonu yapmak isterseniz
faydalı olabilir.
Brooklyn, NY''de birçok restoran seçeneği bulunmaktadır, bu yüzden belirli bir
tür mutfak veya yemek arıyorsanız aramanızı daraltmak iyi bir strateji olabilir.'
- source_sentence: Verilen cümlede tek tırnak işaretlerini (') çift tırnak işaretleriyle
(") değiştirin.
sentences:
- "Verilen doğrusal sistemi çözmek için iki denklem arasında eliminasyon veya yerine\
\ koyma yöntemlerinden birini kullanabiliriz. Burada yerine koyma yöntemini kullanarak\
\ çözelim.\n\n1. İlk denklemden y'yi yalnız bırakın:\n\n\\[ 2x + y = 5 \\] \n\n\
Buradan, \\[ y = 5 - 2x \\] olur.\n\n2. Bulduğumuz \\( y = 5 - 2x \\) ifadesini\
\ ikinci denklemde yerine koyun:\n\n\\[ -x + 3y = 4 \\]\n\n\\[ -x + 3(5 - 2x)\
\ = 4 \\]\n\n3. Dağıtımı yapalım:\n\n\\[ -x + 15 - 6x = 4 \\]\n\n4. Benzer terimleri\
\ toplayalım:\n\n\\[ -7x + 15 = 4 \\]\n\n5. Her iki taraftan 15 çıkarın:\n\n\\\
[ -7x = 4 - 15 \\]\n\n\\[ -7x = -11 \\]\n\n6. Her iki tarafı da -7'ye bölün:\n\
\n\\[ x = \\frac{-11}{-7} \\]\n\n\\[ x = \\frac{11}{7} \\]\n\n7. Bulduğumuz \\\
( x \\) değerini \\( y = 5 - 2x \\) ifadesinde yerine koyun:\n\n\\[ y = 5 - 2\\\
left(\\frac{11}{7}\\right) \\]\n\n\\[ y = 5 - \\frac{22}{7} \\]\n\n\\[ y = \\\
frac{35}{7} - \\frac{22}{7} \\]\n\n\\[ y = \\frac{13}{7} \\]\n\nBu durumda doğrusal\
\ sistemin çözümü:\n\n\\[ x = \\frac{11}{7} \\] ve \\[ y = \\frac{13}{7} \\] olur."
- ABD'de, KIPP bir "kamu-özel ortaklığıdır".
- ABD'de, KIPP bir 'kamu-özel ortaklığıdır'.
- source_sentence: 'Bir saatin fiyatı 50 dolardır.
(50 dolarlık saat)'
sentences:
- Etkinliğe katıldı.
- Ev sahibi takımın zaferi
- 'Sanırım iki farklı konudan bahsediyorsunuz: biri bir saat fiyatı, diğeri ise
ev sahibi takımın zaferi. Bu konulardan hangisi hakkında daha fazla bilgi almak
istersiniz? Ya da her iki konuyu kapsayan belirli bir sorunuz var mı?'
- source_sentence: Yakındaki bir restoran bulun, <insert cuisine> mutfağı sunan.
sentences:
- 'Kuadratik denklemi çözmek için diskriminant yöntemini kullanabiliriz. Genel formdaki
bir kuadratik denklem \( ax^2 + bx + c = 0 \) şeklindedir. Burada, \( a = 3 \),
\( b = 7 \) ve \( c = -4 \) olarak verilmiştir.
Diskriminant (\( \Delta \)), \( b^2 - 4ac \) formülü ile hesaplanır:
\[
\Delta = 7^2 - 4 \cdot 3 \cdot (-4) = 49 + 48 = 97
\]
Diskriminant pozitif olduğundan, iki gerçek ve farklı kök vardır. Kuadratik formül
kullanılarak kökler bulunabilir:
\[
x = \frac{-b \pm \sqrt{\Delta}}{2a}
\]
Bu formüle bağlı olarak kökleri bulalım:
\[
x_1 = \frac{-7 + \sqrt{97}}{6}
\]
\[
x_2 = \frac{-7 - \sqrt{97}}{6}
\]
Bu, orijinal denklemin iki çözümünü verir:
\[
x_1 = \frac{-7 + \sqrt{97}}{6} \quad \text{ve} \quad x_2 = \frac{-7 - \sqrt{97}}{6}
\]'
- 'Üzgünüm, yakındaki restoranları veya mekanları aramak gibi anlık coğrafi konum
verilerine erişimim yok. Ancak, İtalyan mutfağından bir yemek tarifi verebilirim!
İşte klasik bir İtalyan tarifi: Spaghetti Aglio e Olio.
### Spaghetti Aglio e Olio Tarifi
#### Malzemeler:
- 400 gram spagetti
- 4-5 diş sarımsak
- 1/2 çay bardağı zeytinyağı
- 1/2 çay kaşığı kırmızı pul biber (isteğe bağlı)
- Tuz
- Karabiber
- Yarım demet taze maydanoz
- Üzeri için rendelenmiş Parmesan peyniri (isteğe bağlı)
#### Yapılışı:
1. **Spagettiyi Haşlayın**: Büyük bir tencerede suyu kaynatın. Kaynayan suya yeterince
tuz ekleyin ve spagettiyi pakette belirtilen süreye göre al dente (dişe gelir)
şekilde haşlayın. Haşlandıktan sonra makarnanın suyundan bir bardak ayırın ve
makarnayı süzün.
2. **Sarımsak ve Zeytinyağını Hazırlayın**: Sarımsakları ince ince dilimleyin.
Geniş bir tavada, düşük ateşte zeytinyağını ısıtın. Sarımsakları ekleyin ve altın
rengi alana kadar yavaşça soteleyin. Eğer biraz acı seviyorsanız kırmızı pul biberi
de bu aşamada ekleyebilirsiniz.
3. **Makarnayı Karıştırın**: Haşlanmış ve süzülmüş makarnayı tavadaki sarımsaklı
zeytinyağının içine ekleyin. Ayırdığınız makarna suyundan da azar azar ekleyerek
karıştırın. Tuz ve karabiber ile tatlandırın.
4. **Servis Edin**: İnce kıyılmış taze maydanozu makarnanın üzerine serpin ve
karıştırın. İsterseniz üzerine rendelenmiş Parmesan peyniri de ekleyerek servis
yapın.
Afiyet olsun! Bu basit ama lezzetli İtalyan yemeği, hızlı bir akşam yemeği için
harika bir seçenektir.'
- <ekle yemek tarifi> - İtalyan
- source_sentence: Bu cümleyi etken bir yapı kullanarak yeniden yazın.
sentences:
- Kaptan tekneyi çarptı.
- "Verilen listeyi sıralamak için basit bir sıralama algoritmasının nasıl çalıştığını\
\ açıklayayım. Örnek olarak \"Seçmeli Sıralama\" (Selection Sort) algoritmasını\
\ kullanabiliriz. Bu algoritma, her adımda dizinin geri kalan kısmından en küçük\
\ (veya en büyük) öğeyi seçip, bunu sıralanmamış bölümün başına yerleştirir. İşte\
\ adım adım nasıl çalıştığı:\n\n1. **Başla**: İlk öğeden başlayarak liste üzerinde\
\ ilerle.\n\n2. **En küçük elemanı bul**: \n - Mevcut konumdan başlayarak geri\
\ kalan elemanlar arasında en küçük değeri bul.\n - Listenin ilk konumunda (0\
\ indeksli) başla ve bu elemanı hafızada \"şimdilik en küçük\" olarak tut.\n\n\
3. **Yer değiştir**:\n - Geri kalan elemanlar arasında bulunmuş olan en küçük\
\ değeri, mevcut konumda olan elemanla yer değiştir.\n\n4. **Sonraki pozisyona\
\ geç**:\n - Bir konum ilerle ve 2. adımdan itibaren devam et.\n\n5. **Bittiğinde**:\
\ Liste baştan sona kontrol edildiğinde ve tüm elemanlar sırasıyla yerleştirildiğinde,\
\ işlem tamamlanmıştır.\n\nVerilen örnek [4, 1, 6, 5, 2] listesi üzerinden bunu\
\ adım adım uygulayalım:\n\n- Başlangıç listesi: [4, 1, 6, 5, 2]\n\n**1. Adım:**\n\
- En küçük eleman 1, onu ilk pozisyondaki 4 ile değiştir.\n- Liste: [1, 4, 6,\
\ 5, 2]\n\n**2. Adım:**\n- Kalan liste [4, 6, 5, 2] içinde en küçük eleman 2,\
\ bunu 4 ile değiştir.\n- Liste: [1, 2, 6, 5, 4]\n\n**3. Adım:**\n- Kalan liste\
\ [6, 5, 4] içinde en küçük eleman 4, bunu 6 ile değiştir.\n- Liste: [1, 2, 4,\
\ 5, 6]\n\n**4. Adım:**\n- Kalan liste [5, 6] içinde en küçük eleman 5, bu zaten\
\ yerinde.\n- Liste: [1, 2, 4, 5, 6]\n\n**5. Adım:**\n- Tek kalan eleman 6, o\
\ da yerinde. Sıralama tamamlandı.\n\nBu algoritmanın zaman karmaşıklığı \\(O(n^2)\\\
) olduğundan, daha büyük listelerde daha verimli bir algoritma kullanmak isteyebilirsiniz,\
\ örneğin Hızlı Sıralama (Quick Sort) veya Birleştirme Sıralaması (Merge Sort).\
\ Ancak, küçük listeler için Seçmeli Sıralama anlaşılır ve yeterlidir."
- Tekne kaptan tarafından çarpıldı.
---
# intfloat-fine-tuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
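Because the final `Normalize()` module L2-normalizes each embedding, cosine similarity between two outputs reduces to a plain dot product. A small numeric sketch with made-up 2-dimensional vectors standing in for the model's 1024-dimensional embeddings:

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length, as the Normalize() module does."""
    return v / np.linalg.norm(v)

# Toy "embeddings" in place of real model outputs.
u = l2_normalize(np.array([3.0, 4.0]))
w = l2_normalize(np.array([4.0, 3.0]))

dot = float(u @ w)                                     # dot product of unit vectors
cosine = float(np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w)))
```

This is why the model card specifies cosine as the similarity function: with normalized outputs it can be computed as a cheap matrix product over many embeddings at once.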
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omerhan/checkpoint-240-aixr")
# Run inference
sentences = [
'Bu cümleyi etken bir yapı kullanarak yeniden yazın.',
'Kaptan tekneyi çarptı.',
'Tekne kaptan tarafından çarpıldı.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 19,931 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.25 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 128.76 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 16.82 tokens</li><li>max: 185 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------|
| <code>Tek farklı olanı belirleyin.</code> | <code>Tek farklı olan Telegram'dır. Bunun sebebi, Twitter ve Instagram'ın ana odaklarının sosyal medya platformları olması ve daha çok kullanıcıların içerik paylaşımına odaklanması iken, Telegram'ın öncelikli olarak bir mesajlaşma uygulaması olmasıdır. Telegram, kullanıcılar arasında mesajlaşma, medya dosyaları paylaşma ve grup sohbetleri oluşturma gibi özelliklerle öne çıkar.</code> | <code>Twitter, Instagram, Telegram</code> |
| <code>Aşağıdaki cümleyi etken çatı kullanarak yeniden yazın.</code> | <code>Kaptan haber raporunu okudu.</code> | <code>Haber raporu kaptan tarafından okundu.</code> |
| <code>Bu cümleyi yazım ve dilbilgisi hataları için değerlendirin.</code> | <code>Bu cümlede yazım ve dilbilgisi açısından herhangi bir hata bulunmamaktadır. Cümle doğru bir şekilde yazılmıştır.</code> | <code>O yemeğini bitirdi ve restorandan ayrıldı.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
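Conceptually, this loss scores each anchor against every positive in the batch (the non-matching positives act as in-batch negatives), scales the cosine similarities by the configured `scale` of 20.0, and applies cross-entropy with the matching pair on the diagonal. A simplified NumPy sketch of that scoring — not the library implementation:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss: scaled cos_sim + in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)  # cos_sim scaled by 20.0, as in the config above
    # Cross-entropy with target j == i: anchor i should rank positive i highest.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

aligned = mnr_loss(np.eye(4), np.eye(4))                      # every anchor matches its positive
shuffled = mnr_loss(np.eye(4), np.roll(np.eye(4), 1, axis=0))  # pairings broken
assert aligned < shuffled  # correct pairings yield a lower loss
```

Larger batches therefore supply more (and harder) negatives per anchor, which is one reason gradient accumulation is used in the hyperparameters below.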
### Training Hyperparameters
#### Non-Default Hyperparameters
- `gradient_accumulation_steps`: 8
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.01
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "intfloat/multilingual-e5-large-instruct", "language": ["tr"], "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:19931", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "En yakın restoranı bulun.", "sentences": ["\"Hangi parkurlarda yürüyüş yapmayı tercih ediyorsun?\"", "Brooklyn, NY", "En yakın restoranı bulmak için aşağıdaki yöntemleri kullanabilirsiniz:\n\n1. **Harita Uygulamaları**: Google Maps veya Apple Maps gibi harita uygulamaları, yakınınızdaki restoranları listeleyebilir. Bu uygulamalarda \"restoran\" araması yaparak konumunuza en yakın seçenekleri görüntüleyebilirsiniz.\n\n2. **Yelp veya TripAdvisor**: Bu uygulamalar, kullanıcı yorumları ve puanlamalarına göre restoranları sıralayarak en iyi seçenekleri bulmanızı sağlar.\n\n3. **OpenTable**: Bu uygulama, özellikle restoran rezervasyonu yapmak isterseniz faydalı olabilir.\n\nBrooklyn, NY'de birçok restoran seçeneği bulunmaktadır, bu yüzden belirli bir tür mutfak veya yemek arıyorsanız aramanızı daraltmak iyi bir strateji olabilir."]}, {"source_sentence": "Verilen cümlede tek tırnak işaretlerini (') çift tırnak işaretleriyle (\") değiştirin.", "sentences": ["Verilen doğrusal sistemi çözmek için iki denklem arasında eliminasyon veya yerine koyma yöntemlerinden birini kullanabiliriz. Burada yerine koyma yöntemini kullanarak çözelim.\n\n1. İlk denklemden y'yi yalnız bırakın:\n\n\\[ 2x + y = 5 \\] \n\nBuradan, \\[ y = 5 - 2x \\] olur.\n\n2. Bulduğumuz \\( y = 5 - 2x \\) ifadesini ikinci denklemde yerine koyun:\n\n\\[ -x + 3y = 4 \\]\n\n\\[ -x + 3(5 - 2x) = 4 \\]\n\n3. Dağıtımı yapalım:\n\n\\[ -x + 15 - 6x = 4 \\]\n\n4. Benzer terimleri toplayalım:\n\n\\[ -7x + 15 = 4 \\]\n\n5. Her iki taraftan 15 çıkarın:\n\n\\[ -7x = 4 - 15 \\]\n\n\\[ -7x = -11 \\]\n\n6. 
Her iki tarafı da -7'ye bölün:\n\n\\[ x = \\frac{-11}{-7} \\]\n\n\\[ x = \\frac{11}{7} \\]\n\n7. Bulduğumuz \\( x \\) değerini \\( y = 5 - 2x \\) ifadesinde yerine koyun:\n\n\\[ y = 5 - 2\\left(\\frac{11}{7}\\right) \\]\n\n\\[ y = 5 - \\frac{22}{7} \\]\n\n\\[ y = \\frac{35}{7} - \\frac{22}{7} \\]\n\n\\[ y = \\frac{13}{7} \\]\n\nBu durumda doğrusal sistemin çözümü:\n\n\\[ x = \\frac{11}{7} \\] ve \\[ y = \\frac{13}{7} \\] olur.", "ABD'de, KIPP bir \"kamu-özel ortaklığıdır\".", "ABD'de, KIPP bir 'kamu-özel ortaklığıdır'."]}, {"source_sentence": "Bir saatin fiyatı 50 dolardır.\n(50 dolarlık saat)", "sentences": ["Etkinliğe katıldı.", "Ev sahibi takımın zaferi", "Sanırım iki farklı konudan bahsediyorsunuz: biri bir saat fiyatı, diğeri ise ev sahibi takımın zaferi. Bu konulardan hangisi hakkında daha fazla bilgi almak istersiniz? Ya da her iki konuyu kapsayan belirli bir sorunuz var mı?"]}, {"source_sentence": "Yakındaki bir restoran bulun, <insert cuisine> mutfağı sunan.", "sentences": ["Kuadratik denklemi çözmek için diskriminant yöntemini kullanabiliriz. Genel formdaki bir kuadratik denklem \\( ax^2 + bx + c = 0 \\) şeklindedir. Burada, \\( a = 3 \\), \\( b = 7 \\) ve \\( c = -4 \\) olarak verilmiştir.\n\nDiskriminant (\\( \\Delta \\)), \\( b^2 - 4ac \\) formülü ile hesaplanır:\n\n\\[\n\\Delta = 7^2 - 4 \\cdot 3 \\cdot (-4) = 49 + 48 = 97\n\\]\n\nDiskriminant pozitif olduğundan, iki gerçek ve farklı kök vardır. Kuadratik formül kullanılarak kökler bulunabilir:\n\n\\[\nx = \\frac{-b \\pm \\sqrt{\\Delta}}{2a}\n\\]\n\nBu formüle bağlı olarak kökleri bulalım:\n\n\\[\nx_1 = \\frac{-7 + \\sqrt{97}}{6}\n\\]\n\n\\[\nx_2 = \\frac{-7 - \\sqrt{97}}{6}\n\\]\n\nBu, orijinal denklemin iki çözümünü verir:\n\n\\[\nx_1 = \\frac{-7 + \\sqrt{97}}{6} \\quad \\text{ve} \\quad x_2 = \\frac{-7 - \\sqrt{97}}{6}\n\\]", "Üzgünüm, yakındaki restoranları veya mekanları aramak gibi anlık coğrafi konum verilerine erişimim yok. Ancak, İtalyan mutfağından bir yemek tarifi verebilirim! 
İşte klasik bir İtalyan tarifi: Spaghetti Aglio e Olio.\n\n### Spaghetti Aglio e Olio Tarifi\n\n#### Malzemeler:\n- 400 gram spagetti\n- 4-5 diş sarımsak\n- 1/2 çay bardağı zeytinyağı\n- 1/2 çay kaşığı kırmızı pul biber (isteğe bağlı)\n- Tuz\n- Karabiber\n- Yarım demet taze maydanoz\n- Üzeri için rendelenmiş Parmesan peyniri (isteğe bağlı)\n\n#### Yapılışı:\n1. **Spagettiyi Haşlayın**: Büyük bir tencerede suyu kaynatın. Kaynayan suya yeterince tuz ekleyin ve spagettiyi pakette belirtilen süreye göre al dente (dişe gelir) şekilde haşlayın. Haşlandıktan sonra makarnanın suyundan bir bardak ayırın ve makarnayı süzün.\n\n2. **Sarımsak ve Zeytinyağını Hazırlayın**: Sarımsakları ince ince dilimleyin. Geniş bir tavada, düşük ateşte zeytinyağını ısıtın. Sarımsakları ekleyin ve altın rengi alana kadar yavaşça soteleyin. Eğer biraz acı seviyorsanız kırmızı pul biberi de bu aşamada ekleyebilirsiniz.\n\n3. **Makarnayı Karıştırın**: Haşlanmış ve süzülmüş makarnayı tavadaki sarımsaklı zeytinyağının içine ekleyin. Ayırdığınız makarna suyundan da azar azar ekleyerek karıştırın. Tuz ve karabiber ile tatlandırın.\n\n4. **Servis Edin**: İnce kıyılmış taze maydanozu makarnanın üzerine serpin ve karıştırın. İsterseniz üzerine rendelenmiş Parmesan peyniri de ekleyerek servis yapın.\n\nAfiyet olsun! Bu basit ama lezzetli İtalyan yemeği, hızlı bir akşam yemeği için harika bir seçenektir.", "<ekle yemek tarifi> - İtalyan"]}, {"source_sentence": "Bu cümleyi etken bir yapı kullanarak yeniden yazın.", "sentences": ["Kaptan tekneyi çarptı.", "Verilen listeyi sıralamak için basit bir sıralama algoritmasının nasıl çalıştığını açıklayayım. Örnek olarak \"Seçmeli Sıralama\" (Selection Sort) algoritmasını kullanabiliriz. Bu algoritma, her adımda dizinin geri kalan kısmından en küçük (veya en büyük) öğeyi seçip, bunu sıralanmamış bölümün başına yerleştirir. İşte adım adım nasıl çalıştığı:\n\n1. **Başla**: İlk öğeden başlayarak liste üzerinde ilerle.\n\n2. 
**En küçük elemanı bul**: \n - Mevcut konumdan başlayarak geri kalan elemanlar arasında en küçük değeri bul.\n - Listenin ilk konumunda (0 indeksli) başla ve bu elemanı hafızada \"şimdilik en küçük\" olarak tut.\n\n3. **Yer değiştir**:\n - Geri kalan elemanlar arasında bulunmuş olan en küçük değeri, mevcut konumda olan elemanla yer değiştir.\n\n4. **Sonraki pozisyona geç**:\n - Bir konum ilerle ve 2. adımdan itibaren devam et.\n\n5. **Bittiğinde**: Liste baştan sona kontrol edildiğinde ve tüm elemanlar sırasıyla yerleştirildiğinde, işlem tamamlanmıştır.\n\nVerilen örnek [4, 1, 6, 5, 2] listesi üzerinden bunu adım adım uygulayalım:\n\n- Başlangıç listesi: [4, 1, 6, 5, 2]\n\n**1. Adım:**\n- En küçük eleman 1, onu ilk pozisyondaki 4 ile değiştir.\n- Liste: [1, 4, 6, 5, 2]\n\n**2. Adım:**\n- Kalan liste [4, 6, 5, 2] içinde en küçük eleman 2, bunu 4 ile değiştir.\n- Liste: [1, 2, 6, 5, 4]\n\n**3. Adım:**\n- Kalan liste [6, 5, 4] içinde en küçük eleman 4, bunu 6 ile değiştir.\n- Liste: [1, 2, 4, 5, 6]\n\n**4. Adım:**\n- Kalan liste [5, 6] içinde en küçük eleman 5, bu zaten yerinde.\n- Liste: [1, 2, 4, 5, 6]\n\n**5. Adım:**\n- Tek kalan eleman 6, o da yerinde. Sıralama tamamlandı.\n\nBu algoritmanın zaman karmaşıklığı \\(O(n^2)\\) olduğundan, daha büyük listelerde daha verimli bir algoritma kullanmak isteyebilirsiniz, örneğin Hızlı Sıralama (Quick Sort) veya Birleştirme Sıralaması (Merge Sort). Ancak, küçük listeler için Seçmeli Sıralama anlaşılır ve yeterlidir.", "Tekne kaptan tarafından çarpıldı."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,351 |
sackoh/gemma-2-2b-it
|
sackoh
|
text-generation
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:1903.00161",
"arxiv:2206.04615",
"arxiv:2203.09509",
"arxiv:2403.13793",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-02T09:36:27Z |
2025-03-02T09:41:26+00:00
| 207 | 0 |
---
base_model: google/gemma-2-2b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]
**Terms of Use**: [Terms][terms]
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets to help you quickly get started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
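Under the hood, `apply_chat_template` renders the messages into Gemma's turn markup (`<start_of_turn>` / `<end_of_turn>`). A simplified sketch of that format — illustrative only; for the authoritative string, call `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`:

```python
def format_gemma_chat(messages, add_generation_prompt=True):
    # Sketch of the Gemma turn format: each message becomes a
    # <start_of_turn>{role}\n{content}<end_of_turn>\n block.
    out = "<bos>"
    for m in messages:
        role = "model" if m["role"] == "assistant" else m["role"]
        out += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Prompt the model to start its own turn.
        out += "<start_of_turn>model\n"
    return out

prompt = format_gemma_chat(
    [{"role": "user", "content": "Write me a poem about Machine Learning."}]
)
```

Getting this framing right matters: generating from raw text without the turn markers puts an instruction-tuned checkpoint out of distribution.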
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (the model weights will simply be upcast to `float32`). See the examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 2b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma 2 2B model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
    torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
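If you need this format outside of the tokenizer, the turn structure can be reproduced with plain string formatting. The sketch below is illustrative (`build_gemma_prompt` is a hypothetical helper; the tokenizer's built-in chat template remains the source of truth for special-token handling):

```python
# Minimal sketch of the Gemma 2 chat format described above.
def build_gemma_prompt(turns):
    """turns: list of (role, content) pairs, with role "user" or "model"."""
    prompt = "<bos>"  # apply_chat_template prepends the <bos> token
    for role, content in turns:
        prompt += f"<start_of_turn>{role}\n{content}<end_of_turn>\n"
    # Open a model turn so generation continues as the model's reply.
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma_prompt([("user", "Write a hello world program")])
print(prompt)
```

Because the string already contains `<bos>`, pass it to the tokenizer with `add_special_tokens=False` to avoid a duplicated `<bos>` token.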
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; their input data pre-processing is described and posterior
    evaluations are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics,
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
|
{"base_model": "google/gemma-2-2b", "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["conversational"], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,352 |
sshleifer/distilbart-xsum-12-1
|
sshleifer
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-06-14T07:56:06+00:00
| 451 | 7 |
---
datasets:
- cnn_dailymail
- xsum
language: en
license: apache-2.0
tags:
- summarization
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
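The Speedup column is consistent with dividing the corresponding baseline's inference time by each model's (bart-large-xsum for the xsum rows, bart-large-cnn for the cnn rows); a quick sanity check:

```python
# Verify Speedup = baseline inference time / model inference time,
# using the per-task baseline times from the table above.
times_ms = {
    "bart-large-xsum": 229, "distilbart-xsum-12-1": 90,
    "bart-large-cnn": 381, "distilbart-12-6-cnn": 307,
}

print(round(times_ms["bart-large-xsum"] / times_ms["distilbart-xsum-12-1"], 2))  # 2.54
print(round(times_ms["bart-large-cnn"] / times_ms["distilbart-12-6-cnn"], 2))    # 1.24
```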
| null |
Non_BioNLP
|
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
|
{"datasets": ["cnn_dailymail", "xsum"], "language": "en", "license": "apache-2.0", "tags": ["summarization"], "thumbnail": "https://huggingface.co/front/thumbnails/distilbart_medium.png"}
|
task
|
[
"SUMMARIZATION"
] | 44,353 |
SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T07:15:38+00:00
| 118 | 0 |
---
tags:
- summarization
widget:
- text: 'public static function update ( $ table ) { if ( ! is_array ( $ table ) )
{ $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists
( $ table [ ''oldName'' ] ) ) { throw SchemaException :: tableDoesNotExist ( $
table [ ''oldName'' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable
( ) ; }'
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
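As a sketch, an inverse-square-root schedule of the kind typically paired with AdaFactor looks like the following (the warmup length and base rate below are illustrative assumptions, not values from the CodeTrans setup):

```python
import math

def inverse_sqrt_lr(step, warmup_steps=10_000, base_lr=1e-2):
    # Held constant at base_lr / sqrt(warmup_steps) during warmup,
    # then decays proportionally to 1 / sqrt(step).
    return base_lr / math.sqrt(max(step, warmup_steps))

print(inverse_sqrt_lr(1_000))   # during warmup: held flat
print(inverse_sqrt_lr(40_000))  # after warmup: decays as 1/sqrt(step)
```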
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
|
task
|
[
"SUMMARIZATION"
] | 44,354 |
gokulsrinivasagan/bert_tiny_lda_5_v1_stsb
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_tiny_lda_5_v1",
"base_model:finetune:gokulsrinivasagan/bert_tiny_lda_5_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-26T21:12:24Z |
2024-12-04T15:13:10+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/bert_tiny_lda_5_v1
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: bert_tiny_lda_5_v1_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- type: spearmanr
value: 0.3252056186475298
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_tiny_lda_5_v1_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda_5_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda_5_v1) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1159
- Pearson: 0.3380
- Spearmanr: 0.3252
- Combined Score: 0.3316
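The combined score reported above is simply the arithmetic mean of the Pearson and Spearman correlations, as a quick check confirms:

```python
# "Combined Score" = mean of Pearson and Spearman correlations.
pearson, spearmanr = 0.3380, 0.3252
combined = round((pearson + spearmanr) / 2, 4)
print(combined)  # 0.3316
```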
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.1149 | 1.0 | 23 | 2.4272 | 0.0847 | 0.0635 | 0.0741 |
| 2.029 | 2.0 | 46 | 2.6006 | 0.1105 | 0.0974 | 0.1040 |
| 1.8675 | 3.0 | 69 | 2.2648 | 0.2521 | 0.2357 | 0.2439 |
| 1.6207 | 4.0 | 92 | 2.1159 | 0.3380 | 0.3252 | 0.3316 |
| 1.3084 | 5.0 | 115 | 2.1664 | 0.3564 | 0.3532 | 0.3548 |
| 1.0212 | 6.0 | 138 | 2.2510 | 0.3838 | 0.3847 | 0.3843 |
| 0.8086 | 7.0 | 161 | 2.1705 | 0.4049 | 0.4059 | 0.4054 |
| 0.6441 | 8.0 | 184 | 2.4279 | 0.3792 | 0.3746 | 0.3769 |
| 0.5485 | 9.0 | 207 | 2.3964 | 0.3827 | 0.3784 | 0.3805 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_tiny_lda_5_v1_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda_5_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda_5_v1) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1159
- Pearson: 0.3380
- Spearmanr: 0.3252
- Combined Score: 0.3316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.1149 | 1.0 | 23 | 2.4272 | 0.0847 | 0.0635 | 0.0741 |
| 2.029 | 2.0 | 46 | 2.6006 | 0.1105 | 0.0974 | 0.1040 |
| 1.8675 | 3.0 | 69 | 2.2648 | 0.2521 | 0.2357 | 0.2439 |
| 1.6207 | 4.0 | 92 | 2.1159 | 0.3380 | 0.3252 | 0.3316 |
| 1.3084 | 5.0 | 115 | 2.1664 | 0.3564 | 0.3532 | 0.3548 |
| 1.0212 | 6.0 | 138 | 2.2510 | 0.3838 | 0.3847 | 0.3843 |
| 0.8086 | 7.0 | 161 | 2.1705 | 0.4049 | 0.4059 | 0.4054 |
| 0.6441 | 8.0 | 184 | 2.4279 | 0.3792 | 0.3746 | 0.3769 |
| 0.5485 | 9.0 | 207 | 2.3964 | 0.3827 | 0.3784 | 0.3805 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_tiny_lda_5_v1", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_tiny_lda_5_v1_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.3252056186475298, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,355 |
osunlp/UGround-V1-7B
|
osunlp
|
image-text-to-text
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"conversational",
"en",
"arxiv:2410.05243",
"arxiv:2401.01614",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen2-VL-7B",
"base_model:finetune:Qwen/Qwen2-VL-7B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-03T15:54:33Z |
2025-04-16T18:43:48+00:00
| 1,974 | 11 |
---
base_model:
- Qwen/Qwen2-VL-7B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- multimodal
---
# UGround-V1-7B (Qwen2-VL-Based)
UGround is a strong GUI visual grounding model trained with a simple recipe. Check our homepage and paper for more details. This work is a collaboration between [OSUNLP](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).

- **Homepage:** https://osu-nlp-group.github.io/UGround/
- **Repository:** https://github.com/OSU-NLP-Group/UGround
- **Paper (ICLR'25 Oral):** https://arxiv.org/abs/2410.05243
- **Demo:** https://huggingface.co/spaces/orby-osu/UGround
- **Point of Contact:** [Boyu Gou](mailto:[email protected])
## Models
- Model-V1:
- [Initial UGround](https://huggingface.co/osunlp/UGround):
- [UGround-V1-2B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-2B)
- [UGround-V1-7B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-7B)
- [UGround-V1-72B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-72B)
- [Training Data](https://huggingface.co/osunlp/UGround)
## Release Plan
- [x] [Model Weights](https://huggingface.co/collections/osunlp/uground-677824fc5823d21267bc9812)
- [x] Initial Version (the one used in the paper)
- [x] Qwen2-VL-Based V1
- [x] 2B
- [x] 7B
- [x] 72B
- [x] Code
- [x] Inference Code of UGround (Initial & Qwen2-VL-Based)
- [x] Offline Experiments (Code, Results, and Useful Resources)
- [x] ScreenSpot (along with referring expressions generated by GPT-4/4o)
- [x] Multimodal-Mind2Web
- [x] OmniAct
- [x] Android Control
- [x] Online Experiments
- [x] Mind2Web-Live-SeeAct-V
- [x] [AndroidWorld-SeeAct-V](https://github.com/boyugou/android_world_seeact_v)
- [ ] Data Synthesis Pipeline (Coming Soon)
- [x] Training-Data (V1)
- [x] Online Demo (HF Spaces)
## Main Results
### GUI Visual Grounding: ScreenSpot (Standard Setting)

| ScreenSpot (Standard) | Arch | SFT data | Mobile-Text | Mobile-Icon | Desktop-Text | Desktop-Icon | Web-Text | Web-Icon | Avg |
| ------------------------------- | ---------------- | ------------------ | ----------- | ----------- | ------------ | ------------ | -------- | -------- | -------- |
| InternVL-2-4B | InternVL-2 | | 9.2 | 4.8 | 4.6 | 4.3 | 0.9 | 0.1 | 4.0 |
| Groma | Groma | | 10.3 | 2.6 | 4.6 | 4.3 | 5.7 | 3.4 | 5.2 |
| Qwen-VL | Qwen-VL | | 9.5 | 4.8 | 5.7 | 5.0 | 3.5 | 2.4 | 5.2 |
| MiniGPT-v2 | MiniGPT-v2 | | 8.4 | 6.6 | 6.2 | 2.9 | 6.5 | 3.4 | 5.7 |
| GPT-4 | | | 22.6 | 24.5 | 20.2 | 11.8 | 9.2 | 8.8 | 16.2 |
| GPT-4o | | | 20.2 | 24.9 | 21.1 | 23.6 | 12.2 | 7.8 | 18.3 |
| Fuyu | Fuyu | | 41.0 | 1.3 | 33.0 | 3.6 | 33.9 | 4.4 | 19.5 |
| Qwen-GUI | Qwen-VL | GUICourse | 52.4 | 10.9 | 45.9 | 5.7 | 43.0 | 13.6 | 28.6 |
| Ferret-UI-Llama8b | Ferret-UI | | 64.5 | 32.3 | 45.9 | 11.4 | 28.3 | 11.7 | 32.3 |
| Qwen2-VL | Qwen2-VL | | 61.3 | 39.3 | 52.0 | 45.0 | 33.0 | 21.8 | 42.1 |
| CogAgent | CogAgent | | 67.0 | 24.0 | 74.2 | 20.0 | 70.4 | 28.6 | 47.4 |
| SeeClick | Qwen-VL | SeeClick | 78.0 | 52.0 | 72.2 | 30.0 | 55.7 | 32.5 | 53.4 |
| OS-Atlas-Base-4B | InternVL-2 | OS-Atlas | 85.7 | 58.5 | 72.2 | 45.7 | 82.6 | 63.1 | 68.0 |
| OmniParser | | | 93.9 | 57.0 | 91.3 | 63.6 | 81.3 | 51.0 | 73.0 |
| **UGround** | LLaVA-UGround-V1 | UGround-V1 | 82.8 | **60.3** | 82.5 | **63.6** | 80.4 | **70.4** | **73.3** |
| Iris | Iris | SeeClick | 85.3 | 64.2 | 86.7 | 57.5 | 82.6 | 71.2 | 74.6 |
| ShowUI-G | ShowUI | ShowUI | 91.6 | 69.0 | 81.8 | 59.0 | 83.0 | 65.5 | 75.0 |
| ShowUI | ShowUI | ShowUI | 92.3 | 75.5 | 76.3 | 61.1 | 81.7 | 63.6 | 75.1 |
| Molmo-7B-D | | | 85.4 | 69.0 | 79.4 | 70.7 | 81.3 | 65.5 | 75.2 |
| **UGround-V1-2B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | 89.4 | 72.0 | 88.7 | 65.7 | 81.3 | 68.9 | 77.7 |
| Molmo-72B | | | 92.7 | 79.5 | 86.1 | 64.3 | 83.0 | 66.0 | 78.6 |
| Aguvis-G-7B | Qwen2-VL | Aguvis-Stage-1 | 88.3 | 78.2 | 88.1 | 70.7 | 85.7 | 74.8 | 81.0 |
| OS-Atlas-Base-7B | Qwen2-VL | OS-Atlas | 93.0 | 72.9 | 91.8 | 62.9 | 90.9 | 74.3 | 81.0 |
| Aria-UI | Aria | Aria-UI | 92.3 | 73.8 | 93.3 | 64.3 | 86.5 | 76.2 | 81.1 |
| Claude (Computer-Use) | | | **98.2** | **85.6** | 79.9 | 57.1 | **92.2** | **84.5** | 82.9 |
| Aguvis-7B | Qwen2-VL | Aguvis-Stage-1&2 | 95.6 | 77.7 | **93.8** | 67.1 | 88.3 | 75.2 | 83.0 |
| Project Mariner | | | | | | | | | 84.0 |
| **UGround-V1-7B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | 93.0 | 79.9 | **93.8** | **76.4** | 90.9 | 84.0 | **86.3** |
| *AGUVIS-72B* | *Qwen2-VL* | *Aguvis-Stage-1&2* | *94.5* | *85.2* | *95.4* | *77.9* | *91.3* | *85.9* | *88.4* |
| ***UGround-V1-72B (Qwen2-VL)*** | *Qwen2-VL* | *UGround-V1* | *94.1* | *83.4* | *94.9* | *85.7* | *90.4* | *87.9* | *89.4* |
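For readers checking the numbers: the Avg column appears to be the unweighted mean of the six subscores. For example, for the UGround-V1-7B (Qwen2-VL) row:

```python
# Unweighted mean of the six ScreenSpot subscores for UGround-V1-7B (Qwen2-VL).
scores = [93.0, 79.9, 93.8, 76.4, 90.9, 84.0]
avg = round(sum(scores) / len(scores), 1)
print(avg)  # 86.3, matching the Avg column
```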
### GUI Visual Grounding: ScreenSpot (Agent Setting)
| Planner | ScreenSpot (Agent) | Arch | SFT data | Mobile-Text | Mobile-Icon | Desktop-Text | Desktop-Icon | Web-Text | Web-Icon | Avg |
| ------- | ---------------------------- | ---------------- | ---------------- | ----------- | ----------- | ------------ | ------------ | -------- | -------- | -------- |
| GPT-4o | Qwen-VL | Qwen-VL | | 21.3 | 21.4 | 18.6 | 10.7 | 9.1 | 5.8 | 14.5 |
| GPT-4o | Qwen-GUI | Qwen-VL | GUICourse | 67.8 | 24.5 | 53.1 | 16.4 | 50.4 | 18.5 | 38.5 |
| GPT-4o | SeeClick | Qwen-VL | SeeClick | 81.0 | 59.8 | 69.6 | 33.6 | 43.9 | 26.2 | 52.4 |
| GPT-4o | OS-Atlas-Base-4B | InternVL-2 | OS-Atlas | **94.1** | 73.8 | 77.8 | 47.1 | 86.5 | 65.3 | 74.1 |
| GPT-4o | OS-Atlas-Base-7B | Qwen2-VL | OS-Atlas | 93.8 | **79.9** | 90.2 | 66.4 | **92.6** | **79.1** | 83.7 |
| GPT-4o | **UGround-V1** | LLaVA-UGround-V1 | UGround-V1 | 93.4 | 76.9 | 92.8 | 67.9 | 88.7 | 68.9 | 81.4 |
| GPT-4o | **UGround-V1-2B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | **94.1** | 77.7 | 92.8 | 63.6 | 90.0 | 70.9 | 81.5 |
| GPT-4o | **UGround-V1-7B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | **94.1** | **79.9** | **93.3** | **73.6** | 89.6 | 73.3 | **84.0** |
## Inference
### vLLM server
```bash
vllm serve osunlp/UGround-V1-7B --api-key token-abc123 --dtype float16
```
or
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name osunlp/UGround-V1-7B --model osunlp/UGround-V1-7B --dtype float16
```
You can find more instruction about training and inference in [Qwen2-VL's Official Repo](https://github.com/QwenLM/Qwen2-VL).
### Visual Grounding Prompt
```python
def format_openai_template(description: str, base64_image):
return [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
},
{
"type": "text",
"text": f"""
Your task is to help the user identify the precise coordinates (x, y) of a specific area/element/object on the screen based on a description.
- Your response should aim to point to the center or a representative point within the described area/element/object as accurately as possible.
- If the description is unclear or ambiguous, infer the most relevant area or element based on its likely context or purpose.
- Your answer should be a single string (x, y) corresponding to the point of the interest.
Description: {description}
Answer:"""
},
],
},
]
messages = format_openai_template(description, base64_image)
# `client` is an (Async)OpenAI client pointed at the vLLM server above;
# `args.model_path` is the served model name, e.g. "osunlp/UGround-V1-7B".
completion = await client.chat.completions.create(
    model=args.model_path,
    messages=messages,
    temperature=0  # REMEMBER to set temperature to ZERO!
)
# The output will be in the range of [0,1000), which is compatible with the original Qwen2-VL
# So the actual coordinates should be (x/1000*width, y/1000*height)
```
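Since the model answers in a normalized [0, 1000) space, a small helper can map the `"(x, y)"` string back to pixel coordinates. This is a sketch: the answer format follows the prompt above, and the parsing assumes a well-formed reply.

```python
def to_pixel_coords(answer: str, width: int, height: int) -> tuple[int, int]:
    """Parse UGround's "(x, y)" answer (each value in [0, 1000)) and scale to pixels."""
    x, y = (float(v) for v in answer.strip().strip("()").split(","))
    return round(x / 1000 * width), round(y / 1000 * height)

print(to_pixel_coords("(500, 250)", 1920, 1080))  # (960, 270)
```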

## Citation Information
If you find this work useful, please consider citing our papers:
```
@article{gou2024uground,
title={Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents},
author={Boyu Gou and Ruohan Wang and Boyuan Zheng and Yanan Xie and Cheng Chang and Yiheng Shu and Huan Sun and Yu Su},
journal={arXiv preprint arXiv:2410.05243},
year={2024},
url={https://arxiv.org/abs/2410.05243},
}
@article{zheng2023seeact,
title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
journal={arXiv preprint arXiv:2401.01614},
year={2024},
}
```
# Qwen2-VL-7B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
</p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
</p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 |
| DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** |
| InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** |
| ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 |
| TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** |
| OCRBench | 794 | **852** | 785 | 845 |
| MTVQA | - | - | -| **26.3** |
| VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** |
| VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** |
| RealWorldQA | 64.4 | - | - | **70.1** |
| MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 |
| MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** |
| MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 |
| MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** |
| MMT-Bench<sub>test</sub> | - | - | - |**63.7** |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** |
| MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 |
| MathVision | - | - | - | **16.3** |
### Video Benchmarks
| Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MVBench | 66.4 | 56.7 | - | **67.0** |
| PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** |
| EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** |
| Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** |
## Requirements
The code of Qwen2-VL has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a list of images as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
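For the base64 form, a small helper can produce the exact string shape shown above (a sketch; `to_data_uri` is not part of `qwen-vl-utils`):

```python
import base64

def to_data_uri(path: str) -> str:
    """Read a local image and encode it as the "data:image;base64,..." string accepted above."""
    with open(path, "rb") as f:
        return "data:image;base64," + base64.b64encode(f.read()).decode("ascii")
```

The resulting string can be dropped directly into the `"image"` field of a message.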
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
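The pixel bounds above map directly to token counts: the `256 * 28 * 28` / `1280 * 28 * 28` values imply one visual token per 28x28 patch. A rough estimate (a simplification that ignores the processor's resizing and rounding) is:

```python
PATCH = 28  # patch size implied by the card's pixel bounds

def approx_visual_tokens(width: int, height: int) -> int:
    """Rough visual-token count implied by min_pixels/max_pixels: pixels / 28^2."""
    return (width * height) // (PATCH * PATCH)

print(approx_visual_tokens(448, 448))  # 256 tokens, i.e. the 256*28*28 minimum above
```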
In addition, we provide two methods for fine-grained control over the image size passed to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
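The rounding rule for `resized_height` and `resized_width` can be sketched as follows (an illustration of the documented behavior, not the library's exact implementation):

```python
def round_to_factor(value: int, factor: int = 28) -> int:
    """Round to the nearest multiple of `factor`, as the processor does for resized dimensions."""
    return round(value / factor) * factor

print(round_to_factor(280), round_to_factor(300))  # 280 308
```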
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
| null |
Non_BioNLP
|
# UGround-V1-7B (Qwen2-VL-Based)
UGround is a strong GUI visual grounding model trained with a simple recipe. Check our homepage and paper for more details. This work is a collaboration between [OSU NLP Group](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).

- **Homepage:** https://osu-nlp-group.github.io/UGround/
- **Repository:** https://github.com/OSU-NLP-Group/UGround
- **Paper (ICLR'25 Oral):** https://arxiv.org/abs/2410.05243
- **Demo:** https://huggingface.co/spaces/orby-osu/UGround
- **Point of Contact:** [Boyu Gou](mailto:[email protected])
## Models
- Model-V1:
- [Initial UGround](https://huggingface.co/osunlp/UGround):
- [UGround-V1-2B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-2B)
- [UGround-V1-7B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-7B)
- [UGround-V1-72B (Qwen2-VL)](https://huggingface.co/osunlp/UGround-V1-72B)
- [Training Data](https://huggingface.co/osunlp/UGround)
## Release Plan
- [x] [Model Weights](https://huggingface.co/collections/osunlp/uground-677824fc5823d21267bc9812)
- [x] Initial Version (the one used in the paper)
- [x] Qwen2-VL-Based V1
- [x] 2B
- [x] 7B
- [x] 72B
- [x] Code
- [x] Inference Code of UGround (Initial & Qwen2-VL-Based
- [x] Offline Experiments (Code, Results, and Useful Resources)
- [x] ScreenSpot (along with referring expressions generated by GPT-4/4o)
- [x] Multimodal-Mind2Web
- [x] OmniAct
- [x] Android Control
- [x] Online Experiments
- [x] Mind2Web-Live-SeeAct-V
- [x] [AndroidWorld-SeeAct-V](https://github.com/boyugou/android_world_seeact_v)
- [ ] Data Synthesis Pipeline (Coming Soon)
- [x] Training-Data (V1)
- [x] Online Demo (HF Spaces)
## Main Results
### GUI Visual Grounding: ScreenSpot (Standard Setting)

| ScreenSpot (Standard) | Arch | SFT data | Mobile-Text | Mobile-Icon | Desktop-Text | Desktop-Icon | Web-Text | Web-Icon | Avg |
| ------------------------------- | ---------------- | ------------------ | ----------- | ----------- | ------------ | ------------ | -------- | -------- | -------- |
| InternVL-2-4B | InternVL-2 | | 9.2 | 4.8 | 4.6 | 4.3 | 0.9 | 0.1 | 4.0 |
| Groma | Groma | | 10.3 | 2.6 | 4.6 | 4.3 | 5.7 | 3.4 | 5.2 |
| Qwen-VL | Qwen-VL | | 9.5 | 4.8 | 5.7 | 5.0 | 3.5 | 2.4 | 5.2 |
| MiniGPT-v2 | MiniGPT-v2 | | 8.4 | 6.6 | 6.2 | 2.9 | 6.5 | 3.4 | 5.7 |
| GPT-4 | | | 22.6 | 24.5 | 20.2 | 11.8 | 9.2 | 8.8 | 16.2 |
| GPT-4o | | | 20.2 | 24.9 | 21.1 | 23.6 | 12.2 | 7.8 | 18.3 |
| Fuyu | Fuyu | | 41.0 | 1.3 | 33.0 | 3.6 | 33.9 | 4.4 | 19.5 |
| Qwen-GUI | Qwen-VL | GUICourse | 52.4 | 10.9 | 45.9 | 5.7 | 43.0 | 13.6 | 28.6 |
| Ferret-UI-Llama8b | Ferret-UI | | 64.5 | 32.3 | 45.9 | 11.4 | 28.3 | 11.7 | 32.3 |
| Qwen2-VL | Qwen2-VL | | 61.3 | 39.3 | 52.0 | 45.0 | 33.0 | 21.8 | 42.1 |
| CogAgent | CogAgent | | 67.0 | 24.0 | 74.2 | 20.0 | 70.4 | 28.6 | 47.4 |
| SeeClick | Qwen-VL | SeeClick | 78.0 | 52.0 | 72.2 | 30.0 | 55.7 | 32.5 | 53.4 |
| OS-Atlas-Base-4B | InternVL-2 | OS-Atlas | 85.7 | 58.5 | 72.2 | 45.7 | 82.6 | 63.1 | 68.0 |
| OmniParser | | | 93.9 | 57.0 | 91.3 | 63.6 | 81.3 | 51.0 | 73.0 |
| **UGround** | LLaVA-UGround-V1 | UGround-V1 | 82.8 | **60.3** | 82.5 | **63.6** | 80.4 | **70.4** | **73.3** |
| Iris | Iris | SeeClick | 85.3 | 64.2 | 86.7 | 57.5 | 82.6 | 71.2 | 74.6 |
| ShowUI-G | ShowUI | ShowUI | 91.6 | 69.0 | 81.8 | 59.0 | 83.0 | 65.5 | 75.0 |
| ShowUI | ShowUI | ShowUI | 92.3 | 75.5 | 76.3 | 61.1 | 81.7 | 63.6 | 75.1 |
| Molmo-7B-D | | | 85.4 | 69.0 | 79.4 | 70.7 | 81.3 | 65.5 | 75.2 |
| **UGround-V1-2B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | 89.4 | 72.0 | 88.7 | 65.7 | 81.3 | 68.9 | 77.7 |
| Molmo-72B | | | 92.7 | 79.5 | 86.1 | 64.3 | 83.0 | 66.0 | 78.6 |
| Aguvis-G-7B | Qwen2-VL | Aguvis-Stage-1 | 88.3 | 78.2 | 88.1 | 70.7 | 85.7 | 74.8 | 81.0 |
| OS-Atlas-Base-7B | Qwen2-VL | OS-Atlas | 93.0 | 72.9 | 91.8 | 62.9 | 90.9 | 74.3 | 81.0 |
| Aria-UI | Aria | Aria-UI | 92.3 | 73.8 | 93.3 | 64.3 | 86.5 | 76.2 | 81.1 |
| Claude (Computer-Use) | | | **98.2** | **85.6** | 79.9 | 57.1 | **92.2** | **84.5** | 82.9 |
| Aguvis-7B | Qwen2-VL | Aguvis-Stage-1&2 | 95.6 | 77.7 | **93.8** | 67.1 | 88.3 | 75.2 | 83.0 |
| Project Mariner | | | | | | | | | 84.0 |
| **UGround-V1-7B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | 93.0 | 79.9 | **93.8** | **76.4** | 90.9 | 84.0 | **86.3** |
| *AGUVIS-72B* | *Qwen2-VL* | *Aguvis-Stage-1&2* | *94.5* | *85.2* | *95.4* | *77.9* | *91.3* | *85.9* | *88.4* |
| ***UGround-V1-72B (Qwen2-VL)*** | *Qwen2-VL* | *UGround-V1* | *94.1* | *83.4* | *94.9* | *85.7* | *90.4* | *87.9* | *89.4* |
### GUI Visual Grounding: ScreenSpot (Agent Setting)
| Planner | Agent-Screenspot | arch | SFT data | Mobile-Text | Mobile-Icon | Desktop-Text | Desktop-Icon | Web-Text | Web-Icon | Avg |
| ------- | ---------------------------- | ---------------- | ---------------- | ----------- | ----------- | ------------ | ------------ | -------- | -------- | -------- |
| GPT-4o | Qwen-VL | Qwen-VL | | 21.3 | 21.4 | 18.6 | 10.7 | 9.1 | 5.8 | 14.5 |
| GPT-4o | Qwen-GUI | Qwen-VL | GUICourse | 67.8 | 24.5 | 53.1 | 16.4 | 50.4 | 18.5 | 38.5 |
| GPT-4o | SeeClick | Qwen-VL | SeeClick | 81.0 | 59.8 | 69.6 | 33.6 | 43.9 | 26.2 | 52.4 |
| GPT-4o | OS-Atlas-Base-4B | InternVL-2 | OS-Atlas | **94.1** | 73.8 | 77.8 | 47.1 | 86.5 | 65.3 | 74.1 |
| GPT-4o | OS-Atlas-Base-7B | Qwen2-VL | OS-Atlas | 93.8 | **79.9** | 90.2 | 66.4 | **92.6** | **79.1** | 83.7 |
| GPT-4o | **UGround-V1** | LLaVA-UGround-V1 | UGround-V1 | 93.4 | 76.9 | 92.8 | 67.9 | 88.7 | 68.9 | 81.4 |
| GPT-4o | **UGround-V1-2B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | **94.1** | 77.7 | 92.8 | 63.6 | 90.0 | 70.9 | 81.5 |
| GPT-4o | **UGround-V1-7B (Qwen2-VL)** | Qwen2-VL | UGround-V1 | **94.1** | **79.9** | **93.3** | **73.6** | 89.6 | 73.3 | **84.0** |
## Inference
### vLLM server
```bash
vllm serve osunlp/UGround-V1-7B --api-key token-abc123 --dtype float16
```
or
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name osunlp/UGround-V1-7B --model osunlp/UGround-V1-7B --dtype float16
```
You can find more instruction about training and inference in [Qwen2-VL's Official Repo](https://github.com/QwenLM/Qwen2-VL).
### Visual Grounding Prompt
```python
def format_openai_template(description: str, base64_image):
return [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
},
{
"type": "text",
"text": f"""
Your task is to help the user identify the precise coordinates (x, y) of a specific area/element/object on the screen based on a description.
- Your response should aim to point to the center or a representative point within the described area/element/object as accurately as possible.
- If the description is unclear or ambiguous, infer the most relevant area or element based on its likely context or purpose.
- Your answer should be a single string (x, y) corresponding to the point of the interest.
Description: {description}
Answer:"""
},
],
},
]
messages = format_openai_template(description, base64_image)

# `client` is assumed to be an openai.AsyncOpenAI instance pointed at the vLLM server above.
completion = await client.chat.completions.create(
    model=args.model_path,
    messages=messages,
    temperature=0,  # REMEMBER to set temperature to ZERO!
)
# The output will be in the range of [0,1000), which is compatible with the original Qwen2-VL
# So the actual coordinates should be (x/1000*width, y/1000*height)
```
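The scaling described above can be wrapped in a small helper. Note that the regex-based parsing of the "(x, y)" answer string below is an assumption about the reply format, not part of an official API:

```python
import re

def to_pixel_coords(answer: str, width: int, height: int):
    """Convert a '(x, y)' answer in the [0, 1000) range to pixel coordinates."""
    x, y = map(float, re.findall(r"-?\d+\.?\d*", answer)[:2])
    return x / 1000 * width, y / 1000 * height

# Example: a click target on a 1920x1080 screenshot
print(to_pixel_coords("(500, 250)", 1920, 1080))  # -> (960.0, 270.0)
```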

## Citation Information
If you find this work useful, please consider citing our papers:
```
@article{gou2024uground,
title={Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents},
author={Boyu Gou and Ruohan Wang and Boyuan Zheng and Yanan Xie and Cheng Chang and Yiheng Shu and Huan Sun and Yu Su},
journal={arXiv preprint arXiv:2410.05243},
year={2024},
url={https://arxiv.org/abs/2410.05243},
}
@article{zheng2023seeact,
title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
journal={arXiv preprint arXiv:2401.01614},
year={2024},
}
```
# Qwen2-VL-7B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 |
| DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** |
| InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** |
| ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 |
| TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** |
| OCRBench | 794 | **852** | 785 | 845 |
| MTVQA | - | - | -| **26.3** |
| VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** |
| VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** |
| RealWorldQA | 64.4 | - | - | **70.1** |
| MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 |
| MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** |
| MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 |
| MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** |
| MMT-Bench<sub>test</sub> | - | - | - |**63.7** |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** |
| MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 |
| MathVision | - | - | - | **16.3** |
### Video Benchmarks
| Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MVBench | 66.4 | 56.7 | - | **67.0** |
| PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** |
| EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** |
| Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** |
## Requirements
The code of Qwen2-VL is only available in the latest Hugging Face `transformers`, so we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing an image list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
{"base_model": ["Qwen/Qwen2-VL-7B"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["multimodal"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,356 |
LoneStriker/airoboros-m-7b-3.1.1-5.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-17T15:13:54Z |
2023-10-17T15:14:09+00:00
| 3 | 0 |
---
datasets:
- jondurbin/airoboros-3.1
license: llama2
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This models uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is a prompt fix release. 3.1 required a single space after the last `[/INST]`, which was highly annoying and obnoxious, so I re-tuned the models without this. Otherwise, it's the same as 3.1.
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
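A minimal sketch of that extraction step, assuming the reply wraps the expression in `<mathjson>` tags exactly as in the example above (evaluation is still delegated to compute-engine JS or `mathjson.py`):

```python
import json
import re

def extract_mathjson(reply: str):
    """Pull the JSON expression out of <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", reply, re.DOTALL)
    if match is None:
        raise ValueError("no <mathjson> block found in reply")
    return json.loads(match.group(1))

reply = (
    "Solution as MathJSON:\n<mathjson>\n"
    '["Multiply", "Pi", ["Power", 17.2456, 2]]\n'
    "</mathjson>"
)
print(extract_mathjson(reply))  # -> ['Multiply', 'Pi', ['Power', 17.2456, 2]]
```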
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
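One way to consume such a response is a small dispatch table keyed on the function and action names; the `count_occurrences` handler below is purely illustrative and not part of the dataset:

```python
import json

def count_occurrences(filters):
    # Hypothetical handler: count keyword hits in a fixed sample text.
    text = "Python is fun. I love Python."
    return text.count(filters["keyword"])

HANDLERS = {("file_analytics", "count_occurrences"): count_occurrences}

response = json.loads("""
{"function": "file_analytics",
 "params": {"action": "count_occurrences",
            "filters": {"keyword": "Python"}}}
""")

handler = HANDLERS[(response["function"], response["params"]["action"])]
print(handler(response["params"]["filters"]))  # -> 2
```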
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is a prompt fix release. 3.1 required a single space after the last `[/INST]`, which was highly annoying and obnoxious, so I re-tuned the models without this. Otherwise, it's the same as 3.1.
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
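As a rough illustration of that parsing step (this is a sketch, not the actual mathjson.py; the toy `evaluate` below only covers the handful of operators in the example above):

```python
import json
import math
import re

def extract_mathjson(output: str):
    """Pull the JSON payload from between <mathjson> tags in a model response."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", output, re.S)
    if not match:
        raise ValueError("no <mathjson> block found")
    return json.loads(match.group(1))

def evaluate(expr):
    """Toy evaluator for a few MathJSON operators; use compute-engine or
    mathjson.py for anything real."""
    if expr == "Pi":
        return math.pi
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    args = [evaluate(a) for a in args]
    if op == "Multiply":
        return math.prod(args)
    if op == "Power":
        return args[0] ** args[1]
    if op == "Add":
        return sum(args)
    raise ValueError(f"unsupported operator: {op}")

model_output = """Solution as MathJSON:
<mathjson>
["Multiply", "Pi", ["Power", 17.2456, 2]]
</mathjson>"""

area = evaluate(extract_mathjson(model_output))
print(f"Area: {area:.4f} cm^2")
```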
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit its responses to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
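If you're building these prompts programmatically, the block structure above is easy to assemble with a small helper (the function name and signature here are my own, not part of airoboros):

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt from (metadata dict, text) pairs."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts += ["BEGININSTRUCTION", instruction, "ENDINSTRUCTION"]
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?\nDon't make up answers if you don't know.",
)
print(prompt)
```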
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
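On the application side, you would parse the model's JSON and dispatch to a real implementation. A minimal sketch, where `file_analytics` and the registry are hypothetical stand-ins for your own tools:

```python
import json

def file_analytics(params, text):
    """Hypothetical implementation of the file_analytics tool described above."""
    if params["action"] == "count_occurrences":
        return text.count(params["filters"]["keyword"])
    raise ValueError("unsupported action: " + params["action"])

# Registry mapping function names in the model's JSON to implementations.
REGISTRY = {"file_analytics": file_analytics}

model_output = """
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {"keyword": "Python"}
  }
}
"""

call = json.loads(model_output)
file_text = "Python is fun. I like Python."
result = REGISTRY[call["function"]](call["params"], file_text)
print(result)  # number of times "Python" occurs in file_text
```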
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested, off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Substitute any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Expected shape: ":evidenceN: = FunctionName[argument text]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
### Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
{"datasets": ["jondurbin/airoboros-3.1"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,357 |
AgentPublic/dpr-ctx_encoder-fr_qa-camembert
|
AgentPublic
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"arxiv:2004.04906",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-06-16T11:22:59+00:00
| 48 | 5 |
---
datasets:
- piaf
- FQuAD
- SQuAD-FR
language: fr
---
# dpr-ctx_encoder-fr_qa-camembert
## Description
French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as base, fine-tuned on a combination of three French Q&A datasets
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
### Training
We are using 90 562 random questions for `train` and 22 391 for `dev`. No question in `train` exists in `dev`. For each question, we have a single `positive_context` (the paragraph where the answer to this question is found) and around 30 `hard_negative_contexts`. Hard negative contexts are found by querying an ES instance (via BM25 retrieval) and getting the top-k candidates **that do not contain the answer**.
The files are over [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).
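As an illustrative sketch of the hard-negative selection described above (the function and names are hypothetical, not the actual pipeline code): keep the top BM25-ranked candidates that do not contain the answer.

```python
# Hypothetical sketch: given BM25-ranked candidate paragraphs for a question,
# keep the top-k candidates that do NOT contain the answer string.
def select_hard_negatives(ranked_candidates, answer, k=30):
    """ranked_candidates: paragraphs sorted by BM25 score, best first."""
    negatives = []
    for paragraph in ranked_candidates:
        if answer.lower() in paragraph.lower():
            continue  # candidate contains the answer, so it is not a negative
        negatives.append(paragraph)
        if len(negatives) == k:
            break
    return negatives
```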
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default, the code works with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found [over here](https://github.com/psorianom/DPR).
### Hyperparameters
```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 \
--encoder_model_type fairseq_roberta \
--pretrained_file data/camembert-base \
--seed 12345 \
--sequence_length 256 \
--warmup_steps 1237 \
--batch_size 16 \
--do_lower_case \
--train_file ./data/DPR_FR_train.json \
--dev_file ./data/DPR_FR_dev.json \
--output_dir ./output/ \
--learning_rate 2e-05 \
--num_train_epochs 35 \
--dev_batch_size 16 \
--val_av_rank_start_epoch 30 \
--pretrained_model_cfg ./data/camembert-base/
```
## Evaluation results
We obtain the following results using the FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report Retrieval results only**).
### DPR
#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```
#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```
### BM25
For reference, BM25 obtains the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently surpassed by BM25.
#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```
#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```
## Usage
The results reported here are obtained with the `haystack` library. To obtain similar embeddings using only the HF `transformers` library, you can do the following:
```python
from transformers import AutoTokenizer, AutoModel
query = "Salut, mon chien est-il mignon ?"
tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", do_lower_case=True)
input_ids = tokenizer(query, return_tensors='pt')["input_ids"]
model = AutoModel.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", return_dict=True)
embeddings = model.forward(input_ids).pooler_output
print(embeddings)
```
And with `haystack`, we use it as a retriever:
```python
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert",
passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert",
model_version=dpr_model_tag,
infer_tokenizer_classes=True,
)
```
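For intuition, DPR relevance is the dot product between the question embedding (from the question encoder) and each context embedding (from this context encoder). A toy sketch with stand-in vectors, not the actual haystack internals:

```python
# Toy illustration of DPR's scoring rule: relevance(q, c) = q . c.
# Real embeddings come from the question/context encoders shown above;
# the vectors here are stand-ins.
def dpr_scores(question_emb, context_embs):
    """Dot product of the question vector against each context vector."""
    return [sum(q * c for q, c in zip(question_emb, ctx)) for ctx in context_embs]

def rank_contexts(question_emb, context_embs):
    """Indices of contexts sorted by descending relevance."""
    scores = dpr_scores(question_emb, context_embs)
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```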
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### Datasets
#### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
#### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
#### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### Models
#### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
#### DPR
```
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
|
{"datasets": ["piaf", "FQuAD", "SQuAD-FR"], "language": "fr"}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,358 |
MediaTek-Research/Breeze-7B-Instruct-v0_1
|
MediaTek-Research
|
text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2403.02712",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-06T03:12:05Z |
2024-04-24T03:52:05+00:00
| 1,933 | 87 |
---
language:
- zh
- en
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for MediaTek Research Breeze-7B-Instruct-v0_1
MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.
[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks.
[Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) is a slightly modified version of
Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.
*Update (Feb. 21st, 2024): Breeze-7B-Instruct-64k-v0_1 has been temporarily removed from Hugging Face due to its actual performance in long context tests not meeting expectations.*
*Update (Mar. 7th, 2024): The current release version of Breeze-7B is v1.0. See [Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0).*
The current release version of Breeze-7B is v0.1.
Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed of Mistral-7B and Llama 7B for Traditional Chinese. [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level.
Performance-wise:
- Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similarly sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).]
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
## Features
- Breeze-7B-Base-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Breeze-7B-Instruct-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
- Breeze-7B-Instruct-64k-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 64k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
## Model Details
- Breeze-7B-Base-v0_1
- Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-64k-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
## Base Model Performance
**TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the answer with the highest log-likelihood.
| Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) |
|----------------------------------------------|--------|--------------|-------------|-------------|------------|
| | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge|
| | | 5 shot | 3 shot | 5 shot | 5 shot |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 |
| [Qwen-14B](https://huggingface.co/01-ai/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 |
| [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 |
| [Qwen-7B](https://huggingface.co/01-ai/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 |
| [**Breeze-7B-Base-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 |
\* Few-shot learning cannot effectively guide the model to generate the proper answer.
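The log-likelihood selection used for the choice problems can be sketched as follows; `token_logprobs` stands in for the model and is not the harness's real API:

```python
# Illustrative sketch of log-likelihood answer selection: each candidate answer
# is scored by the sum of its token log-probabilities under the model, and the
# highest-scoring candidate is the prediction. `token_logprobs` is a stand-in.
def pick_choice(choices, token_logprobs):
    scores = [sum(token_logprobs(choice)) for choice in choices]
    return max(range(len(choices)), key=lambda i: scores[i])
```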
## Chat Model Performance
**TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
**MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the answer with the highest log-likelihood.
We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**.
| Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) |
|---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------|
| | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge|
| | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot |
| [gpt-3.5-turbo](https://openai.com) | |7.1 | 43.56 | | | 45.14 |7.9 | 67.09 | |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | |
| [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | |
| [**Breeze-7B-Instruct-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | |
| [**Breeze-7B-Instruct-64k-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | |
| [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | |
| [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | |
| [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | |
| [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | |
\* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese.
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG |
|-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 |
| Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 |
| Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 |
| **Breeze-7B-Instruct-v0_1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 |
| **Breeze-7B-Instruct-64k-v0_1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 |
| Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 |
| Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 |
| Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 |
| Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 |
| Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | ↑ AVG |
|-----------------------------------------------------|--------------|----------------|------------|------------|---------|
| Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 |
| Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 |
| Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 |
| gpt-3.5-turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 |
| **Breeze-7B-Instruct-v0_1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 |
| **Breeze-7B-Instruct-64k-v0_1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 |
| Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 |
| Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 |
| Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 |
## Inference Performance
In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again.
All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2).
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)|
|--------------------------------------------------------------------|-------------------|--------------------------|
| Yi-6B-Chat | 10.62 | 5.2k |
| **Breeze-7B-Instruct-v0_1** | 10.74 | 11.1k |
| **Breeze-7B-Instruct-64k-v0_1** | 10.74 | 88.8k |
| Qwen-7B-Chat | 10.86 | 9.8k |
| Qwen-14B-Chat | 18.89 | 9.8k |
| Mistral-7B-v0.1-Instruct | 20.48 | 5.1k |
| Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k |
| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |
| Yi-34B-Chat | 43.71 | 4.5k |
## Long-context Performance
TBD
## Use in Transformers
First install direct dependencies:
```bash
pip install transformers torch accelerate
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"MediaTek-Research/Breeze-7B-Instruct-v0_1",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2" # optional
)
```
The structure of the query is
```txt
<s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]
```
where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user.
The suggested default `SYS_PROMPT` is
```txt
You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan.
```
We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt.
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0_1")
>>> chat = [
... {"role": "user", "content": "你好,請問你可以完成什麼任務?"},
... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"},
... {"role": "user", "content": "太棒了!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] "
# Tokenized results
# ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?']
# ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。']
# ['▁', '太', '棒', '了', '!']
```
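For cases where you build the prompt string by hand instead of using `apply_chat_template`, the template above can be mirrored with plain string formatting (a sketch only; the tokenizer's own chat template remains the authoritative source):

```python
# Sketch of the <s>SYS_PROMPT [INST] ... [/INST] ... structure shown above.
SYS_PROMPT = ("You are a helpful AI assistant built by MediaTek Research. "
              "The user you are helping speaks Traditional Chinese and comes from Taiwan.")

def build_prompt(turns, sys_prompt=SYS_PROMPT):
    """turns: list of (user, assistant) pairs; the last assistant may be None."""
    out = "<s>" + sys_prompt + " "
    for user, assistant in turns:
        out += f"[INST] {user} [/INST] "
        if assistant is not None:
            out += assistant + " "
    return out
```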
## Citation
```
@article{MediaTek-Research2024breeze7b,
title={Breeze-7B Technical Report},
author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu},
year={2024},
eprint={2403.02712},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
# Model Card for MediaTek Research Breeze-7B-Instruct-v0_1
MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.
[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks.
[Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) is a slightly modified version of
Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.
*Update (Feb. 21st, 2024): Breeze-7B-Instruct-64k-v0_1 has been temporarily removed from Hugging Face due to its actual performance in long context tests not meeting expectations.*
*Update (Mar. 7th, 2024): The current release version of Breeze-7B is v1.0. See [Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0).*
The current release version of Breeze-7B is v0.1.
Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level.
Performance-wise:
- Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English, when compared to similar sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).]
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
## Features
- Breeze-7B-Base-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Breeze-7B-Instruct-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
- Breeze-7B-Instruct-64k-v0_1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 64k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
## Model Details
- Breeze-7B-Base-v0_1
- Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-64k-v0_1
- Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
## Base Model Performance
**TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
We use code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the choice with the highest log-likelihood.
| Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) |
|----------------------------------------------|--------|--------------|-------------|-------------|------------|
| | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge|
| | | 5 shot | 3 shot | 5 shot | 5 shot |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 |
| [Qwen-14B](https://huggingface.co/01-ai/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 |
| [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 |
| [Qwen-7B](https://huggingface.co/01-ai/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 |
| [**Breeze-7B-Base-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0_1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 |
\* Few-shot learning cannot effectively guide the model to generate the proper answer.
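The log-likelihood selection used for the multiple-choice benchmarks above can be sketched as follows. This is an illustrative reconstruction, not the harness's actual code; `logprob_fn` is a hypothetical callback standing in for the model's scoring routine.

```python
def select_choice(context, choices, logprob_fn):
    """Pick the choice whose continuation the model finds most likely.

    `logprob_fn(context, choice)` must return the summed log-probability
    the model assigns to `choice` when it follows `context`.
    """
    scored = [(logprob_fn(context, choice), choice) for choice in choices]
    return max(scored)[1]

# A toy scorer that prefers shorter continuations, just to exercise the logic.
toy_scorer = lambda context, choice: -len(choice)
best = select_choice("Q: 1+1=?", ["two", "twenty-two"], toy_scorer)  # -> "two"
```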
## Chat Model Performance
**TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
**MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments).
We use code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All multiple-choice problems are scored by selecting the choice with the highest log-likelihood.
We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**.
| Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) |
|---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------|
| | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge|
| | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot |
| [gpt-3.5-turbo](https://openai.com) | |7.1 | 43.56 | | | 45.14 |7.9 | 67.09 | |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | |
| [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | |
| [**Breeze-7B-Instruct-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | |
| [**Breeze-7B-Instruct-64k-v0_1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0_1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | |
| [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | |
| [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | |
| [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | |
| [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | |
\* Taiwan-LLM models respond to multi-turn questions (in English) in Traditional Chinese.
| Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG |
|-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 |
| Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 |
| Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 |
| **Breeze-7B-Instruct-v0_1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 |
| **Breeze-7B-Instruct-64k-v0_1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 |
| Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 |
| Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 |
| Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 |
| Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 |
| Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | ↑ AVG |
|-----------------------------------------------------|--------------|----------------|------------|------------|---------|
| Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 |
| Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 |
| Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 |
| gpt-3.5-turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 |
| **Breeze-7B-Instruct-v0_1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 |
| **Breeze-7B-Instruct-64k-v0_1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 |
| Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 |
| Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 |
| Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 |
## Inference Performance
In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again.
All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2).
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)|
|--------------------------------------------------------------------|-------------------|--------------------------|
| Yi-6B-Chat | 10.62 | 5.2k |
| **Breeze-7B-Instruct-v0_1** | 10.74 | 11.1k |
| **Breeze-7B-Instruct-64k-v0_1** | 10.74 | 88.8k |
| Qwen-7B-Chat | 10.86 | 9.8k |
| Qwen-14B-Chat | 18.89 | 9.8k |
| Mistral-7B-v0.1-Instruct | 20.48 | 5.1k |
| Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k |
| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |
| Yi-34B-Chat | 43.71 | 4.5k |
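The character estimates in the table are consistent with a simple chars-per-token conversion: with the expanded vocabulary, Breeze-7B encodes roughly 1.39 Traditional Chinese characters per token (88.8k characters in a 64k-token context). A sketch of that back-of-the-envelope arithmetic — our reconstruction, not the authors' measurement script:

```python
# Roughly 88.8k Traditional Chinese characters fit in a 64k-token context,
# i.e. about 1.3875 chars per token for Breeze's expanded vocabulary.
CHARS_PER_TOKEN = 88_800 / 64_000

def estimated_max_chars(context_tokens):
    """Estimate how many Traditional Chinese characters fit in a context."""
    return round(context_tokens * CHARS_PER_TOKEN)

estimated_max_chars(64_000)  # 88,800 for Breeze-7B-Instruct-64k
estimated_max_chars(8_000)   # 11,100 for Breeze-7B-Instruct
```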
## Long-context Performance
TBD
## Use in Transformers
First install direct dependencies:
```bash
pip install transformers torch accelerate
```
If you want faster inference using FlashAttention-2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"MediaTek-Research/Breeze-7B-Instruct-v0_1",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2" # optional
)
```
The structure of the query is
```txt
<s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]
```
where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user.
The suggested default `SYS_PROMPT` is
```txt
You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan.
```
We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can use `apply_chat_template` to construct the prompt.
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0_1")
>>> chat = [
... {"role": "user", "content": "你好,請問你可以完成什麼任務?"},
... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"},
... {"role": "user", "content": "太棒了!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] "
# Tokenized results
# ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?']
# ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。']
# ['▁', '太', '棒', '了', '!']
```
## Citation
```bibtex
@article{MediaTek-Research2024breeze7b,
title={Breeze-7B Technical Report},
author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu},
year={2024},
eprint={2403.02712},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["zh", "en"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
|
task
|
[
"SUMMARIZATION"
] | 44,359 |
4bit/Llama-3.2-1B
|
4bit
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-14T01:14:06Z |
2025-03-14T01:16:53+00:00
| 15 | 0 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers` >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your Transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
pipe("The key to life is")
```
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```bash
huggingface-cli download meta-llama/Llama-3.2-1B --include "original/*" --local-dir Llama-3.2-1B
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
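As a sanity check, the per-model location-based figures in the table are consistent with a simple energy × carbon-intensity calculation. The sketch below backs out the implied grid intensity (~0.413 kg CO2eq/kWh); that intensity is an inferred assumption for illustration, not a number stated in this card.

```python
# Reproduce the location-based emissions in the table from GPU-hours and TDP.
# The grid carbon intensity is inferred from the reported figures (an
# assumption for illustration), not stated anywhere in this card.
TDP_KW = 0.700      # H100-80GB peak power: 700 W
INTENSITY = 0.413   # kg CO2eq per kWh, implied by the table's numbers

def location_based_tons(gpu_hours: float) -> float:
    """Emissions = GPU-hours x TDP (kW) x intensity, converted kg -> tons."""
    return gpu_hours * TDP_KW * INTENSITY / 1000.0

print(round(location_based_tons(370_000)))  # Llama 3.2 1B: 107 tons
print(round(location_based_tons(460_000)))  # Llama 3.2 3B: 133 tons
```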
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with PyTorch's [ExecuTorch](https://github.com/pytorch/executorch) inference framework and the Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights, with 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
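The weight half of this scheme (4-bit symmetric, group size 32) can be sketched in a few lines of plain Python. This is an illustrative model of groupwise quantization, not the actual ExecuTorch kernels, and all helper names are made up:

```python
# Illustrative sketch of 4-bit groupwise symmetric weight quantization
# (group size 32), as described above. Not the actual ExecuTorch kernels.
GROUP = 32
QMIN, QMAX = -8, 7  # signed 4-bit integer range

def quantize_group(weights):
    """Quantize one group of floats to int4 values plus one float scale."""
    scale = max(abs(w) for w in weights) / QMAX or 1.0
    q = [min(QMAX, max(QMIN, round(w / scale))) for w in weights]
    return q, scale

def quantize_row(row):
    """Split a weight row into groups of 32 and quantize each independently."""
    return [quantize_group(row[i:i + GROUP]) for i in range(0, len(row), GROUP)]

def dequantize_row(groups):
    return [qi * s for q, s in groups for qi in q]
```

Each group of 32 weights shares one scale, so an outlier in one group does not inflate the quantization error of the whole row — the usual motivation for groupwise over per-tensor schemes.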
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
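Schematically, the resulting layer combines a frozen (quantized) backbone with trainable low-rank adaptors: y = x·W_frozen + x·A·B, where A is d_in×r and B is r×d_out for a small rank r. A tiny pure-Python sketch of that composition (names and shapes are illustrative, not Meta's implementation):

```python
# Sketch of a LoRA-augmented linear layer: y = x @ W_frozen + x @ A @ B.
# W_frozen stands in for the frozen quantized backbone weight; A and B are
# the trainable low-rank adaptors (kept in BF16 per the text above).
def matmul(x, w):
    return [[sum(x[i][k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(x))]

def lora_linear(x, w_frozen, a, b):
    base = matmul(x, w_frozen)        # frozen backbone path
    delta = matmul(matmul(x, a), b)   # rank-r update from the adaptors
    return [[p + q for p, q in zip(r1, r2)] for r1, r2 in zip(base, delta)]

# Identity backbone plus a rank-1 adaptor that nudges the first output.
y = lora_linear([[1.0, 2.0]],
                [[1.0, 0.0], [0.0, 1.0]],   # W_frozen (2x2 identity)
                [[1.0], [0.0]],             # A: 2x1 (rank r = 1)
                [[0.5, 0.0]])               # B: 1x2
print(y)  # [[1.5, 2.0]]
```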
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
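The intuition behind the rotations: for any orthogonal matrix R, (WR)(Rᵀx) = Wx, so a rotation can be folded into the weights without changing the layer's output while redistributing outliers before quantization. A minimal pure-Python illustration with a normalized 2×2 Hadamard rotation (a toy example, not the learned rotations from the paper):

```python
# SpinQuant intuition: for orthogonal R, (W R)(R^T x) == W x, so rotations
# fold into adjacent weights with no output change while reshaping the
# distributions that get quantized. 2x2 normalized-Hadamard toy example.
import math

s = 1 / math.sqrt(2)
R = [[s, s], [s, -s]]  # orthogonal: R @ R^T == identity

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

Rt = [[R[j][i] for j in range(2)] for i in range(2)]
W = [[10.0, 0.1], [0.2, -9.0]]   # toy weight matrix
x = [1.0, 2.0]

y_plain = matvec(W, x)
y_rotated = matvec(matmul(W, R), matvec(Rt, x))
assert all(abs(a - b) < 1e-9 for a, b in zip(y_plain, y_rotated))
```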
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the Arm CPU backend, on an Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big the model is, measured by the size of its PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
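The parenthetical speedups and reductions in the table follow directly from the raw columns; a quick arithmetic check against the SpinQuant rows (note that the TTFT column is printed rounded to one decimal, so its percentages cannot be reproduced exactly from the table values):

```python
# Recompute the table's parenthetical deltas from the raw columns above.
def speedup(quantized, baseline):
    return round(quantized / baseline, 1)   # decode/prefill: higher is better

def reduction_pct(quantized, baseline):
    return round((quantized - baseline) / baseline * 100, 1)  # size/memory

print(speedup(50.2, 19.2))        # 1B SpinQuant decode   -> 2.6 ("2.6x")
print(reduction_pct(1083, 2358))  # 1B SpinQuant PTE size -> -54.1 ("-54.1%")
print(reduction_pct(1921, 3185))  # 1B SpinQuant RSS      -> -39.7 ("-39.7%")
```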
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) measures how quickly the model keeps generating tokens. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big the model is, measured by the size of its PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
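The metric definitions in the footnotes can be made concrete with a short sketch. The helper names and timing values below are hypothetical illustrations of the definitions only, not re-measurements of the ExecuTorch benchmarks in the table above:

```python
# Illustrative sketch of the throughput metrics defined in the footnotes.
# Function names and timings are hypothetical, not part of ExecuTorch.

def prefill_tokens_per_sec(prompt_tokens: int, ttft_seconds: float) -> float:
    """Prefill throughput: prompt tokens processed during the
    time-to-first-token window. Higher is better."""
    return prompt_tokens / ttft_seconds

def decode_tokens_per_sec(generated_tokens: int, decode_seconds: float) -> float:
    """Decode speed: tokens generated (after the first) per second of
    decode time. Higher is better."""
    return generated_tokens / decode_seconds

# With the benchmark's prompt length of 64 and a hypothetical TTFT of 0.25 s:
print(prefill_tokens_per_sec(64, 0.25))   # 256.0 tokens/sec
# A hypothetical run generating 100 tokens over 2.0 s of decode time:
print(decode_tokens_per_sec(100, 2.0))    # 50.0 tokens/sec
```

Note that the table's prefill numbers were measured directly by the harness; dividing prompt length by the rounded TTFT values shown will only approximate them.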
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. 
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. 
If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. 
You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. 
Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. 
Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. 
This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
|
task
|
[
"SUMMARIZATION"
] | 44,360 |
gavinqiangli/bge-large-mpnet-base-all-nli-triplet-final
|
gavinqiangli
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en",
"base_model:finetune:BAAI/bge-large-en",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-15T06:44:57Z |
2024-11-15T06:46:13+00:00
| 7 | 0 |
---
base_model: BAAI/bge-large-en
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: A construction worker is standing on a crane placing a large arm
on top of a stature in progress.
sentences:
- A man is playing with his camera.
- A person standing
- Nobody is standing
- source_sentence: A boy in red slides down an inflatable ride.
sentences:
- a baby smiling
- A boy is playing on an inflatable ride.
- A boy pierces a knife through an inflatable ride.
- source_sentence: A man in a black shirt is playing a guitar.
sentences:
- A group of women are selling their wares
- The man is wearing black.
- The man is wearing a blue shirt.
- source_sentence: A man with a large power drill standing next to his daughter with
a vacuum cleaner hose.
sentences:
- A man holding a drill stands next to a girl holding a vacuum hose.
- Kids ride an amusement ride.
- The man and girl are painting the walls.
- source_sentence: A middle-aged man works under the engine of a train on rail tracks.
sentences:
- A guy is working on a train.
- Two young asian men are squatting.
- A guy is driving to work.
model-index:
- name: SentenceTransformer based on BAAI/bge-large-en
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli test
type: all-nli-test
metrics:
- type: cosine_accuracy
value: 0.8332576789226812
name: Cosine Accuracy
---
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
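Because the final `Normalize()` module L2-normalizes each embedding, the cosine similarity used by this model reduces to a plain dot product between the normalized vectors. A small numeric sketch of that equivalence (toy 2-D vectors, not real embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def l2_normalize(v):
    # What the Normalize() module does: scale the vector to unit length.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

a, b = [3.0, 4.0], [4.0, 3.0]
dot_normalized = sum(x * y for x, y in zip(l2_normalize(a), l2_normalize(b)))
# Both values are 0.96, up to floating-point rounding.
print(cosine(a, b))
print(dot_normalized)
```

This is why dot product and cosine similarity are interchangeable for the normalized outputs of this model.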
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gavinqiangli/bge-large-mpnet-base-all-nli-triplet-final")
# Run inference
sentences = [
'A middle-aged man works under the engine of a train on rail tracks.',
'A guy is working on a train.',
'A guy is driving to work.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.8333** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
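`MultipleNegativesRankingLoss` treats every other positive in the batch as a negative for a given anchor: the scaled anchor-positive cosine similarities go through a softmax cross-entropy whose target is the anchor's own positive, and `scale` (20.0 above) is the multiplier applied before the softmax. A minimal pure-Python sketch of that computation, assuming the cosine similarity matrix is already given:

```python
import math

def mnrl_loss(sim_matrix, scale=20.0):
    # sim_matrix[i][j] = cosine similarity between anchor i and positive j.
    # The diagonal holds the true pairs; off-diagonal entries act as
    # in-batch negatives.
    n = len(sim_matrix)
    total = 0.0
    for i in range(n):
        logits = [scale * s for s in sim_matrix[i]]
        m = max(logits)  # subtract the max for numerical stability
        log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_sum_exp - logits[i]  # -log softmax at the true index
    return total / n

# A well-separated batch gives a near-zero loss ...
print(mnrl_loss([[0.9, 0.1], [0.2, 0.8]]))
# ... while indistinguishable similarities give roughly log(n).
print(mnrl_loss([[0.5, 0.5], [0.5, 0.5]]))
```

The high `scale` sharpens the softmax, so even modest similarity gaps between the true pair and the in-batch negatives drive the loss toward zero.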
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:----------------------------:|
| 0.5333 | 1000 | 0.7168 | 0.6448 | - |
| 1.0 | 1875 | - | - | 0.8333 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "BAAI/bge-large-en", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "A construction worker is standing on a crane placing a large arm on top of a stature in progress.", "sentences": ["A man is playing with his camera.", "A person standing", "Nobody is standing"]}, {"source_sentence": "A boy in red slides down an inflatable ride.", "sentences": ["a baby smiling", "A boy is playing on an inflatable ride.", "A boy pierces a knife through an inflatable ride."]}, {"source_sentence": "A man in a black shirt is playing a guitar.", "sentences": ["A group of women are selling their wares", "The man is wearing black.", "The man is wearing a blue shirt."]}, {"source_sentence": "A man with a large power drill standing next to his daughter with a vacuum cleaner hose.", "sentences": ["A man holding a drill stands next to a girl holding a vacuum hose.", "Kids ride an amusement ride.", "The man and girl are painting the walls."]}, {"source_sentence": "A middle-aged man works under the engine of a train on rail tracks.", "sentences": ["A guy is working on a train.", "Two young asian men are squatting.", "A guy is driving to work."]}], "model-index": [{"name": "SentenceTransformer based on BAAI/bge-large-en", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "all nli test", "type": "all-nli-test"}, "metrics": [{"type": "cosine_accuracy", "value": 0.8332576789226812, "name": "Cosine Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,361 |
taketaketakeyuki/llm-jp-3-13b-finetune-3
|
taketaketakeyuki
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 2024-12-13T18:18:58Z |
2024-12-16T18:47:57+00:00
| 0 | 0 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
The base model is llm-jp-3-13b; this model was fine-tuned on top of it with additional pre-training.
It was built as the final assignment for the LLM2024 summer school.
## Model Details
* This model takes on the pre-training exercise from Day 2 while searching, by trial and error, for the best training args.
* The provided sample code used the "ichikara-instruction-003-001-1" dataset; to simply increase the amount of training data, the base data was multiplied as follows.
1) Extra questions added by round-trip translation (Japanese → English → Japanese) using facebook/mbart-large-50-many-to-many-mmt
```python
def translate_to_en_and_back_to_ja_with_mbart(paragraph):
    try:
        # Japanese -> English
        en_translation = translate_mbart(paragraph, src_lang="ja_XX", tgt_lang="en_XX")
        # English -> Japanese
        jp_translation = translate_mbart(en_translation, src_lang="en_XX", tgt_lang="ja_XX")
        return jp_translation
    except Exception as e:
        print(f"Error during translation: {e}")
        return paragraph  # on error, keep the original text
```
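The `translate_mbart` helper called above is not shown in this card. A minimal sketch of what it might look like with the Hugging Face `transformers` MBart-50 API (the model name is taken from the description above; the lazy-loading structure is an assumption):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

_MODEL_NAME = "facebook/mbart-large-50-many-to-many-mmt"
_model = None
_tokenizer = None

def translate_mbart(text, src_lang, tgt_lang):
    # Lazy-load the (large) model and tokenizer on first use.
    global _model, _tokenizer
    if _model is None:
        _model = MBartForConditionalGeneration.from_pretrained(_MODEL_NAME)
        _tokenizer = MBart50TokenizerFast.from_pretrained(_MODEL_NAME)
    _tokenizer.src_lang = src_lang
    encoded = _tokenizer(text, return_tensors="pt")
    # Force the decoder to start in the target language.
    generated = _model.generate(
        **encoded,
        forced_bos_token_id=_tokenizer.lang_code_to_id[tgt_lang],
    )
    return _tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```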
2) Extra questions added by changing the sentence ending
```python
# Vary the sentence-ending expression
def change_sentence_ending(sentence):
    endings = [
        "。補足や関連情報があればそれ含め、詳しく教えてください。",
        "。詳細を説明してほしいです。",
        "。10歳にもわかるように、平易な説明を心がけてください。",
        "。可能な限り端的に答えてください。",
        "。関連する情報があれば提供をお願いします。"
    ]
    base = sentence.rstrip("。?")
    return base + random.choice(endings)
```
These transformations were applied while filtering for natural-looking Japanese sentences, simply tripling the number of questions used for training.
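The `filter_data` helper used in `process_batch` below is not defined in this card. A hypothetical stand-in that keeps only strings containing Japanese script (hiragana, katakana, or kanji) could look like:

```python
import re

# Matches hiragana, katakana, or CJK ideographs.
_JA_CHARS = re.compile(r"[\u3040-\u30ff\u4e00-\u9fff]")

def filter_data(texts):
    # Keep only entries that look like Japanese sentences.
    return [t for t in texts if _JA_CHARS.search(t)]

sample = ["これは日本語の文です。", "pure ASCII line", "カタカナ入り"]
print(filter_data(sample))  # drops the ASCII-only line
```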
```python
def process_batch(batch):
    original_questions = filter_data(batch["text"])  # apply filtering
    original_outputs = batch["output"]
    if not original_questions:
        print("No valid data to process.")
        return []
    # Questions round-tripped through mBART
    translated_questions = [translate_to_en_and_back_to_ja_with_mbart(q) for q in tqdm(original_questions, desc="Translating questions")]
    # Questions with modified sentence endings
    revised_questions = [change_sentence_ending(q) for q in original_questions]
    min_length = min(len(original_questions), len(original_outputs), len(translated_questions), len(revised_questions))
    original_questions = original_questions[:min_length]
    original_outputs = original_outputs[:min_length]
    translated_questions = translated_questions[:min_length]
    revised_questions = revised_questions[:min_length]
    formatted = []
    for i in range(min_length):
        formatted.append({"input": original_questions[i], "output": original_outputs[i]})    # original question + original answer
        formatted.append({"input": translated_questions[i], "output": original_outputs[i]})  # mBART-transformed question + original answer
        formatted.append({"input": revised_questions[i], "output": original_outputs[i]})     # ending-modified question + original answer
    return formatted
```
* For the training args, several runs were trained and scored, and the values below produced the best score.
```python
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset["train"],
    max_seq_length = 256,  # Further reduced for memory optimization
    dataset_text_field = "formatted_text",
    packing = False,
    args = TrainingArguments(
        per_device_train_batch_size = 4,   # Further reduced for memory optimization
        gradient_accumulation_steps = 16,  # Increased to compensate for smaller batch size
        num_train_epochs = 3,              # Increased to 3 for better generalization
        logging_steps = 5,                 # Frequent logging for small dataset
        warmup_steps = 10,
        save_steps = 100,
        save_total_limit = 2,
        max_steps = -1,
        learning_rate = 5e-5,              # Reduced learning rate for stability
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        group_by_length = True,
        seed = 3407,
        output_dir = "outputs",
        eval_steps = 25,                   # Adjusted evaluation interval
        report_to = "none",
    ),
)
```
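Note that with `per_device_train_batch_size = 4` and `gradient_accumulation_steps = 16`, the effective batch size per optimizer step on a single GPU works out to:

```python
per_device_train_batch_size = 4
gradient_accumulation_steps = 16

# Gradients are accumulated over 16 micro-batches of 4 before each update.
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 64
```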
* The points above cover the training-side improvements. In addition, to raise the scores computed from the model's outputs, logic was added to run a "why-why" (root-cause) analysis and to self-score each output, regenerating it when the score is low.
# 推論するためにモデルのモードを変更
FastLanguageModel.for_inference(model)
import time
def why_why_analysis(output, depth=2):
"""
Perform a 5-level "why-why" analysis on the given output.
"""
analysis = []
current = output
for level in range(1, depth + 1):
why_reason = f"Why level {level}: Analysis of '{current}'"
analysis.append(why_reason)
current = why_reason
return analysis
def feedback_loop(model, datasets, tokenizer, max_iterations=2, batch_size=5):
    """
    Implements a feedback loop in which model outputs are evaluated and
    low-scoring ones are refined, over a fixed number of iterations.
    Processes datasets in batches to optimize performance and time tracking.
    """
    refined_results = []
    total_time = 0
    num_batches = (len(datasets) + batch_size - 1) // batch_size
    for iteration in range(max_iterations):
        print(f"Iteration {iteration + 1}/{max_iterations}")
        for batch_idx in range(num_batches):
            batch = datasets[batch_idx * batch_size:(batch_idx + 1) * batch_size]
            start_time = time.time()
            results = []
            for dt in tqdm(batch, desc=f"Processing batch {batch_idx + 1}/{num_batches}"):
                input_text = dt["input"]
                prompt = f"""### 指示\n{input_text}\n### 回答\n"""
                inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
                outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
                prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]
                # Apply why-why analysis
                why_analysis = why_why_analysis(prediction)
                # Simulate evaluation metric
                score = mock_evaluation_metric(prediction)
                if score < 0.8:
                    print(f"Refining output for task_id {dt['task_id']}\nWhy Analysis: {why_analysis}")
                    prediction = f"Refined: {prediction}"
                results.append({"task_id": dt["task_id"], "input": input_text, "output": prediction})
            refined_results.extend(results)  # Append batch results
            batch_time = time.time() - start_time
            total_time += batch_time
            print(f"Batch {batch_idx + 1}/{num_batches} completed in {batch_time:.2f} seconds.")
            avg_time_per_batch = total_time / (num_batches * (iteration + 1))
            remaining_batches = (num_batches * (max_iterations - iteration - 1))
            print(f"Estimated remaining time: {avg_time_per_batch * remaining_batches:.2f} seconds.")
    return refined_results
def mock_evaluation_metric(output):
    """Mock evaluation metric: randomly returns scores."""
    import random
    return random.uniform(0.5, 1.0)
# Run the feedback loop
print("Starting inference with feedback loop...")
final_results = feedback_loop(model, datasets, tokenizer, batch_size=10)
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
llm-jp-3-13b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Model ID
The base model is llm-jp-3-13b; this model was fine-tuned from it via continued pre-training.
This model is the final assignment for the LLM2024 summer school.
## Model Details
* This model takes on the pre-training exercise from Day 2 while searching for the best Training Args by trial and error.
* The sample code used the "ichikara-instruction-003-001-1" dataset; to simply increase the amount of training data, the base data was augmented as follows.
1) Added questions produced by Japanese → English → Japanese round-trip translation with facebook/mbart-large-50-many-to-many-mmt
def translate_to_en_and_back_to_ja_with_mbart(paragraph):
    try:
        # Japanese → English
        en_translation = translate_mbart(paragraph, src_lang="ja_XX", tgt_lang="en_XX")
        # English → Japanese
        jp_translation = translate_mbart(en_translation, src_lang="en_XX", tgt_lang="ja_XX")
        return jp_translation
    except Exception as e:
        print(f"Error during translation: {e}")
        return paragraph  # on error, keep the original sentence
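The `translate_mbart` helper called above is referenced but never defined in this card. A minimal sketch of what it could look like — an assumption, not the author's code — using the `facebook/mbart-large-50-many-to-many-mmt` checkpoint named earlier:

```python
_mbart_cache = {}

def translate_mbart(text, src_lang, tgt_lang):
    # Hypothetical implementation; the card does not show the actual helper.
    from transformers import MBart50TokenizerFast, MBartForConditionalGeneration  # lazy import
    if "model" not in _mbart_cache:
        name = "facebook/mbart-large-50-many-to-many-mmt"
        _mbart_cache["tokenizer"] = MBart50TokenizerFast.from_pretrained(name)
        _mbart_cache["model"] = MBartForConditionalGeneration.from_pretrained(name)
    tok, model = _mbart_cache["tokenizer"], _mbart_cache["model"]
    tok.src_lang = src_lang
    inputs = tok(text, return_tensors="pt")
    generated = model.generate(**inputs, forced_bos_token_id=tok.lang_code_to_id[tgt_lang])
    return tok.batch_decode(generated, skip_special_tokens=True)[0]
```

The lazy import keeps the snippet cheap to define; the checkpoint is only downloaded on the first call.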
2) Added questions by changing sentence endings
# Change the sentence-ending expression
def change_sentence_ending(sentence):
    endings = [
        "。補足や関連情報があればそれ含め、詳しく教えてください。",
        "。詳細を説明してほしいです。",
        "。10歳にもわかるように、平易な説明を心がけてください。",
        "。可能な限り端的に答えてください。",
        "。関連する情報があれば提供をお願いします。"
    ]
    base = sentence.rstrip("。?")
    return base + random.choice(endings)
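A quick standalone check of the ending swap (with a fixed seed and a shortened endings list, purely for illustration):

```python
import random

random.seed(0)
endings = ["。詳細を説明してほしいです。", "。可能な限り端的に答えてください。"]

def change_sentence_ending(sentence):
    # Strip a trailing 。 or ? , then append a randomly chosen new ending.
    base = sentence.rstrip("。?")
    return base + random.choice(endings)

out = change_sentence_ending("日本の首都はどこですか?")
print(out.startswith("日本の首都はどこですか"))  # → True
print(any(out.endswith(e) for e in endings))     # → True
```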
These transformed questions were generated while filtering for strings that look like Japanese sentences, simply tripling the number of questions used for training.
from tqdm import tqdm  # needed for the progress bars below

def process_batch(batch):
    original_questions = filter_data(batch["text"])  # apply filtering
    original_outputs = batch["output"]
    if not original_questions:
        print("No valid data to process.")
        return []
    # Questions round-trip translated with mbart
    translated_questions = [translate_to_en_and_back_to_ja_with_mbart(q) for q in tqdm(original_questions, desc="Translating questions")]
    # Questions with changed sentence endings
    revised_questions = [change_sentence_ending(q) for q in original_questions]
    min_length = min(len(original_questions), len(original_outputs), len(translated_questions), len(revised_questions))
    original_questions = original_questions[:min_length]
    original_outputs = original_outputs[:min_length]
    translated_questions = translated_questions[:min_length]
    revised_questions = revised_questions[:min_length]
    formatted = []
    for i in range(min_length):
        formatted.append({"input": original_questions[i], "output": original_outputs[i]})    # original question, original answer
        formatted.append({"input": translated_questions[i], "output": original_outputs[i]})  # mbart round-trip question, original answer
        formatted.append({"input": revised_questions[i], "output": original_outputs[i]})     # ending-changed question, original answer
    return formatted
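`filter_data` is also referenced but not defined in the card; the surrounding text only says it keeps strings that look like Japanese sentences. One plausible heuristic — an assumption, not the author's code — is:

```python
import re

_KANA = re.compile(r"[\u3040-\u30ff]")  # hiragana + katakana ranges

def filter_data(texts, min_len=10):
    # Hypothetical filter: keep strings that are reasonably long and contain Japanese kana.
    return [t for t in texts if len(t) >= min_len and _KANA.search(t)]

print(filter_data(["これは日本語の設問ですか?", "short", "English only sentence here"]))
# → ['これは日本語の設問ですか?']
```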
* Regarding the Args, training was run several times and the outputs were scored; the values below produced the best score.
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset["train"],
    max_seq_length = 256,  # Further reduced for memory optimization
    dataset_text_field = "formatted_text",
    packing = False,
    args = TrainingArguments(
        per_device_train_batch_size = 4,   # Further reduced for memory optimization
        gradient_accumulation_steps = 16,  # Increased to compensate for smaller batch size
        num_train_epochs = 3,              # Increased to 3 for better generalization
        logging_steps = 5,                 # Frequent logging for small dataset
        warmup_steps = 10,
        save_steps = 100,
        save_total_limit = 2,
        max_steps = -1,
        learning_rate = 5e-5,              # Reduced learning rate for stability
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        group_by_length = True,
        seed = 3407,
        output_dir = "outputs",
        eval_steps = 25,                   # Adjusted evaluation interval
        report_to = "none",
    ),
)
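A quick sanity check on these Args: the effective batch size per optimizer step can be computed (assuming single-GPU training, which the card does not state explicitly):

```python
# Effective batch size implied by the TrainingArguments above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 16
num_devices = 1  # assumption: single-GPU training
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(effective_batch_size)  # → 64
```

Gradient accumulation keeps the memory footprint of a batch of 4 while training with an effective batch of 64.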
* Those are the training-side improvements. On top of them, to raise the model's evaluated score, logic was added to have the model run a why-why analysis and to self-score its outputs, regenerating any answer whose score is too low.
# Switch the model to inference mode
FastLanguageModel.for_inference(model)
import time
from tqdm import tqdm  # needed by the loop below
def why_why_analysis(output, depth=2):
    """
    Perform a multi-level "why-why" analysis (depth levels, default 2) on the given output.
    """
    analysis = []
    current = output
    for level in range(1, depth + 1):
        why_reason = f"Why level {level}: Analysis of '{current}'"
        analysis.append(why_reason)
        current = why_reason
    return analysis
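For reference, a standalone run of the same logic shows that the analysis returns one entry per level, each wrapping the previous one:

```python
def why_why_analysis(output, depth=2):
    # Same logic as in the card, reproduced so the snippet runs standalone.
    analysis = []
    current = output
    for level in range(1, depth + 1):
        why_reason = f"Why level {level}: Analysis of '{current}'"
        analysis.append(why_reason)
        current = why_reason
    return analysis

result = why_why_analysis("score too low", depth=3)
print(len(result))  # → 3
print(result[0])    # → Why level 1: Analysis of 'score too low'
```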
def feedback_loop(model, datasets, tokenizer, max_iterations=2, batch_size=5):
    """
    Implements a feedback loop in which model outputs are evaluated and
    low-scoring ones are refined, over a fixed number of iterations.
    Processes datasets in batches to optimize performance and time tracking.
    """
    refined_results = []
    total_time = 0
    num_batches = (len(datasets) + batch_size - 1) // batch_size
    for iteration in range(max_iterations):
        print(f"Iteration {iteration + 1}/{max_iterations}")
        for batch_idx in range(num_batches):
            batch = datasets[batch_idx * batch_size:(batch_idx + 1) * batch_size]
            start_time = time.time()
            results = []
            for dt in tqdm(batch, desc=f"Processing batch {batch_idx + 1}/{num_batches}"):
                input_text = dt["input"]
                prompt = f"""### 指示\n{input_text}\n### 回答\n"""
                inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
                outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
                prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]
                # Apply why-why analysis
                why_analysis = why_why_analysis(prediction)
                # Simulate evaluation metric
                score = mock_evaluation_metric(prediction)
                if score < 0.8:
                    print(f"Refining output for task_id {dt['task_id']}\nWhy Analysis: {why_analysis}")
                    prediction = f"Refined: {prediction}"
                results.append({"task_id": dt["task_id"], "input": input_text, "output": prediction})
            refined_results.extend(results)  # Append batch results
            batch_time = time.time() - start_time
            total_time += batch_time
            print(f"Batch {batch_idx + 1}/{num_batches} completed in {batch_time:.2f} seconds.")
            avg_time_per_batch = total_time / (num_batches * (iteration + 1))
            remaining_batches = (num_batches * (max_iterations - iteration - 1))
            print(f"Estimated remaining time: {avg_time_per_batch * remaining_batches:.2f} seconds.")
    return refined_results
def mock_evaluation_metric(output):
    """Mock evaluation metric: randomly returns scores."""
    import random
    return random.uniform(0.5, 1.0)
# Run the feedback loop
print("Starting inference with feedback loop...")
final_results = feedback_loop(model, datasets, tokenizer, batch_size=10)
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
llm-jp-3-13b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
task
|
[
"TRANSLATION"
] | 44,362 |
rzeydelis/discord-crypto-scam-detector
|
rzeydelis
|
text-classification
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:discord",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-14T02:36:16Z |
2023-11-28T03:20:44+00:00
| 43 | 2 |
---
base_model: bert-base-cased
datasets:
- discord
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: discord-crypto-scam-detector
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: discord-crypto
type: discord
args: 'config: en'
metrics:
- type: accuracy
value: 0.6666666666666666
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discord-crypto-scam-detector
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the discord-crypto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7261
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discord-crypto-scam-detector
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the discord-crypto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7261
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
{"base_model": "bert-base-cased", "datasets": ["discord"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "discord-crypto-scam-detector", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "discord-crypto", "type": "discord", "args": "config: en"}, "metrics": [{"type": "accuracy", "value": 0.6666666666666666, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,363 |
squeezebert/squeezebert-mnli-headless
|
squeezebert
| null |
[
"transformers",
"pytorch",
"squeezebert",
"arxiv:2006.11316",
"arxiv:1904.00962",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2020-12-11T22:02:10+00:00
| 313 | 0 |
---
{}
---
language: en
license: bsd
datasets:
- bookcorpus
- wikipedia
---
# SqueezeBERT pretrained model
This model, `squeezebert-mnli-headless`, has been pretrained for the English language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives and finetuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) dataset. This is a "headless" model with the final classification layer removed, which allows Transformers to automatically reinitialize that layer before you begin finetuning on your data.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.
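The savings come from the grouping: a grouped pointwise convolution only connects channels within each group, dividing the weight count by the number of groups. For a BERT-base-like width of 768 channels (the group count of 4 below is illustrative; see the paper for the configurations actually used):

```python
# Parameter count of a pointwise (kernel-size-1) convolution layer,
# dense vs grouped (weights + bias).
c_in = c_out = 768
groups = 4  # illustrative; the SqueezeBERT paper specifies the real group counts
dense_params = c_in * c_out + c_out
grouped_params = (c_in // groups) * c_out + c_out
print(dense_params, grouped_params)  # → 590592 148224
```

Roughly a 4x reduction in parameters (and FLOPs) for that layer, at the cost of no cross-group mixing.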
## Pretraining
### Pretraining data
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
### Pretraining procedure
The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks.
(Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.)
From the SqueezeBERT paper:
> We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512.
## Finetuning
The SqueezeBERT paper presents 2 approaches to finetuning the model:
- "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task
- "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on a MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model.
A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316).
Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - [email protected]) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation.
This model, `squeezebert/squeezebert-mnli-headless`, is the "finetuned with bells and whistles" MNLI-finetuned SqueezeBERT model. In this particular model, we have removed the final classification layer -- in other words, it is "headless." We recommend using this model if you intend to finetune the model on your own data. Using this model means that your final layer will automatically be reinitialized when you start finetuning on your data.
### How to finetune
To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command:
```
./utils/download_glue_data.py
python examples/text-classification/run_glue.py \
--model_name_or_path squeezebert-base-headless \
--task_name mrpc \
--data_dir ./glue_data/MRPC \
--output_dir ./models/squeezebert_mrpc \
--overwrite_output_dir \
--do_train \
--do_eval \
--num_train_epochs 10 \
--learning_rate 3e-05 \
--per_device_train_batch_size 16 \
--save_steps 20000
```
## BibTeX entry and citation info
```
@article{2020_SqueezeBERT,
author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. Keutzer},
title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?},
journal = {arXiv:2006.11316},
year = {2020}
}
```
| null |
Non_BioNLP
|
language: en
license: bsd
datasets:
- bookcorpus
- wikipedia
---
# SqueezeBERT pretrained model
This model, `squeezebert-mnli-headless`, has been pretrained for the English language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives and finetuned on the [Multi-Genre Natural Language Inference (MNLI)](https://cims.nyu.edu/~sbowman/multinli/) dataset. This is a "headless" model with the final classification layer removed, which allows Transformers to automatically reinitialize that layer before you begin finetuning on your data.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.
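The savings come from the grouping: a grouped pointwise convolution only connects channels within each group, dividing the weight count by the number of groups. For a BERT-base-like width of 768 channels (the group count of 4 below is illustrative; see the paper for the configurations actually used):

```python
# Parameter count of a pointwise (kernel-size-1) convolution layer,
# dense vs grouped (weights + bias).
c_in = c_out = 768
groups = 4  # illustrative; the SqueezeBERT paper specifies the real group counts
dense_params = c_in * c_out + c_out
grouped_params = (c_in // groups) * c_out + c_out
print(dense_params, grouped_params)  # → 590592 148224
```

Roughly a 4x reduction in parameters (and FLOPs) for that layer, at the cost of no cross-group mixing.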
## Pretraining
### Pretraining data
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
### Pretraining procedure
The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks.
(Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.)
From the SqueezeBERT paper:
> We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512.
## Finetuning
The SqueezeBERT paper presents 2 approaches to finetuning the model:
- "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task
- "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on a MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model.
A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316).
Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - [email protected]) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation.
This model, `squeezebert/squeezebert-mnli-headless`, is the "finetuned with bells and whistles" MNLI-finetuned SqueezeBERT model. In this particular model, we have removed the final classification layer -- in other words, it is "headless." We recommend using this model if you intend to finetune the model on your own data. Using this model means that your final layer will automatically be reinitialized when you start finetuning on your data.
### How to finetune
To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command:
```
./utils/download_glue_data.py
python examples/text-classification/run_glue.py \
--model_name_or_path squeezebert-base-headless \
--task_name mrpc \
--data_dir ./glue_data/MRPC \
--output_dir ./models/squeezebert_mrpc \
--overwrite_output_dir \
--do_train \
--do_eval \
--num_train_epochs 10 \
--learning_rate 3e-05 \
--per_device_train_batch_size 16 \
--save_steps 20000
```
## BibTeX entry and citation info
```
@article{2020_SqueezeBERT,
author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. Keutzer},
title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?},
journal = {arXiv:2006.11316},
year = {2020}
}
```
|
{}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,364 |
nallarahul/NewsGaurd
|
nallarahul
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"fake-news-detection",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-25T18:14:38Z |
2025-03-07T12:37:03+00:00
| 25 | 0 |
---
language: en
license: apache-2.0
tags:
- fake-news-detection
- bert
- text-classification
- transformers
---
# NewsGuard AI - Fake News Detection Model
This model is a fine-tuned **BERT-base-uncased** model for detecting fake news. It is trained using the **FakeNewsNet** dataset.
## Model Details
- **Base Model:** BERT-base-uncased
- **Task:** Text Classification (Fake vs. Real News)
- **Dataset:** FakeNewsNet (GossipCop & PolitiFact)
- **Training Framework:** Hugging Face Transformers
- **Metrics:** Accuracy, Precision, Recall
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model_path = "your-huggingface-username/newsguard-ai-fake-news"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
text = "Some news article text here..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
prediction = "Fake" if torch.argmax(probs) == 0 else "Real"
print(f"Prediction: {prediction}, Confidence: {probs.tolist()[0]}")
```
| null |
Non_BioNLP
|
# NewsGuard AI - Fake News Detection Model
This model is a fine-tuned **BERT-base-uncased** model for detecting fake news. It is trained using the **FakeNewsNet** dataset.
## Model Details
- **Base Model:** BERT-base-uncased
- **Task:** Text Classification (Fake vs. Real News)
- **Dataset:** FakeNewsNet (GossipCop & PolitiFact)
- **Training Framework:** Hugging Face Transformers
- **Metrics:** Accuracy, Precision, Recall
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model_path = "your-huggingface-username/newsguard-ai-fake-news"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
text = "Some news article text here..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
prediction = "Fake" if torch.argmax(probs) == 0 else "Real"
print(f"Prediction: {prediction}, Confidence: {probs.tolist()[0]}")
```
|
{"language": "en", "license": "apache-2.0", "tags": ["fake-news-detection", "bert", "text-classification", "transformers"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,365 |
ell-hol/mT5-OrangeSum
|
ell-hol
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:ell-hol/autotrain-data-test-orangesum",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-27T22:06:22Z |
2023-02-08T14:34:07+00:00
| 34 | 1 |
---
datasets:
- ell-hol/autotrain-data-test-orangesum
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 675.7789931017469
model-index:
- name: ell-hol/mT5-OrangeSum
results:
- task:
type: summarization
name: Summarization
dataset:
name: orange_sum
type: orange_sum
config: abstract
split: validation
metrics:
- type: rouge
value: 33.377
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhjMWIxYmNmNDYzNTMzMDM2YjQyOTdkYjYyMDJkZDhlNzQ2ZDVkNGM2YTIzODU4ZWYwZDg2ODZkN2U5OTk2MSIsInZlcnNpb24iOjF9.UL_nv_GGJ75LMgDmRjvrp0dYhCyjz-h5txS1ljDFS7k9Yy6iJ0QnTebou1tsLFtj7sBSvUKvZeyqFXEHN7SBCg
- type: rouge
value: 14.4472
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTYxZTVkMzFlMGUxMWNmNzc5ZDI0OWM3ODY2ZTc1MDg2MDc2NTRiZjM3OTA4NGI1MmEwNzQzMjQyOWM5NDE3YiIsInZlcnNpb24iOjF9.xsBp4kyHAnAnAWllwvcXNF3vFFbgP_3Ipplg0Cs8yMzY2qIKozlflWSpmm7qyru1RvtDrHH5JQy0hSSz49tMDQ
- type: rouge
value: 24.1902
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDNmODZiOTcxYmU0NjlkMjEzOTBmZjZhMzkxZDcyODNjYmJjOGNiNzA2MTI2YjU4MTUzZTFlM2EwYjRkNyIsInZlcnNpb24iOjF9.QE9X1gqHxDA_Vzj86nOi1FrYXrvvYR-uQgAKn2ESJp48mnT4rHCnpxVo3qJGXcoeD0vA0M9VDWJzc2pci34PBA
- type: rouge
value: 25.5277
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDk2YzY1NjU3NDgxMDllYjIwMGI5NGE2ZjY3NzcxZGEwNmYzYjQxYzVlZTdmYzdkYWIxM2Y1YjkxNjZhOWRlZiIsInZlcnNpb24iOjF9.ksd-KgRtY71cHJxFsqLWr5lofRSrfiwixGTI6Hek6GvfisssetoDPy17bWnQpUqfN0ozxJciw2VzpauYPDuZCg
- type: loss
value: 1.6347737312316895
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNmODJhNzdmMzNkMTc4MDcwZDhmNDFiZjM1ZWVmYjQ4N2IzNWU3MjYwMWM4ZmM0NjFhNjY1OTBlZjBkMjY0YSIsInZlcnNpb24iOjF9.aaF2D-cKnhK4YaqFV23QhoiTCOK7rQJKoXJMMj-kuxe_NLQBLNj73LBou376IlsTmOxxk_mmEimzwMMbTiVSDA
- type: gen_len
value: 48.4967
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzk3YjMxZWY2NzE5ZWMxZjBhYmE5YzU2YTM3MzNmMjlmNmJjM2MyMzY4ZTE1MjI1ZTNkN2YxOWZhOThmYzljMyIsInZlcnNpb24iOjF9._I_I9B66dT3S8RMMmMACG3YjIQYcXzmodriDWM33jRa4X6NFQx0b6_YHNP7K-uLEm8qD31bgb0NlsaRA37qLBA
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2638979565
- CO2 Emissions (in grams): 675.7790
## Validation Metrics
- Loss: 1.631
- Rouge1: 33.348
- Rouge2: 14.481
- RougeL: 24.210
- RougeLsum: 25.514
- Gen Len: 48.497
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ell-hol/autotrain-test-orangesum-2638979565
```
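The same request can be issued from Python. The sketch below (the `build_request` helper is ours, not part of the model's release) assembles the exact headers and JSON payload used by the cURL call above:

```python
import json

API_URL = ("https://api-inference.huggingface.co/"
           "ell-hol/autotrain-test-orangesum-2638979565")

def build_request(text, api_key):
    """Build the headers and JSON body for the Inference API call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": text})
    return headers, body

headers, body = build_request("I love AutoTrain", "YOUR_HUGGINGFACE_API_KEY")
# The pair can be passed to any HTTP client,
# e.g. requests.post(API_URL, headers=headers, data=body)
```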
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2638979565
- CO2 Emissions (in grams): 675.7790
## Validation Metrics
- Loss: 1.631
- Rouge1: 33.348
- Rouge2: 14.481
- RougeL: 24.210
- RougeLsum: 25.514
- Gen Len: 48.497
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ell-hol/autotrain-test-orangesum-2638979565
```
|
{"datasets": ["ell-hol/autotrain-data-test-orangesum"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 675.7789931017469}, "model-index": [{"name": "ell-hol/mT5-OrangeSum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "orange_sum", "type": "orange_sum", "config": "abstract", "split": "validation"}, "metrics": [{"type": "rouge", "value": 33.377, "name": "ROUGE-1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhjMWIxYmNmNDYzNTMzMDM2YjQyOTdkYjYyMDJkZDhlNzQ2ZDVkNGM2YTIzODU4ZWYwZDg2ODZkN2U5OTk2MSIsInZlcnNpb24iOjF9.UL_nv_GGJ75LMgDmRjvrp0dYhCyjz-h5txS1ljDFS7k9Yy6iJ0QnTebou1tsLFtj7sBSvUKvZeyqFXEHN7SBCg"}, {"type": "rouge", "value": 14.4472, "name": "ROUGE-2", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTYxZTVkMzFlMGUxMWNmNzc5ZDI0OWM3ODY2ZTc1MDg2MDc2NTRiZjM3OTA4NGI1MmEwNzQzMjQyOWM5NDE3YiIsInZlcnNpb24iOjF9.xsBp4kyHAnAnAWllwvcXNF3vFFbgP_3Ipplg0Cs8yMzY2qIKozlflWSpmm7qyru1RvtDrHH5JQy0hSSz49tMDQ"}, {"type": "rouge", "value": 24.1902, "name": "ROUGE-L", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDNmODZiOTcxYmU0NjlkMjEzOTBmZjZhMzkxZDcyODNjYmJjOGNiNzA2MTI2YjU4MTUzZTFlM2EwYjRkNyIsInZlcnNpb24iOjF9.QE9X1gqHxDA_Vzj86nOi1FrYXrvvYR-uQgAKn2ESJp48mnT4rHCnpxVo3qJGXcoeD0vA0M9VDWJzc2pci34PBA"}, {"type": "rouge", "value": 25.5277, "name": "ROUGE-LSUM", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDk2YzY1NjU3NDgxMDllYjIwMGI5NGE2ZjY3NzcxZGEwNmYzYjQxYzVlZTdmYzdkYWIxM2Y1YjkxNjZhOWRlZiIsInZlcnNpb24iOjF9.ksd-KgRtY71cHJxFsqLWr5lofRSrfiwixGTI6Hek6GvfisssetoDPy17bWnQpUqfN0ozxJciw2VzpauYPDuZCg"}, {"type": "loss", "value": 1.6347737312316895, "name": "loss", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNmODJhNzdmMzNkMTc4MDcwZDhmNDFiZjM1ZWVmYjQ4N2IzNWU3MjYwMWM4ZmM0NjFhNjY1OTBlZjBkMjY0YSIsInZlcnNpb24iOjF9.aaF2D-cKnhK4YaqFV23QhoiTCOK7rQJKoXJMMj-kuxe_NLQBLNj73LBou376IlsTmOxxk_mmEimzwMMbTiVSDA"}, {"type": "gen_len", "value": 48.4967, "name": "gen_len", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzk3YjMxZWY2NzE5ZWMxZjBhYmE5YzU2YTM3MzNmMjlmNmJjM2MyMzY4ZTE1MjI1ZTNkN2YxOWZhOThmYzljMyIsInZlcnNpb24iOjF9._I_I9B66dT3S8RMMmMACG3YjIQYcXzmodriDWM33jRa4X6NFQx0b6_YHNP7K-uLEm8qD31bgb0NlsaRA37qLBA"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 44,366 |
mrm8488/bloom-560m-finetuned-wikilingua-spanish-summarization
|
mrm8488
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"summarization",
"Spanish summarizing",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-09-30T10:56:22Z |
2023-03-17T00:51:03+00:00
| 22 | 2 |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
- summarization
- Spanish summarizing
langs:
- es
model-index:
- name: bloom-560m-finetuned-wikilingua-spanish-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m-finetuned-wikilingua-spanish-summarization
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
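For reference, the hyperparameters above map onto a plain configuration dict (key names follow the Hugging Face `TrainingArguments` convention; this is a sketch, not the original training script):

```python
training_config = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 1,
    "per_device_eval_batch_size": 2,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "num_train_epochs": 2,
    "lr_scheduler_type": "linear",
    "fp16": True,  # Native AMP mixed precision
}

# Total train batch size = per-device batch * gradient accumulation steps
total_train_batch_size = (training_config["per_device_train_batch_size"]
                          * training_config["gradient_accumulation_steps"])
print(total_train_batch_size)  # 4, matching the value listed above
```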
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4768 | 0.07 | 500 | 2.4828 |
| 2.428 | 0.14 | 1000 | 2.4125 |
| 2.4 | 0.2 | 1500 | 2.3927 |
| 2.3685 | 0.27 | 2000 | 2.3506 |
| 2.3287 | 0.34 | 2500 | 2.3340 |
| 2.3196 | 0.41 | 3000 | 2.3284 |
| 2.2885 | 0.48 | 3500 | 2.3005 |
| 2.2646 | 0.55 | 4000 | 2.2944 |
| 2.2676 | 0.68 | 5000 | 2.2575 |
| 2.2267 | 0.82 | 6000 | 2.2281 |
| 2.1971 | 0.95 | 7000 | 2.2018 |
| 2.009 | 1.09 | 8000 | 2.1925 |
| 1.9989 | 1.23 | 9000 | 2.1765 |
| 2.0131 | 1.36 | 10000 | 2.1666 |
| 1.9765 | 1.5 | 11000 | 2.1514 |
| 1.9449 | 1.64 | 12000 | 2.1404 |
| 1.9399 | 1.77 | 13000 | 2.1297 |
| 1.957 | 1.91 | 14000 | 2.1223 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.13.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m-finetuned-wikilingua-spanish-summarization
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4768 | 0.07 | 500 | 2.4828 |
| 2.428 | 0.14 | 1000 | 2.4125 |
| 2.4 | 0.2 | 1500 | 2.3927 |
| 2.3685 | 0.27 | 2000 | 2.3506 |
| 2.3287 | 0.34 | 2500 | 2.3340 |
| 2.3196 | 0.41 | 3000 | 2.3284 |
| 2.2885 | 0.48 | 3500 | 2.3005 |
| 2.2646 | 0.55 | 4000 | 2.2944 |
| 2.2676 | 0.68 | 5000 | 2.2575 |
| 2.2267 | 0.82 | 6000 | 2.2281 |
| 2.1971 | 0.95 | 7000 | 2.2018 |
| 2.009 | 1.09 | 8000 | 2.1925 |
| 1.9989 | 1.23 | 9000 | 2.1765 |
| 2.0131 | 1.36 | 10000 | 2.1666 |
| 1.9765 | 1.5 | 11000 | 2.1514 |
| 1.9449 | 1.64 | 12000 | 2.1404 |
| 1.9399 | 1.77 | 13000 | 2.1297 |
| 1.957 | 1.91 | 14000 | 2.1223 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.13.0
|
{"license": "bigscience-bloom-rail-1.0", "tags": ["generated_from_trainer", "summarization", "Spanish summarizing"], "langs": ["es"], "model-index": [{"name": "bloom-560m-finetuned-wikilingua-spanish-summarization", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 44,367 |
flexudy/t5-base-multi-sentence-doctor
|
flexudy
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2020-12-11T23:33:25+00:00
| 362 | 45 |
---
{}
---

# Sentence-Doctor
Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.
## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**.
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.
## 2. Solution:
Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:
* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.
## 3. Use Cases:
* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
* Example (in German): **Input: "und ich bin im**",
* Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
* Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- Although this is not the intended use.
* Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".
## 4. Disclaimer
Note how we always emphasise the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De).
Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.
## 5. Datasets
We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng, by randomly applying different transformations to words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language).
## 6. Usage
### 6.1 Preprocessing
* Let us assume we have the following text (Note that there are no punctuation marks in the text):
```python
text = "That is my job I am a medical doctor I save lives"
```
* You decided to extract the sentences and, for some obscure reason, you obtained these sentences:
```python
sentences = ["That is my job I a", "m a medical doct", "I save lives"]
```
* You now wish to correct the sentence **"m a medical doct"**.
Here is the single preprocessing step for the model:
```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```
**Explanation**:</br>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair **sentence[1]** which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
* To do that, we append the keyword "context: "
* Append **{sentence[0]}** "{That is my job I a}". (Note how it is sourrounded by curly braces).
* Append **{sentence[2]}** "{I save lives}".
* At last we tell the model this is the end of the input with </s>.
```python
print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```
<br/>
**The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>```
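The string assembly above can be wrapped in a small helper. This is a sketch of ours (`build_repair_input` is not part of the released code), but it reproduces the exact format described:

```python
def build_repair_input(sentence, prefix_context="", postfix_context=""):
    """Assemble the model input in the format described above.

    Both context arguments may be empty, matching the note that the
    context is optional.
    """
    return (
        "repair_sentence: " + sentence
        + " context: {" + prefix_context + "}{" + postfix_context + "} </s>"
    )

print(build_repair_input("m a medical doct",
                         "That is my job I a",
                         "or I save lives"))
# repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```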
### 6.2 Inference
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
assert sentence == "I am a medical doctor."
```
## 7. Fine-tuning
We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments throughout to help you train with ease. For example:
```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```
If you don't want to read the #TODO comments, just pass in your data like this
```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```
and voila!! Please feel free to correct any mistakes in the code and make a pull request.
## 8. Attribution
* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
| null |
Non_BioNLP
|

# Sentence-Doctor
Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text.
## 1. Problem:
Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection**.
As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input.
## 2. Solution:
Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward:
* `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`.
## 3. Use Cases:
* Attempt to repair noisy sentences that were extracted with OCR software or text extractors.
* Attempt to repair sentence boundaries.
* Example (in German): **Input: "und ich bin im**",
* Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren."
* Output: "John und ich bin im Jahr 1990 geboren"
* Possibly sentence level spelling correction -- Although this is not the intended use.
* Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday".
## 4. Disclaimer
Note how we always emphasise the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De).
Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data.
## 5. Datasets
We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng, by randomly applying different transformations to words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language).
## 6. Usage
### 6.1 Preprocessing
* Let us assume we have the following text (Note that there are no punctuation marks in the text):
```python
text = "That is my job I am a medical doctor I save lives"
```
* You decided to extract the sentences and, for some obscure reason, you obtained these sentences:
```python
sentences = ["That is my job I a", "m a medical doct", "I save lives"]
```
* You now wish to correct the sentence **"m a medical doct"**.
Here is the single preprocessing step for the model:
```python
input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>"
```
**Explanation**:</br>
* We are telling the model to repair the sentence with the prefix "repair_sentence: "
* Then append the sentence we want to repair **sentence[1]** which is "m a medical doct"
* Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text.
* To do that, we append the keyword "context: "
* Append **{sentence[0]}** "{That is my job I a}". (Note how it is sourrounded by curly braces).
* Append **{sentence[2]}** "{I save lives}".
* At last we tell the model this is the end of the input with </s>.
```python
print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>
```
<br/>
**The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>```
### 6.2 Inference
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor")
input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids, max_length=32, num_beams=1)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
assert sentence == "I am a medical doctor."
```
## 7. Fine-tuning
We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text task with T5. We added #TODO comments throughout to help you train with ease. For example:
```python
# TODO Set your training epochs
config.TRAIN_EPOCHS = 3
```
If you don't want to read the #TODO comments, just pass in your data like this
```python
# TODO Where is your data ? Enter the path
trainer.start("data/sentence_doctor_dataset_300.csv")
```
and voila!! Please feel free to correct any mistakes in the code and make a pull request.
## 8. Attribution
* [Huggingface](https://huggingface.co/) transformer lib for making this possible
* Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks.
* We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum)
* We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj)
* No one has been forgotten, hopefully :)
|
{}
|
task
|
[
"SUMMARIZATION"
] | 44,368 |
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-24T13:27:33Z |
2024-05-24T13:28:08+00:00
| 10 | 0 |
---
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
{"datasets": ["fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-67198", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,369 |
openpecha/speecht5-tts-01
|
openpecha
|
text-to-speech
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"speecht5",
"text-to-audio",
"audio",
"text-to-speech",
"dataset:libritts",
"arxiv:2110.07205",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-09-11T16:22:24Z |
2024-11-01T05:49:56+00:00
| 60 | 1 |
---
datasets:
- libritts
license: mit
tags:
- audio
- text-to-speech
---
# SpeechT5 (TTS task)
SpeechT5 model fine-tuned for speech synthesis (text-to-speech) on LibriTTS.
This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-tts). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE).
## Model Description
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
- **Developed by:** Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
- **Shared by [optional]:** [Matthijs Hollemans](https://huggingface.co/Matthijs)
- **Model type:** text-to-speech
- **Language(s) (NLP):** [More Information Needed]
- **License:** [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE)
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/microsoft/SpeechT5/]
- **Paper:** [https://arxiv.org/pdf/2110.07205.pdf]
- **Blog Post:** [https://huggingface.co/blog/speecht5]
- **Demo:** [https://huggingface.co/spaces/Matthijs/speecht5-tts-demo]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model for speech synthesis. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you.
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started With the Model
Use the code below to convert text into a mono 16 kHz speech waveform.
```python
# Following pip packages need to be installed:
# !pip install git+https://github.com/huggingface/transformers sentencepiece datasets
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from datasets import load_dataset
import torch
import soundfile as sf
from datasets import load_dataset
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
# load xvector containing speaker's voice characteristics from a dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```
### Fine-tuning the Model
Refer to [this Colab notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ) for an example of how to fine-tune SpeechT5 for TTS on a different dataset or a new language.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
LibriTTS
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text.
### Training hyperparameters
- **Precision:** [More Information Needed] <!--fp16, bf16, fp8, fp32 -->
- **Regime:** [More Information Needed] <!--mixed precision or not -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets.
After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
```
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
- **text-to-speech**: synthesizing speech audio from text
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
# Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# SpeechT5 (TTS task)
SpeechT5 model fine-tuned for speech synthesis (text-to-speech) on LibriTTS.
This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-tts). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE).
## Model Description
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
- **Developed by:** Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
- **Shared by [optional]:** [Matthijs Hollemans](https://huggingface.co/Matthijs)
- **Model type:** text-to-speech
- **Language(s) (NLP):** [More Information Needed]
- **License:** [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE)
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** <https://github.com/microsoft/SpeechT5/>
- **Paper:** <https://arxiv.org/pdf/2110.07205.pdf>
- **Blog Post:** <https://huggingface.co/blog/speecht5>
- **Demo:** <https://huggingface.co/spaces/Matthijs/speecht5-tts-demo>
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model for speech synthesis. See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you.
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started With the Model
Use the code below to convert text into a mono 16 kHz speech waveform.
```python
# Following pip packages need to be installed:
# !pip install git+https://github.com/huggingface/transformers sentencepiece datasets
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from datasets import load_dataset
import torch
import soundfile as sf
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
# load xvector containing speaker's voice characteristics from a dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```
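Since the generated waveform is mono audio at 16 kHz, its length in samples maps directly to playback duration. A small standalone sketch (the sample counts below are hypothetical, not outputs of the model):

```python
SAMPLE_RATE = 16_000  # mono 16 kHz, matching the samplerate passed to sf.write above

def duration_seconds(num_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Duration in seconds of a mono waveform with the given sample count."""
    return num_samples / sample_rate

# A 2-second utterance at 16 kHz contains 32,000 samples.
print(duration_seconds(32_000))  # 2.0
```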
### Fine-tuning the Model
Refer to [this Colab notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ) for an example of how to fine-tune SpeechT5 for TTS on a different dataset or a new language.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
LibriTTS
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text.
### Training hyperparameters
- **Precision:** [More Information Needed] <!--fp16, bf16, fp8, fp32 -->
- **Regime:** [More Information Needed] <!--mixed precision or not -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets.
After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
```
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
- **text-to-speech**: synthesizing speech audio from text
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
# Model Card Contact
[More Information Needed]
|
{"datasets": ["libritts"], "license": "mit", "tags": ["audio", "text-to-speech"]}
|
task
|
[
"TRANSLATION"
] | 44,370 |
M-CLIP/Swedish-500k
|
M-CLIP
|
feature-extraction
|
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"sv",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2022-09-15T10:45:31+00:00
| 26 | 1 |
---
language: sv
---
<br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-500k')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
| null |
Non_BioNLP
|
<br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-500k')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
|
{"language": "sv"}
|
task
|
[
"TRANSLATION"
] | 44,371 |
wwydmanski/modernbert-bio-v0.1
|
wwydmanski
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100006",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-22T20:51:17Z |
2024-12-22T20:51:45+00:00
| 34 | 0 |
---
base_model: answerdotai/ModernBERT-base
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100006
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: how much weight can you lose in a week healthy?
sentences:
- Biology
- 'Summary: According to experts, losing 1–2 pounds (0.45–0.9 kg) per week is a
healthy and safe rate, while losing more than this is considered too fast. However,
you may lose more than that during your first week of an exercise or diet plan.'
- The number of valence electrons is the number of electrons in the outer shell,
that the atom uses for bonding. Nitrogen has 5 electrons in its n=2 (outer) shell.
- source_sentence: how long after having a baby can i get a tattoo?
sentences:
- It is suggested that mothers wait at least until 9-12 months after birth, when
the child is no longer dependent solely on breastmilk before getting a tattoo.
Reputable tattoo artists will have a waiver for the client to sign that asks about
pregnancy and breastfeeding.
- Medicine
- Americans on average are down to 44 gallons of soda per year, and up to about
58 gallons of water. That's 7,242 ounces of water annually -- 20 ounces daily,
which is 2.5 cups.
- source_sentence: is all uhmw anti static?
sentences:
- The bacteria Streptococcus pyogenes causes it. It's most common in infants and
children, but it frequently occurs in teenagers and adults as well. It causes
white streaks or spots in the throat.
- Chemistry
- UHMW is available in a special anti-static grade that helps protect against EsD
(static discharge) or to help keep dust and particles from building up on the
product surface. The anti-static additives are built-in so the anti-static properties
will last throughout the life of the material.
- source_sentence: is closing cost tax deductible?
sentences:
- Medicine
- 1 tablespoon (tbsp) of granulated sugar equals to 12.5998 grams (g) in granulated
sugar mass.
- In general, the only settlement or closing costs you can deduct are home mortgage
interest and certain real estate taxes. You deduct them in the year you buy your
home if you itemize your deductions. ... See IRS Publication 530, "Tax Information
for Homeowners" and look for "Settlement or closing costs" for more details.
- source_sentence: what is the connection between cancer and the cell cycle?
sentences:
- Biology
- Conclusion. Cancer is unchecked cell growth. Mutations in genes can cause cancer
by accelerating cell division rates or inhibiting normal controls on the system,
such as cell cycle arrest or programmed cell death. As a mass of cancerous cells
grows, it can develop into a tumor.
- Your vomit may appear black if the blood has been oxidized by the acids in your
stomach. The iron in your blood turns brown to black with time. Since the blood
is no longer bright red, it means that the bleeding has either stopped or is only
happening in a small amount.
model-index:
- name: SentenceTransformer based on answerdotai/ModernBERT-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.1
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.18
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.24
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.34
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.06
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.04800000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.034
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.15
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.21
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.31
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.19343658524041285
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.16590476190476192
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.17642959153410534
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.12
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.28
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.52
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.12
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.052000000000000005
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.12
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.28
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.52
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2984940860938879
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2304365079365079
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24691442502099614
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.11
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.23
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.32
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.43000000000000005
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.11
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.07666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.064
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.043000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.11
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.21500000000000002
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.305
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.41500000000000004
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.24596533566715037
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1981706349206349
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.21167200827755073
name: Cosine Map@100
---
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 5756c58a31a2478f9e62146021f48295a92c3da5 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
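The pooling layer above is configured with `pooling_mode_mean_tokens: True`, i.e. sentence embeddings are the mean of the token embeddings, ignoring padding. A minimal NumPy sketch of masked mean pooling (toy shapes, not the library's actual code):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence, skipping padding positions.

    token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len) of 0/1.
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # avoid division by zero
    return summed / counts

tokens = np.ones((2, 4, 768))
mask = np.array([[1, 1, 0, 0], [1, 1, 1, 1]])
print(mean_pool(tokens, mask).shape)  # (2, 768)
```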
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("wwydmanski/modernbert-bio-v0.1")
# Run inference
sentences = [
'what is the connection between cancer and the cell cycle?',
'Conclusion. Cancer is unchecked cell growth. Mutations in genes can cause cancer by accelerating cell division rates or inhibiting normal controls on the system, such as cell cycle arrest or programmed cell death. As a mass of cancerous cells grows, it can develop into a tumor.',
'Biology',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
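With the default cosine similarity function, `model.similarity` is equivalent to L2-normalizing the embeddings and taking pairwise dot products. A minimal NumPy sketch with toy vectors (not real model outputs):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    # Normalize each row to unit length, then take pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

toy = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(toy)
print(sims.shape)  # (3, 3); diagonal entries are all 1.0
```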
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoNQ` and `NanoMSMARCO`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoNQ | NanoMSMARCO |
|:--------------------|:-----------|:------------|
| cosine_accuracy@1 | 0.1 | 0.12 |
| cosine_accuracy@3 | 0.18 | 0.28 |
| cosine_accuracy@5 | 0.24 | 0.4 |
| cosine_accuracy@10 | 0.34 | 0.52 |
| cosine_precision@1 | 0.1 | 0.12 |
| cosine_precision@3 | 0.06 | 0.0933 |
| cosine_precision@5 | 0.048 | 0.08 |
| cosine_precision@10 | 0.034 | 0.052 |
| cosine_recall@1 | 0.1 | 0.12 |
| cosine_recall@3 | 0.15 | 0.28 |
| cosine_recall@5 | 0.21 | 0.4 |
| cosine_recall@10 | 0.31 | 0.52 |
| **cosine_ndcg@10** | **0.1934** | **0.2985** |
| cosine_mrr@10 | 0.1659 | 0.2304 |
| cosine_map@100 | 0.1764 | 0.2469 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.11 |
| cosine_accuracy@3 | 0.23 |
| cosine_accuracy@5 | 0.32 |
| cosine_accuracy@10 | 0.43 |
| cosine_precision@1 | 0.11 |
| cosine_precision@3 | 0.0767 |
| cosine_precision@5 | 0.064 |
| cosine_precision@10 | 0.043 |
| cosine_recall@1 | 0.11 |
| cosine_recall@3 | 0.215 |
| cosine_recall@5 | 0.305 |
| cosine_recall@10 | 0.415 |
| **cosine_ndcg@10** | **0.246** |
| cosine_mrr@10 | 0.1982 |
| cosine_map@100 | 0.2117 |
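As a rough illustration of what these ranking metrics measure: accuracy@k is the fraction of queries with a relevant document in the top k, and MRR@10 averages the reciprocal rank of the first relevant document. A simplified sketch with toy ranks (not the evaluator's actual implementation):

```python
def mrr_at_k(first_relevant_ranks, k=10):
    """Mean reciprocal rank; ranks are 1-based, None means no hit within top-k."""
    total = 0.0
    for rank in first_relevant_ranks:
        if rank is not None and rank <= k:
            total += 1.0 / rank
    return total / len(first_relevant_ranks)

def accuracy_at_k(first_relevant_ranks, k):
    hits = sum(1 for r in first_relevant_ranks if r is not None and r <= k)
    return hits / len(first_relevant_ranks)

ranks = [1, 3, None, 2]  # rank of the first relevant result per query
print(mrr_at_k(ranks))          # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
print(accuracy_at_k(ranks, 3))  # 3 of 4 queries have a hit in the top 3
```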
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 100,006 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>category</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | category |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.91 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 57.49 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 4.0 tokens</li><li>max: 4 tokens</li></ul> |
* Samples:
| question | answer | category |
|:---------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------|
| <code>how many times a week should you use heat on your hair?</code> | <code>Don't style hair with heat every day. Hot tools can also make hair look crispy and create split ends if overused. Blow out hair 3-5 times a week and try to limit your flat iron/curling iron usage to 1-2 times a week.”</code> | <code>Medicine</code> |
| <code>do african violets like to be root bound?</code> | <code>African violets only bloom when they're root bound. When it is time to repot, be sure to use an organic potting soil made specifically for African violets, such as Espoma's African Violet Mix. They flower best in small pots — choose one that's about a third of the diameter of their leaf spread.</code> | <code>Biology</code> |
| <code>is pgwp exempt from lmia?</code> | <code>The PGWP is exempt from Labour Market Impact Assessment (LMIA) requirements. The candidate must have attended a recognized post-secondary school, or a secondary school that offers qualifying programs, for at least eight months.</code> | <code>Medicine</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
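This loss treats each (question, answer) pair as a positive and every other answer in the batch as a negative: cosine similarities are multiplied by `scale` and used as logits in a softmax cross-entropy whose label is the matching document. A simplified NumPy sketch of that objective (illustrative only; the cached variant additionally chunks the batch to save memory):

```python
import numpy as np

def mnr_loss(query_emb: np.ndarray, doc_emb: np.ndarray, scale: float = 20.0) -> float:
    # Cosine similarity between every query and every in-batch document.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    logits = scale * (q @ d.T)  # (batch, batch)
    # Cross-entropy with the matching document (the diagonal) as the label.
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))

rng = np.random.default_rng(0)
queries, docs = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(mnr_loss(queries, docs))  # scalar loss; lower when matching pairs are most similar
```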
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 0.0001
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | NanoNQ_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------------:|:--------------------------:|:----------------------------:|
| 0 | 0 | - | 0.0388 | 0.0863 | 0.0626 |
| 0.0763 | 10 | 0.5482 | - | - | - |
| 0.1527 | 20 | 0.1079 | - | - | - |
| 0.2290 | 30 | 0.1491 | - | - | - |
| 0.3053 | 40 | 0.1381 | - | - | - |
| 0.3817 | 50 | 0.0873 | 0.0909 | 0.2197 | 0.1553 |
| 0.4580 | 60 | 0.133 | - | - | - |
| 0.5344 | 70 | 0.0539 | - | - | - |
| 0.6107 | 80 | 0.029 | - | - | - |
| 0.6870 | 90 | 0.0008 | - | - | - |
| 0.7634 | 100 | 0.0997 | 0.1982 | 0.2657 | 0.2320 |
| 0.8397 | 110 | 0.04 | - | - | - |
| 0.9160 | 120 | 0.0053 | - | - | - |
| 0.9924 | 130 | 0.0095 | - | - | - |
| 1.0 | 131 | - | 0.1934 | 0.2985 | 0.2460 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.1
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 5756c58a31a2478f9e62146021f48295a92c3da5 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
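The `Pooling` stage above produces the sentence embedding by mean-pooling token embeddings under the attention mask. As a rough illustration only (the actual implementation in `sentence_transformers.models.Pooling` operates on batched tensors), masked mean pooling boils down to:

```python
# Masked mean pooling: average only the token vectors whose attention-mask
# bit is 1, as done by the Pooling module configured above (illustrative sketch).

def mean_pool(token_embeddings, attention_mask):
    """token_embeddings: list of per-token vectors; attention_mask: list of 0/1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    count = max(count, 1)  # guard against a fully masked sequence
    return [s / count for s in sums]

# Example: two real tokens and one padding token
tokens = [[1.0, 3.0], [3.0, 5.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 4.0]
```

Padding positions (mask 0) are excluded, so sequence length and padding do not dilute the resulting embedding.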
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'what is the connection between cancer and the cell cycle?',
'Conclusion. Cancer is unchecked cell growth. Mutations in genes can cause cancer by accelerating cell division rates or inhibiting normal controls on the system, such as cell cycle arrest or programmed cell death. As a mass of cancerous cells grows, it can develop into a tumor.',
'Biology',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoNQ` and `NanoMSMARCO`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoNQ | NanoMSMARCO |
|:--------------------|:-----------|:------------|
| cosine_accuracy@1 | 0.1 | 0.12 |
| cosine_accuracy@3 | 0.18 | 0.28 |
| cosine_accuracy@5 | 0.24 | 0.4 |
| cosine_accuracy@10 | 0.34 | 0.52 |
| cosine_precision@1 | 0.1 | 0.12 |
| cosine_precision@3 | 0.06 | 0.0933 |
| cosine_precision@5 | 0.048 | 0.08 |
| cosine_precision@10 | 0.034 | 0.052 |
| cosine_recall@1 | 0.1 | 0.12 |
| cosine_recall@3 | 0.15 | 0.28 |
| cosine_recall@5 | 0.21 | 0.4 |
| cosine_recall@10 | 0.31 | 0.52 |
| **cosine_ndcg@10** | **0.1934** | **0.2985** |
| cosine_mrr@10 | 0.1659 | 0.2304 |
| cosine_map@100 | 0.1764 | 0.2469 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.11 |
| cosine_accuracy@3 | 0.23 |
| cosine_accuracy@5 | 0.32 |
| cosine_accuracy@10 | 0.43 |
| cosine_precision@1 | 0.11 |
| cosine_precision@3 | 0.0767 |
| cosine_precision@5 | 0.064 |
| cosine_precision@10 | 0.043 |
| cosine_recall@1 | 0.11 |
| cosine_recall@3 | 0.215 |
| cosine_recall@5 | 0.305 |
| cosine_recall@10 | 0.415 |
| **cosine_ndcg@10** | **0.246** |
| cosine_mrr@10 | 0.1982 |
| cosine_map@100 | 0.2117 |
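The ranking metrics in the tables above follow the usual information-retrieval definitions. A small sketch for binary relevance (the evaluator additionally averages over all queries and supports graded judgments) may help interpret them:

```python
import math

def ndcg_at_k(ranked_relevances, k, total_relevant):
    """ranked_relevances: 0/1 relevance of each ranked doc, best rank first."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevances[:k]))
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, total_relevant)))
    return dcg / ideal if ideal > 0 else 0.0

def mrr_at_k(ranked_relevances, k):
    """Reciprocal rank of the first relevant document within the top k."""
    for i, rel in enumerate(ranked_relevances[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

# One query: the single relevant document is ranked 2nd
ranks = [0, 1, 0, 0]
print(mrr_at_k(ranks, 10))                 # 0.5
print(round(ndcg_at_k(ranks, 10, 1), 4))   # 0.6309
```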
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 100,006 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>category</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | category |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.91 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 57.49 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 4.0 tokens</li><li>max: 4 tokens</li></ul> |
* Samples:
| question | answer | category |
|:---------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------|
| <code>how many times a week should you use heat on your hair?</code> | <code>Don't style hair with heat every day. Hot tools can also make hair look crispy and create split ends if overused. Blow out hair 3-5 times a week and try to limit your flat iron/curling iron usage to 1-2 times a week.”</code> | <code>Medicine</code> |
| <code>do african violets like to be root bound?</code> | <code>African violets only bloom when they're root bound. When it is time to repot, be sure to use an organic potting soil made specifically for African violets, such as Espoma's African Violet Mix. They flower best in small pots — choose one that's about a third of the diameter of their leaf spread.</code> | <code>Biology</code> |
| <code>is pgwp exempt from lmia?</code> | <code>The PGWP is exempt from Labour Market Impact Assessment (LMIA) requirements. The candidate must have attended a recognized post-secondary school, or a secondary school that offers qualifying programs, for at least eight months.</code> | <code>Medicine</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
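With MultipleNegativesRankingLoss, each `(question, answer)` pair in a batch serves as a positive while every other in-batch answer acts as a negative: the scaled cosine similarities are passed through a softmax cross-entropy whose target is the matching index. The cached variant used here changes only how activations are chunked to save memory, not the objective. A minimal pure-Python sketch of that objective (illustrative; the library computes this on GPU tensors):

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mnrl_loss(q_embs, a_embs, scale=20.0):
    """Mean cross-entropy where query i should rank answer i first."""
    losses = []
    for i, q in enumerate(q_embs):
        logits = [scale * cos_sim(q, a) for a in a_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        losses.append(log_z - logits[i])  # -log softmax prob of the positive
    return sum(losses) / len(losses)

# Two pairs; each query is closest to its own answer, so the loss is near zero
queries = [[1.0, 0.0], [0.0, 1.0]]
answers = [[0.9, 0.1], [0.1, 0.9]]
print(mnrl_loss(queries, answers))  # small (positives dominate)
```

The `scale` of 20.0 in the configuration above sharpens the softmax, so even modest cosine-similarity gaps between the positive and the negatives translate into near-zero loss.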
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 0.0001
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | NanoNQ_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------------:|:--------------------------:|:----------------------------:|
| 0 | 0 | - | 0.0388 | 0.0863 | 0.0626 |
| 0.0763 | 10 | 0.5482 | - | - | - |
| 0.1527 | 20 | 0.1079 | - | - | - |
| 0.2290 | 30 | 0.1491 | - | - | - |
| 0.3053 | 40 | 0.1381 | - | - | - |
| 0.3817 | 50 | 0.0873 | 0.0909 | 0.2197 | 0.1553 |
| 0.4580 | 60 | 0.133 | - | - | - |
| 0.5344 | 70 | 0.0539 | - | - | - |
| 0.6107 | 80 | 0.029 | - | - | - |
| 0.6870 | 90 | 0.0008 | - | - | - |
| 0.7634 | 100 | 0.0997 | 0.1982 | 0.2657 | 0.2320 |
| 0.8397 | 110 | 0.04 | - | - | - |
| 0.9160 | 120 | 0.0053 | - | - | - |
| 0.9924 | 130 | 0.0095 | - | - | - |
| 1.0 | 131 | - | 0.1934 | 0.2985 | 0.2460 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.1
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "answerdotai/ModernBERT-base", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100006", "loss:CachedMultipleNegativesRankingLoss"], "widget": [{"source_sentence": "how much weight can you lose in a week healthy?", "sentences": ["Biology", "Summary: According to experts, losing 1–2 pounds (0.45–0.9 kg) per week is a healthy and safe rate, while losing more than this is considered too fast. However, you may lose more than that during your first week of an exercise or diet plan.", "The number of valence electrons is the number of electrons in the outer shell, that the atom uses for bonding. Nitrogen has 5 electrons in its n=2 (outer) shell."]}, {"source_sentence": "how long after having a baby can i get a tattoo?", "sentences": ["It is suggested that mothers wait at least until 9-12 months after birth, when the child is no longer dependent solely on breastmilk before getting a tattoo. Reputable tattoo artists will have a waiver for the client to sign that asks about pregnancy and breastfeeding.", "Medicine", "Americans on average are down to 44 gallons of soda per year, and up to about 58 gallons of water. That's 7,242 ounces of water annually -- 20 ounces daily, which is 2.5 cups."]}, {"source_sentence": "is all uhmw anti static?", "sentences": ["The bacteria Streptococcus pyogenes causes it. It's most common in infants and children, but it frequently occurs in teenagers and adults as well. 
It causes white streaks or spots in the throat.", "Chemistry", "UHMW is available in a special anti-static grade that helps protect against EsD (static discharge) or to help keep dust and particles from building up on the product surface. The anti-static additives are built-in so the anti-static properties will last throughout the life of the material."]}, {"source_sentence": "is closing cost tax deductible?", "sentences": ["Medicine", "1 tablespoon (tbsp) of granulated sugar equals to 12.5998 grams (g) in granulated sugar mass.", "In general, the only settlement or closing costs you can deduct are home mortgage interest and certain real estate taxes. You deduct them in the year you buy your home if you itemize your deductions. ... See IRS Publication 530, \"Tax Information for Homeowners\" and look for \"Settlement or closing costs\" for more details."]}, {"source_sentence": "what is the connection between cancer and the cell cycle?", "sentences": ["Biology", "Conclusion. Cancer is unchecked cell growth. Mutations in genes can cause cancer by accelerating cell division rates or inhibiting normal controls on the system, such as cell cycle arrest or programmed cell death. As a mass of cancerous cells grows, it can develop into a tumor.", "Your vomit may appear black if the blood has been oxidized by the acids in your stomach. The iron in your blood turns brown to black with time. 
Since the blood is no longer bright red, it means that the bleeding has either stopped or is only happening in a small amount."]}], "model-index": [{"name": "SentenceTransformer based on answerdotai/ModernBERT-base", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoNQ", "type": "NanoNQ"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.1, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.18, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.24, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.34, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.1, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.06, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.04800000000000001, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.034, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.1, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.15, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.21, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.31, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.19343658524041285, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.16590476190476192, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.17642959153410534, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "NanoMSMARCO", "type": "NanoMSMARCO"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.12, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.28, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.4, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.52, "name": "Cosine Accuracy@10"}, {"type": 
"cosine_precision@1", "value": 0.12, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.09333333333333332, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.08, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.052000000000000005, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.12, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.28, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.4, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.52, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.2984940860938879, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.2304365079365079, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.24691442502099614, "name": "Cosine Map@100"}]}, {"task": {"type": "nano-beir", "name": "Nano BEIR"}, "dataset": {"name": "NanoBEIR mean", "type": "NanoBEIR_mean"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.11, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.23, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.32, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.43000000000000005, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.11, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.07666666666666666, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.064, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.043000000000000003, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.11, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.21500000000000002, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.305, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.41500000000000004, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", 
"value": 0.24596533566715037, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.1981706349206349, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.21167200827755073, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,374 |
diegofiggie/empathy_task
|
diegofiggie
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | 2024-02-27T21:45:24Z |
2024-02-27T22:03:09+00:00
| 4 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Dear Jonathan, I am writing to find out how things are going on the Beta project.
I understand that you are enjoying the role and finding new applications.I have
had some feedback from Terry confirming that you are doing well but there are
some improvement points that I would like to discuss with you. It has been noted
that your contributions are providing real value and they enjoy working with you,
however, some of this value is spoiled by a conversational tone and being a bit
verbose. In business correspondence it is essential that the facts are clear,
concise and distinguishable from opinion, otherwise the message may be lost (regardless
of how good it is).There are a number of significant reports required in the coming
weeks. Please could you ensure that you confirm with Terry the exact detail and
format required for specific reports and communication. He should be able to provide
templates and guidance to ensure that his requirements are met. I would also recommend
that you undertake a report-writing course, which should help you to ensure that
you convey your great ideas in the best possible way.I am keen to support you
to ensure the success of the project and your professional development. When I
return in 2 weeks I would like to have a conference call with you and Terry to
better understand how we can help you going forward. Please could you respond
to confirm that you have received this email. Regards, William
- text: 'Hi Jonathan, Thank you for your message. I am glad about your excitment on
this assignment that is important to us, and I hear your will to develop into
an engenier team leader role which I think is a topic that can be discuss.In order
to take you to that role, it is important to work on of your development area
that concern the way you report your analysis.You have a great talent to collect
data and get new creative ideas, and it is crucial to make you able to be more
experienced in business writing to make sure that you adress your conclusions
in a sharp and concise way, avoiding too much commentary.I propose you to write
down your current reports keeping those 2 objectives in mind: avoid too much commentary
and focus on the main data that support your conclusions.I suggest you get inspired
from other reports done internally, that will help you understand better the formalism
the report should have.Then, let is discuss together the outcome of your report,
and I would specially would like to know more about the many application you identify
for Beta Technology that may bring new business opportunity. Just a tip, quantify
your comments, always.See you soon, and we will have the opportunity to take the
time to discuss your development plan based on your capacity to be more straight
to the point in your reports.I am sure you will make a difference. Good luck,
William'
- text: Hey Jonathan! I've been in touch with Terry, I'm so glad to hear how much
you are enjoying the Beta Project, I even hear you are hoping that this experience
will further your ambitions toward a Lead Engineer position! However, I understand
there has been some issues with your reports that Terry has brought up with you,
and I wanted to take a few minutes to discuss them.1) Opinion vs. FactsYour reports
contain a lot of insights about what the data means, and at times finding the
specific hard facts can be difficult.2) Level of DetailYou include every bit of
data that you can into your reports, which can make it difficult to take away
the larger picture.I want to encourage you to take these things away for the following
reasons:1) your reports are reviewed by everyone in upper management, including
the CEO! The opinions you have are great, but when evaluating documents the CEO
just needs to highest level, most important items. The nitty-gritty would fall
to another department2) as you have a desire to move up and be a Lead Engineer,
these kinds of reports will be more and more common. Keeping your thoughts organized
and well documented is going to become a very important skill to have.For your
next report I would like you to prepare a cover sheet that goes with the report.
This cover sheet should be a single page highlighting only the key facts of the
report. Your own opinions and analysis can be included, but let those who are
interested read it on their own time, the high level facts are key for the meeting
they will be presented in. I would also encourage you to make sure the rest of
the report has clearly defined headings and topics, so it is easy to find information
related to each item. I
- text: Good Afternoon Jonathan, I hope you are well and the travelling is not too
exhausting. I wanted to touch base with you to see how you are enjoying working
with the Beta project team? I have been advised that you are a great contributor
and are identifying some great improvements, so well done. I understand you are
completing a lot of reports and imagine this is quite time consuming which added
to your traveling must be quite overwhelming. I have reviewed some of your reports
and whilst they provide all the technical information that is required, they are
quite lengthy and i think it would be beneficial for you to have some training
on report structures. This would mean you could spend less time on the reports
by providing only the main facts needed and perhaps take on more responsibility. When
the reports are reviewed by higher management they need to be able to clearly
and quickly identify any issues. Attending some training would also be great to
add to your career profile for the future. In the meantime perhaps you could review
your reports before submitting to ensure they are clear and consise with only
the technical information needed,Let me know your thoughts. Many thanks again
and well done for all your hard work. Kind regards William
- text: 'Jonathan, First I want to thank you for your help with the Beta project. However, it
has been brought to my attention that perhaps ABC-5 didn''t do enough to prepare
you for the extra work and I would like to discuss some issues. The nature of
these reports requires them to be technical in nature. Your insights are very
valuable and much appreciated but as the old line goes "please give me just the
facts". Given the critical nature of the information you are providing I can''t
stress the importance of concise yet detail factual reports. I would like to
review your reports as a training exercise to help you better meet the team requirements. Given
that there are some major reports coming up in the immediate future, I would like
you to review some training options and then present a report for review. Again
your insights are appreciated but we need to make sure we are presenting the end-use
with only the information they need to make a sound business decision. I also
understand you would like to grow into a leadership position so I would like to
discuss how successfully implementing these changes would be beneficial in demonstrating
an ability to grow and take on new challenges. '
inference: true
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.6153846153846154
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
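The contrastive fine-tuning in step 1 operates on sentence *pairs* rather than single labeled examples. As a simplified illustration of how such pairs can be generated from a handful of labeled sentences (this is a sketch of the idea, not SetFit's actual sampling code, which also oversamples and shuffles):

```python
# Sketch: build contrastive pairs from a tiny labeled set. A pair is labeled
# 1.0 when both sentences share a class (pull embeddings together) and 0.0
# otherwise (push them apart). SetFit's own pair sampler works similarly.
from itertools import combinations

def build_contrastive_pairs(texts, labels):
    """Pair every two examples; pair label is 1.0 if they share a class."""
    pairs = []
    for (t1, l1), (t2, l2) in combinations(zip(texts, labels), 2):
        pairs.append((t1, t2, 1.0 if l1 == l2 else 0.0))
    return pairs

pairs = build_contrastive_pairs(
    ["report is too long", "please be concise", "great enthusiasm"],
    [0, 0, 1],
)
# Three sentences yield three pairs; only the first pair is a positive one.
```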
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 0 | <ul><li>'Hi Jonathan, and I hope your travels are going well. As soon as you get a chance, I would like to catch up on the reports you are creating for the Beta projects. Your contributions have been fantastic, but we need to limit the commentary and make them more concise. I would love to get your perspective and show you an example as well. Our goal is to continue to make you better at what you do and to deliver an excellent customer experience. Looking forward to tackling this together and to your dedication to being great at what you do. Safe travels and I look forward to your call.'</li><li>'Hello Jonathan, I hope you day is going well. The purpose of this msg is to improve your communication regarding your work on the Beta Project. You are important which is why we need to make sure that your thoughts and Ideas are clearly communicated with helpful factual info. I want to get your thoughts on how you best communicate and your thoughts on how to communicate more concisely. Please come up with 2-3 suggestions as will I and lets set up a time within the next 48 hours that you and I can build a plan that will help ensure your great work is being understood for the success of Beta. I am confident that we will develop a plan that continues allow your work to help the program. Please meg me what time works best for you when you end your travel. Best, William'</li></ul> |
| 1     | <ul><li>"Hi Jonathan, As you know I've been away on another assignment, but I just got a download from Terry on your performance so far on the Beta project and wanted to connect with you. The team is happy with your improvement suggestions, genuine enthusiasm for the project, and everyone really likes working with you. I appreciate your commitment, and I know that travel isn't always easy. Terry has shared some of your reporting techniques with me. While we appreciate your insights and attention to detail, we are going to need you to shift gears a little to help the team make their deadlines. It is difficult for the team to easily separate facts from opinions in your reports, and it would be much easier for them to pass on the great information you're sharing if your reports were more concise and organized.I know this change in work habit might be a challenge for you, but it is imperative for the success of the project. That being said, I've come up with a game plan for getting your reports to where the team needs them to be for success. Terry has a lot of experience in business writing, and since he is responsible for passing on your reports to customers and our executive leadership team, I've asked him to sit with you for a couple of hours this week to share some of his edits on your previous reports. This is not in any way a negative exercise, and I really believe it will help both you and the team throughout the project. Please take this opportunity as a learning experience, and reach out to Terry ASAP to schedule the time! Please shoot me a note with your thoughts on this, and let me know if you have any additional ideas on how to further improve the Beta project reporting. I'm looking forward to hearing from you, and will check in with Terry as well after you two meet. Thanks! William"</li><li>"Hi Jonathan, I hope you are doing well. Unfortunately I won't be able to talk to you personally but as soon as I am back I would like to spend some time with you. I know you are working on Beta project and your involvement is highly appreciated\xa0, you even identified improvements the team didn't identify, that's great! This Beta project is key for the company, we need to success all together. In that respect, key priorities are to build concise reports and with strong business writing. Terry has been within the company for 5 years and is the best one to be consulted to upskill in these areas. Could you please liaise with him and get more quick wins from him. It will be very impactful in your career. We will discuss once I'm back about this sharing experience. I'm sure you will find a lot of benefits. Regards William"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6154 |
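A small aside on the reported number: 0.6153846153846154 is exactly 8/13, which suggests (though the card does not state it) a test split of 13 examples with 8 classified correctly:

```python
# Hedged sanity check, not part of the original evaluation: the reported
# accuracy is consistent with 8 correct predictions out of 13 test examples.
from fractions import Fraction

reported = 0.6153846153846154
assert float(Fraction(8, 13)) == reported
```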
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("diegofiggie/empathy_task")
# Run inference
preds = model("Jonathan, First I want to thank you for your help with the Beta project. However, it has been brought to my attention that perhaps ABC-5 didn't do enough to prepare you for the extra work and I would like to discuss some issues. The nature of these reports requires them to be technical in nature. Your insights are very valuable and much appreciated but as the old line goes \"please give me just the facts\". Given the critical nature of the information you are providing I can't stress the importance of concise yet detail factual reports. I would like to review your reports as a training exercise to help you better meet the team requirements. Given that there are some major reports coming up in the immediate future, I would like you to review some training options and then present a report for review. Again your insights are appreciated but we need to make sure we are presenting the end-use with only the information they need to make a sound business decision. I also understand you would like to grow into a leadership position so I would like to discuss how successfully implementing these changes would be beneficial in demonstrating an ability to grow and take on new challenges. ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 114 | 187.5 | 338 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 2 |
| 1 | 2 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
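As a rough, dependency-free sketch of the `CosineSimilarityLoss` listed above (assuming, as in sentence-transformers, that it is the squared error between a pair's cosine similarity and its 0/1 pair label):

```python
# Simplified sketch of cosine similarity loss in pure Python. The real loss
# runs on embedding tensors; here plain lists stand in for embeddings.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_similarity_loss(u, v, pair_label):
    """Squared error between cosine similarity and the pair label."""
    return (pair_label - cosine_similarity(u, v)) ** 2

# Identical embeddings of a positive pair incur zero loss.
assert cosine_similarity_loss([1.0, 0.0], [1.0, 0.0], 1.0) == 0.0
```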
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.1 | 1 | 0.1814 | - |
### Framework Versions
- Python: 3.10.9
- SetFit: 1.0.3
- Sentence Transformers: 2.4.0
- Transformers: 4.38.1
- PyTorch: 2.2.1+cpu
- Datasets: 2.17.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Dear Jonathan, I am writing to find out how things are going on the Beta project. I understand that you are enjoying the role and finding new applications.I have had some feedback from Terry confirming that you are doing well but there are some improvement points that I would like to discuss with you. It has been noted that your contributions are providing real value and they enjoy working with you, however, some of this value is spoiled by a conversational tone and being a bit verbose. In business correspondence it is essential that the facts are clear, concise and distinguishable from opinion, otherwise the message may be lost (regardless of how good it is).There are a number of significant reports required in the coming weeks. Please could you ensure that you confirm with Terry the exact detail and format required for specific reports and communication. He should be able to provide templates and guidance to ensure that his requirements are met. I would also recommend that you undertake a report-writing course, which should help you to ensure that you convey your great ideas in the best possible way.I am keen to support you to ensure the success of the project and your professional development. When I return in 2 weeks I would like to have a conference call with you and Terry to better understand how we can help you going forward. Please could you respond to confirm that you have received this email. Regards, William"}, {"text": "Hi Jonathan, Thank you for your message. I am glad about your excitment on this assignment that is important to us, and I hear your will to develop into an engenier team leader role which I think is a topic that can be discuss.In order to take you to that role, it is important to work on of your development area that concern the way you report your analysis.You have a great talent to collect data and get new creative ideas, and it is crucial to make you able to be more experienced in business writing to make sure that you adress your conclusions in a sharp and concise way, avoiding too much commentary.I propose you to write down your current reports keeping those 2 objectives in mind: avoid too much commentary and focus on the main data that support your conclusions.I suggest you get inspired from other reports done internally, that will help you understand better the formalism the report should have.Then, let is discuss together the outcome of your report, and I would specially would like to know more about the many application you identify for Beta Technology that may bring new business opportunity. Just a tip, quantify your comments, always.See you soon, and we will have the opportunity to take the time to discuss your development plan based on your capacity to be more straight to the point in your reports.I am sure you will make a difference. Good luck, William"}, {"text": "Hey Jonathan! I've been in touch with Terry, I'm so glad to hear how much you are enjoying the Beta Project, I even hear you are hoping that this experience will further your ambitions toward a Lead Engineer position! However, I understand there has been some issues with your reports that Terry has brought up with you, and I wanted to take a few minutes to discuss them.1) Opinion vs. FactsYour reports contain a lot of insights about what the data means, and at times finding the specific hard facts can be difficult.2) Level of DetailYou include every bit of data that you can into your reports, which can make it difficult to take away the larger picture.I want to encourage you to take these things away for the following reasons:1) your reports are reviewed by everyone in upper management, including the CEO! The opinions you have are great, but when evaluating documents the CEO just needs to highest level, most important items. The nitty-gritty would fall to another department2) as you have a desire to move up and be a Lead Engineer, these kinds of reports will be more and more common. Keeping your thoughts organized and well documented is going to become a very important skill to have.For your next report I would like you to prepare a cover sheet that goes with the report. This cover sheet should be a single page highlighting only the key facts of the report. Your own opinions and analysis can be included, but let those who are interested read it on their own time, the high level facts are key for the meeting they will be presented in. I would also encourage you to make sure the rest of the report has clearly defined headings and topics, so it is easy to find information related to each item. I"}, {"text": "Good Afternoon Jonathan, I hope you are well and the travelling is not too exhausting. I wanted to touch base with you to see how you are enjoying working with the Beta project team? I have been advised that you are a great contributor and are identifying some great improvements, so well done. I understand you are completing a lot of reports and imagine this is quite time consuming which added to your traveling must be quite overwhelming. I have reviewed some of your reports and whilst they provide all the technical information that is required, they are quite lengthy and i think it would be beneficial for you to have some training on report structures. This would mean you could spend less time on the reports by providing only the main facts needed and perhaps take on more responsibility. When the reports are reviewed by higher management they need to be able to clearly and quickly identify any issues. Attending some training would also be great to add to your career profile for the future. In the meantime perhaps you could review your reports before submitting to ensure they are clear and consise with only the technical information needed,Let me know your thoughts. Many thanks again and well done for all your hard work. Kind regards William"}, {"text": "Jonathan, First I want to thank you for your help with the Beta project. However, it has been brought to my attention that perhaps ABC-5 didn't do enough to prepare you for the extra work and I would like to discuss some issues. The nature of these reports requires them to be technical in nature. Your insights are very valuable and much appreciated but as the old line goes \"please give me just the facts\". Given the critical nature of the information you are providing I can't stress the importance of concise yet detail factual reports. I would like to review your reports as a training exercise to help you better meet the team requirements. Given that there are some major reports coming up in the immediate future, I would like you to review some training options and then present a report for review. Again your insights are appreciated but we need to make sure we are presenting the end-use with only the information they need to make a sound business decision. I also understand you would like to grow into a leadership position so I would like to discuss how successfully implementing these changes would be beneficial in demonstrating an ability to grow and take on new challenges. "}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.6153846153846154, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,375 |
AhmedSSoliman/MarianCG-DJANGO
|
AhmedSSoliman
|
text2text-generation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-30T12:14:00Z |
2023-07-30T11:58:02+00:00
| 22 | 0 |
---
widget:
- text: define the method i with an argument self.
- text: substitute asvar for self.asvar.
- text: convert host to lowercase.
- text: for every var in self.vars,
- text: call the method parser.delete_first_token.
---
[](https://paperswithcode.com/sota/code-generation-on-django?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
This model improves on the code generation problem by implementing a transformer model that produces highly accurate results. MarianCG is a code generation model able to generate code from natural language. This work demonstrates the impact of using the Marian machine translation model to solve the code generation problem: in our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on CoNaLa, reaching a BLEU score of 30.92 and an Exact Match Accuracy of 6.2 on the code generation problem with the CoNaLa dataset.
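The Exact Match Accuracy reported here can be sketched as follows (the function name and whitespace normalization are our own illustration, not the paper's official scorer):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that equal their reference exactly
    after stripping surrounding whitespace."""
    matches = sum(
        pred.strip() == ref.strip()
        for pred, ref in zip(predictions, references)
    )
    return matches / len(references)

# Toy check: one of the two generated snippets matches its reference.
score = exact_match_accuracy(["x = 1", "y = 2"], ["x = 1", "y = 3"])
```

The official evaluation may apply additional tokenization or normalization before comparing strings.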
MarianCG model and its implementation with the code of training and the generated output is available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
DJANGO dataset is available at
https://huggingface.co/datasets/AhmedSSoliman/DJANGO
This model is available on the Hugging Face Hub: https://huggingface.co/AhmedSSoliman/MarianCG-DJANGO
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
# Input (Natural Language) and Output (Python Code)
NL_input = "define the method i with an argument self."
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_code)
```
A Gradio demo of this model is available on Hugging Face Spaces at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-DJANGO
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
| null |
Non_BioNLP
|
[](https://paperswithcode.com/sota/code-generation-on-django?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer model inspired by machine translation
This model improves on the code generation problem by implementing a transformer model that produces highly accurate results. MarianCG is a code generation model able to generate code from natural language. This work demonstrates the impact of using the Marian machine translation model to solve the code generation problem: in our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on CoNaLa, reaching a BLEU score of 30.92 and an Exact Match Accuracy of 6.2 on the code generation problem with the CoNaLa dataset.
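The Exact Match Accuracy reported here can be sketched as follows (the function name and whitespace normalization are our own illustration, not the paper's official scorer):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that equal their reference exactly
    after stripping surrounding whitespace."""
    matches = sum(
        pred.strip() == ref.strip()
        for pred, ref in zip(predictions, references)
    )
    return matches / len(references)

# Toy check: one of the two generated snippets matches its reference.
score = exact_match_accuracy(["x = 1", "y = 2"], ["x = 1", "y = 3"])
```

The official evaluation may apply additional tokenization or normalization before comparing strings.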
MarianCG model and its implementation with the code of training and the generated output is available at this repository:
https://github.com/AhmedSSoliman/MarianCG-NL-to-Code
DJANGO dataset is available at
https://huggingface.co/datasets/AhmedSSoliman/DJANGO
This model is available on the Hugging Face Hub: https://huggingface.co/AhmedSSoliman/MarianCG-DJANGO
```python
# Model and Tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-DJANGO")
# Input (Natural Language) and Output (Python Code)
NL_input = "define the method i with an argument self."
output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_code)
```
A Gradio demo of this model is available on Hugging Face Spaces at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-DJANGO
---
Tasks:
- Translation
- Code Generation
- Text2Text Generation
- Text Generation
---
# Citation
We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite:
```
@article{soliman2022mariancg,
title={MarianCG: a code generation transformer model inspired by machine translation},
author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I},
journal={Journal of Engineering and Applied Science},
volume={69},
number={1},
pages={1--23},
year={2022},
publisher={SpringerOpen},
url={https://doi.org/10.1186/s44147-022-00159-4}
}
```
|
{"widget": [{"text": "define the method i with an argument self."}, {"text": "substitute asvar for self.asvar."}, {"text": "convert host to lowercase."}, {"text": "for every var in self.vars,"}, {"text": "call the method parser.delete_first_token."}]}
|
task
|
[
"TRANSLATION"
] | 44,376 |
yuchenxie/ArlowGPT-VL-CLiP
|
yuchenxie
|
image-text-to-text
|
[
"transformers",
"qwen2",
"text-generation",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-10-26T21:56:48Z |
2024-11-20T21:24:45+00:00
| 0 | 0 |
---
base_model:
- Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---
# Model Card: Experimental ArlowGPT-VL-CLiP
***
## Overview
ArlowGPT-VL-CLiP is an experimental multimodal model that merges Qwen 2.5 (7B) and OpenAI CLIP, bringing together natural language processing and visual understanding in a single framework. Developed to explore advanced text-image interaction capabilities, this model combines the Qwen 2.5 architecture's strengths in language comprehension with the visual feature extraction prowess of CLIP. This combination allows ArlowGPT-VL-CLiP to tackle complex tasks involving both text and image inputs, opening up new possibilities for multimodal applications in research, machine learning, and artificial intelligence.
The model's multimodal architecture enables it to process, understand, and generate coherent responses that incorporate information from both text and images. This unique capability has the potential to enhance applications in creative content generation, assistive technologies, and advanced research in machine perception and language understanding.
***
## Model Details
- **Base Models**: Qwen 2.5 (7B) and OpenAI CLIP
- **Merged Approach**: The hybrid model integrates Qwen 2.5, known for its robust language comprehension and adaptability to various natural language processing tasks, with CLIP, which excels at understanding visual features and aligning them with corresponding textual descriptions. By merging these two models, ArlowGPT-VL-CLiP can process multimodal input for applications requiring both text and visual comprehension.
- **Qwen 2.5 (7B)**: A large-scale language model proficient in interpreting and generating text based on context, allowing it to engage in conversations, answer questions, and handle information extraction from text.
- **OpenAI CLIP**: A vision model trained to understand and relate visual content to textual descriptions, enabling tasks like object recognition, scene interpretation, and image-text alignment.
- **Type**: Experimental, merged multimodal model for text-image understanding, specifically tailored for research and exploratory use cases.
***
## Intended Use
ArlowGPT-VL-CLiP is primarily intended for research and experimental applications in multimodal processing, offering a foundation for exploring how language and vision models can work together. Its key applications include:
- **Image Captioning and Visual Question Answering**: The model can generate detailed captions for images, describe visual scenes, and answer questions related to the visual content. This capability is valuable for applications that assist visually impaired individuals, automate content tagging, or provide descriptive feedback in AI-powered systems.
- **Multimodal Understanding and Image-Text Alignment**: ArlowGPT-VL-CLiP is well-suited for aligning images with relevant textual descriptions, making it useful in tasks requiring accurate association between visual and text elements. This is beneficial for applications in content recommendation, personalized marketing, and enhancing accessibility through accurate visual and textual pairing.
- **Experiments in Merging Language and Vision Models**: This model is ideal for researchers exploring the integration of large language models and vision models. By using ArlowGPT-VL-CLiP as a testbed, researchers can assess the performance, limitations, and synergies of combined language-vision processing, laying the groundwork for future advancements in multimodal AI applications.
ArlowGPT-VL-CLiP offers an experimental foundation for developing applications in AI-driven multimedia content creation, assistive technologies, and complex multimodal research. Its versatility across text and image tasks makes it a powerful tool for applications that rely on comprehensive text-image interaction.
***
## Limitations and Warnings
- **Experimental Nature**: The model is highly experimental, and merging Qwen 2.5 with CLIP may lead to unexpected behaviors in certain scenarios. Due to the experimental nature of this integration, the model's performance may vary across tasks, and its behavior may be unpredictable in unfamiliar contexts.
- **Biases**: Since ArlowGPT-VL-CLiP inherits characteristics from both Qwen 2.5 and CLIP, it may also retain biases present in each base model. These biases can include cultural, gender, or racial assumptions embedded in the training data, leading to skewed outputs. Users should exercise caution when using this model in sensitive or high-stakes applications and consider implementing bias-detection and mitigation strategies.
- **Evaluation**: Given its experimental design, thorough evaluation is strongly recommended before deploying this model in any production environment. Users should test the model for accuracy, consistency, and robustness across different scenarios. Additionally, considering ethical and fairness assessments is essential to ensure responsible use.
***
## Example Usage
To get started with ArlowGPT-VL-CLiP, the following code demonstrates how to load and interact with the model. This example assumes you have access to the model on Hugging Face and can provide a Hugging Face authentication token.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Your Hugging Face token
hf_token = "your_huggingface_token_here"
# Load the tokenizer with authentication for multimodal processing
tokenizer = AutoTokenizer.from_pretrained(
"yuchenxie/ArlowGPT-VL-CLiP",
use_auth_token=hf_token
)
# Load the fine-tuned model with authentication
model = AutoModelForCausalLM.from_pretrained(
"yuchenxie/ArlowGPT-VL-CLiP",
use_auth_token=hf_token
)
# Encode input text
input_text = "Describe the image content and answer questions based on the visual context."
inputs = tokenizer(input_text, return_tensors="pt")
# Generate output - Adjust max_length and other generation parameters as needed
outputs = model.generate(**inputs, max_length=50, num_return_sequences=1)
# Decode and print the output
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
***
| null |
Non_BioNLP
|
# Model Card: Experimental ArlowGPT-VL-CLiP
***
## Overview
ArlowGPT-VL-CLiP is an experimental multimodal model that merges Qwen 2.5 (7B) and OpenAI CLIP, bringing together natural language processing and visual understanding in a single framework. Developed to explore advanced text-image interaction capabilities, this model combines the Qwen 2.5 architecture's strengths in language comprehension with the visual feature extraction prowess of CLIP. This combination allows ArlowGPT-VL-CLiP to tackle complex tasks involving both text and image inputs, opening up new possibilities for multimodal applications in research, machine learning, and artificial intelligence.
The model's multimodal architecture enables it to process, understand, and generate coherent responses that incorporate information from both text and images. This unique capability has the potential to enhance applications in creative content generation, assistive technologies, and advanced research in machine perception and language understanding.
***
## Model Details
- **Base Models**: Qwen 2.5 (7B) and OpenAI CLIP
- **Merged Approach**: The hybrid model integrates Qwen 2.5, known for its robust language comprehension and adaptability to various natural language processing tasks, with CLIP, which excels at understanding visual features and aligning them with corresponding textual descriptions. By merging these two models, ArlowGPT-VL-CLiP can process multimodal input for applications requiring both text and visual comprehension.
- **Qwen 2.5 (7B)**: A large-scale language model proficient in interpreting and generating text based on context, allowing it to engage in conversations, answer questions, and handle information extraction from text.
- **OpenAI CLIP**: A vision model trained to understand and relate visual content to textual descriptions, enabling tasks like object recognition, scene interpretation, and image-text alignment.
- **Type**: Experimental, merged multimodal model for text-image understanding, specifically tailored for research and exploratory use cases.
***
## Intended Use
ArlowGPT-VL-CLiP is primarily intended for research and experimental applications in multimodal processing, offering a foundation for exploring how language and vision models can work together. Its key applications include:
- **Image Captioning and Visual Question Answering**: The model can generate detailed captions for images, describe visual scenes, and answer questions related to the visual content. This capability is valuable for applications that assist visually impaired individuals, automate content tagging, or provide descriptive feedback in AI-powered systems.
- **Multimodal Understanding and Image-Text Alignment**: ArlowGPT-VL-CLiP is well-suited for aligning images with relevant textual descriptions, making it useful in tasks requiring accurate association between visual and text elements. This is beneficial for applications in content recommendation, personalized marketing, and enhancing accessibility through accurate visual and textual pairing.
- **Experiments in Merging Language and Vision Models**: This model is ideal for researchers exploring the integration of large language models and vision models. By using ArlowGPT-VL-CLiP as a testbed, researchers can assess the performance, limitations, and synergies of combined language-vision processing, laying the groundwork for future advancements in multimodal AI applications.
ArlowGPT-VL-CLiP offers an experimental foundation for developing applications in AI-driven multimedia content creation, assistive technologies, and complex multimodal research. Its versatility across text and image tasks makes it a powerful tool for applications that rely on comprehensive text-image interaction.
***
## Limitations and Warnings
- **Experimental Nature**: The model is highly experimental, and merging Qwen 2.5 with CLIP may lead to unexpected behaviors in certain scenarios. Due to the experimental nature of this integration, the model's performance may vary across tasks, and its behavior may be unpredictable in unfamiliar contexts.
- **Biases**: Since ArlowGPT-VL-CLiP inherits characteristics from both Qwen 2.5 and CLIP, it may also retain biases present in each base model. These biases can include cultural, gender, or racial assumptions embedded in the training data, leading to skewed outputs. Users should exercise caution when using this model in sensitive or high-stakes applications and consider implementing bias-detection and mitigation strategies.
- **Evaluation**: Given its experimental design, thorough evaluation is strongly recommended before deploying this model in any production environment. Users should test the model for accuracy, consistency, and robustness across different scenarios. Additionally, considering ethical and fairness assessments is essential to ensure responsible use.
***
## Example Usage
To get started with ArlowGPT-VL-CLiP, the following code demonstrates how to load and interact with the model. This example assumes you have access to the model on Hugging Face and can provide a Hugging Face authentication token.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Your Hugging Face token
hf_token = "your_huggingface_token_here"
# Load the tokenizer with authentication for multimodal processing
tokenizer = AutoTokenizer.from_pretrained(
"yuchenxie/ArlowGPT-VL-CLiP",
use_auth_token=hf_token
)
# Load the fine-tuned model with authentication
model = AutoModelForCausalLM.from_pretrained(
"yuchenxie/ArlowGPT-VL-CLiP",
use_auth_token=hf_token
)
# Encode input text
input_text = "Describe the image content and answer questions based on the visual context."
inputs = tokenizer(input_text, return_tensors="pt")
# Generate output - Adjust max_length and other generation parameters as needed
outputs = model.generate(**inputs, max_length=50, num_return_sequences=1)
# Decode and print the output
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
***
|
{"base_model": ["Qwen/Qwen2.5-7B-Instruct"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text"}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,377 |
RichardErkhov/besimray_-_miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842-awq
|
RichardErkhov
| null |
[
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 2024-12-25T16:17:49Z |
2024-12-25T16:19:01+00:00
| 15 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842 - AWQ
- Model creator: https://huggingface.co/besimray/
- Original model: https://huggingface.co/besimray/miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842/
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
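The Grouped-Query Attention mentioned above improves inference scalability by sharing each key/value head across several query heads; a minimal sketch of the head mapping (the head counts here are illustrative assumptions, not the actual Llama 3.2 configuration):

```python
def kv_head_for_query_head(q_head, n_q_heads, n_kv_heads):
    """In grouped-query attention, consecutive groups of query heads
    share a single key/value head; return that KV head's index."""
    assert n_q_heads % n_kv_heads == 0
    group_size = n_q_heads // n_kv_heads
    return q_head // group_size

# With 32 query heads and 8 KV heads, query heads 0-3 share KV head 0.
mapping = [kv_head_for_query_head(h, 32, 8) for h in range(32)]
```

Because only `n_kv_heads` key/value projections are cached, the KV cache shrinks by the group size relative to standard multi-head attention.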
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842 - AWQ
- Model creator: https://huggingface.co/besimray/
- Original model: https://huggingface.co/besimray/miner_id_3_56d9075c-cf98-498b-8ad6-84bc66fb6ee2_1729801842/
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (1B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-1B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
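The Grouped-Query Attention mentioned above improves inference scalability by sharing each key/value head across several query heads; a minimal sketch of the head mapping (the head counts here are illustrative assumptions, not the actual Llama 3.2 configuration):

```python
def kv_head_for_query_head(q_head, n_q_heads, n_kv_heads):
    """In grouped-query attention, consecutive groups of query heads
    share a single key/value head; return that KV head's index."""
    assert n_q_heads % n_kv_heads == 0
    group_size = n_q_heads // n_kv_heads
    return q_head // group_size

# With 32 query heads and 8 KV heads, query heads 0-3 share KV head 0.
mapping = [kv_head_for_query_head(h, 32, 8) for h in range(32)]
```

Because only `n_kv_heads` key/value projections are cached, the KV cache shrinks by the group size relative to standard multi-head attention.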
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
{}
|
task
|
[
"SUMMARIZATION"
] | 44,378 |
neofung/Multi_Turn_Conversation_Summary-V0
|
neofung
|
summarization
|
[
"summarization",
"zh",
"en",
"dataset:YeungNLP/moss-003-sft-data",
"dataset:neofung/moss-003-sft-data-summary",
"license:apache-2.0",
"region:us"
] | 2024-02-07T04:56:15Z |
2024-02-28T05:18:07+00:00
| 0 | 2 |
---
datasets:
- YeungNLP/moss-003-sft-data
- neofung/moss-003-sft-data-summary
language:
- zh
- en
license: apache-2.0
pipeline_tag: summarization
---
Multi_Turn_Conversation_Summary-V0
============================================
`neofung/Multi_Turn_Conversation_Summary-V0` is designed for RAG multi-turn conversation scenarios: it summarizes the user's conversation history into text that is easy to retrieve, improving recall quality and efficiency.
## Data Organization and Training
1. The training data was obtained from [YeungNLP/moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data) by filtering for `zh` and `en` samples with `langdetect`.
2. [01-ai/Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) was used to summarize the user conversations, producing [neofung/moss-003-sft-data-summary](https://huggingface.co/datasets/neofung/moss-003-sft-data-summary).
3. Supervised fine-tuning with [LoRA](https://github.com/huggingface/peft) was performed on the base model [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) to obtain the final model.
4. Training platform: [AutoDL](https://www.autodl.com/register?code=f274846d-fd7c-4361-804c-53764ddf79dc).
使用的LoRA进行训练,而不是全参数训练,是因为希望在实际场景中,不同的Adapter服务不同的场景[Multi Adapter support](https://github.com/huggingface/peft/pull/263),不需要为本场景耗费资源部署模型实例。
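Step 1 of the recipe filters moss-003-sft-data down to `zh` and `en` samples with `langdetect`. As a rough illustration of that kind of filter, here is a crude CJK-ratio heuristic (a stand-in only; the repo actually uses the `langdetect` library):

```python
def looks_chinese(text, threshold=0.3):
    # Crude stand-in for langdetect: flag text whose CJK-character ratio passes a threshold.
    if not text:
        return False
    cjk = sum(1 for ch in text if '\u4e00' <= ch <= '\u9fff')
    return cjk / len(text) >= threshold

samples = ["如何保障工作中遵循正确的安全准则?", "How do I follow safety guidelines at work?"]
zh_samples = [s for s in samples if looks_chinese(s)]
print(len(zh_samples))  # 1
```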
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_path,
use_safetensors=False,
device_map="auto",
torch_dtype='auto',
trust_remote_code=True
).eval()
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
outputs = model.generate(
inputs.input_ids.cuda(),
max_new_tokens=512,
streamer=None,
eos_token_id=tokenizer.convert_tokens_to_ids("\n"),
do_sample=True,
repetition_penalty=1.3,
no_repeat_ngram_size=5,
temperature=0.7,
top_k=40,
top_p=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
## 登鹳雀楼->王之涣
## 夜雨寄北->李商隐
from peft import PeftModel
model = PeftModel.from_pretrained(model, "neofung/Multi_Turn_Conversation_Summary-V0", trust_remote_code=True)
prompt = """<|im_start|>[ { "human": "如何保障工作中遵循正确的安全准则?", "assistant": "为了保障工作中遵循正确的安全准则,以下是一些建议:\n\n1.了解相关安全规定和标准:了解相关的安全规定和标准,并遵守它们。这可以包括公司和政府的安全标准,以及行业标准和最佳实践。\n\n2.培训和教育:确保您和您的同事接受了必要的培训和教育,以了解正确的安全准则和行为。\n\n3.使用正确的工具和设备:确保您使用正确的工具和设备,并且它们得到了正确的维护和保养。\n\n4.个人防护装备:确保您和您的同事穿戴正确的个人防护装备,如安全鞋、透明眼镜或面罩、手套等。\n\n5.创建一个安全文化:鼓励个人对安全的责任感和意识,并创建一个安全文化,使人们始终关注他们自己和他人的安全。\n\n6.持续监测和改进:持续监测和改进安全准则和程序,以确保它们保持最新,并适应变化的情况。\n\n这些措施可以帮助您和您的同事在工作中保障正确的安全准则,确保您的工作场所是一个安全的环境。" }, { "human": "如何应对突发紧急情况?", "assistant": "突发紧急情况可能会在任何时间发生,为了应对这些情况,以下是一些建议:\n\n1.保持冷静:保持冷静和清醒,以便能够更好地应对情况。\n\n2.评估情况:评估紧急情况的性质和范围,以便了解应该采取哪些行动。\n\n3.触发应急计划:如果您的机构或公司有应急计划,立即触发它,并按照计划的步骤行动。\n\n4.联系相关人员:通知必要的人员,例如紧急服务、管理层或其他相关人员,并与他们保持联系。\n\n5.遵循安全准则:在处理紧急情况时,请遵循正确的安全准则,确保自己和其他人的安全。\n\n6.提供及时信息:尽快向相关人员提供准确的信息,以便他们能够做出正确的决策。\n\n7.合作应对:与其他人合作应对紧急情况并相互支持。\n\n总之,应对突发紧急情况需要保持冷静和清醒,评估情况并采取适当的行动。同时,需要遵循正确的安全准则,并与其他人合作应对紧急情况。" }, { "human": "在网络安全方面,如何避免遭受黑客攻击?", "assistant": "以下是一些避免遭受黑客攻击的建议:\n\n1.更新安全软件:确保您的操作系统、应用程序和安全软件是最新的,并随时更新它们。这可以帮助您及时发现和修复安全漏洞。\n\n2.使用强密码:使用强密码,包括字母、数字和特殊字符。不要使用容易猜测的密码,如生日、姓名或常用单词。\n\n3.使用多因素身份验证:使用多因素身份验证来增加账户的安全性,例如发送短信验证码或使用指纹识别。\n\n4.小心附件和链接:不要打开未知的附件或链接,因为它们可能包含恶意软件。\n\n5.备份重要文件:定期备份重要文件,并将其存储在安全的位置。\n\n6.保护个人信息:不要在互联网上公开个人信息,例如出生日期、地址或社交安全号码。\n\n7.使用加密连接:在访问敏感信息时,使用加密连接,如HTTPS。\n\n8.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n总之,避免遭受黑客攻击需要采取多种措施,包括更新安全软件、使用强密码、使用多因素身份验证、小心附件和链接、备份重要文件、保护个人信息、使用加密连接以及教育员工。这些措施可以帮助您保护个人信息和数据免受黑客攻击。" }, { "human": "如何确保网络连接的安全性?", "assistant": "以下是一些确保网络连接安全性的建议:\n\n1.使用加密连接:使用加密连接,如HTTPS,以确保数据传输过程中不被窃听或篡改。\n\n2.使用防病毒软件:安装和更新防病毒软件,可以帮助检测和清除恶意软件。\n\n3.使用防火墙:使用防火墙保护网络和设备,可以帮助阻止未经授权的访问和攻击。\n\n4.禁止未经授权的访问:禁止未经授权的访问,包括限制网络访问和设备访问权限。\n\n5.定期更新操作系统和应用程序:定期更新操作系统和应用程序,以修复安全漏洞和缺陷。\n\n6.使用强密码:使用强密码,包括字母、数字和特殊字符,并定期更改密码。\n\n7.控制访问:控制访问网络和设备的人员,仅授权人员可访问重要资源。\n\n8.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n总之,确保网络连接的安全性需要采取多种措施,包括使用加密连接、使用防病毒软件、使用防火墙、禁止未经授权的访问、定期更新操作系统和应用程序、使用强密码、控制访问和教育员工。这些措施可以帮助您保护网络和设备免受未经授权的访问和攻击。" }, { "human": "对于金融机构来说,如何保护客户的隐私和数据安全?", "assistant": 
"金融机构需要对客户的隐私和数据安全高度重视,以下是一些保护客户隐私和数据安全的建议:\n\n1.遵守合规要求:确保遵守所有适用的合规要求和监管标准,包括数据隐私法规和安全标准。\n\n2.加强身份验证:采用多因素身份验证,例如使用短信验证码、指纹识别等,以确保客户身份的真实性。\n\n3.加密敏感数据:使用加密技术对敏感数据进行加密,以保护客户信息不被未经授权的人员获得。\n\n4.建立安全控制:建立安全控制,包括访问控制、安全审计和安全事件响应,以保护客户信息的安全性。\n\n5.教育员工:向员工提供网络安全培训,并确保他们了解正确的安全措施和行为。\n\n6.定期测试:定期进行安全测试和漏洞扫描,以发现和修复潜在的安全漏洞。\n\n7.建立应急计划:建立应急计划,并对其进行定期测试,以确保在紧急情况下能够及时响应和恢复。\n\n总之,保护客户隐私和数据安全需要采取多种措施,包括遵守合规要求、加强身份验证、加密敏感数据、建立安全控制、教育员工、定期测试和建立应急计划。这些措施可以帮助金融机构保护客户信息的安全性和隐私。" } ]<|im_end|> <|im_start|>"""
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(input_ids=input_ids, max_new_tokens=64, eos_token_id=0)
print(tokenizer.decode(outputs[0][len(input_ids[0]):], skip_special_tokens=True))
# 用户问题主要集中在网络安全、数据安全、隐私保护等方面。 (The user's questions mainly concern network security, data security, and privacy protection.)
```
| null |
Non_BioNLP
|
|
{"datasets": ["YeungNLP/moss-003-sft-data", "neofung/moss-003-sft-data-summary"], "language": ["zh", "en"], "license": "apache-2.0", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 44,379 |
Graphcore/t5-xl-ipu
|
Graphcore
| null |
[
"optimum_graphcore",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | 2023-04-06T13:44:12Z |
2023-05-03T12:05:39+00:00
| 3 | 0 |
---
license: apache-2.0
---
# Graphcore/t5-xl-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency when training and running models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through Hugging Face Optimum, Graphcore has released ready-to-use, IPU-trained model checkpoints and IPU configuration files that make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug in any public dataset, and it integrates seamlessly with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
Text-to-Text Transfer Transformer (T5) is a Transformer-based model that uses a text-to-text approach for translation, question answering, and classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning in NLP. This allows the same model, loss function, hyperparameters, etc. to be used across a diverse set of tasks.
Paper link: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
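The text-to-text framing described above reduces every task to string-in/string-out, usually by prepending a task prefix. A small sketch (the prefixes follow common examples from the T5 paper; the exact strings depend on how a given checkpoint was trained):

```python
def to_text_to_text(task, payload):
    # Prepend a task prefix so every problem becomes plain text generation.
    prefixes = {
        "translation_en_de": "translate English to German: ",
        "summarization": "summarize: ",
        "cola": "cola sentence: ",
    }
    return prefixes[task] + payload

print(to_text_to_text("summarization", "T5 casts every NLP problem as text generation."))
```

The same model and loss then handle all of these tasks, differing only in the input string.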
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the T5 3B model (e.g. [HuggingFace/t5-3b](https://huggingface.co/t5-3b) or [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```python
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/t5-xl-ipu")
```
| null |
Non_BioNLP
|
|
{"license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 44,381 |
varun-v-rao/gpt2-large-lora-2.95M-snli-model3
|
varun-v-rao
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-20T13:34:12Z |
2024-06-24T21:27:26+00:00
| 10 | 0 |
---
base_model: openai-community/gpt2-large
datasets:
- stanfordnlp/snli
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-lora-2.95M-snli-model3
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- type: accuracy
value: 0.8772607193659825
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-lora-2.95M-snli-model3
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3273
- Accuracy: 0.8773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
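With `lr_scheduler_type: linear` and no warmup listed, the learning rate decays linearly from 2e-05 to zero over the run's 12876 optimizer steps. A sketch of that schedule (mirroring the shape of Transformers' linear schedule, assumed here with zero warmup):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    # Linear warmup (if any), then linear decay of the learning rate to 0.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * (remaining / max(1, total_steps - warmup_steps))

total_steps = 3 * 4292  # 3 epochs x 4292 steps per epoch = 12876
print(linear_lr(0, total_steps))            # 2e-05 at the start
print(linear_lr(total_steps, total_steps))  # 0.0 at the end
```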
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4334 | 1.0 | 4292 | 0.3554 | 0.8650 |
| 0.4033 | 2.0 | 8584 | 0.3342 | 0.8749 |
| 0.3916 | 3.0 | 12876 | 0.3273 | 0.8773 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
|
{"base_model": "openai-community/gpt2-large", "datasets": ["stanfordnlp/snli"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-large-lora-2.95M-snli-model3", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "snli", "type": "stanfordnlp/snli"}, "metrics": [{"type": "accuracy", "value": 0.8772607193659825, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,382 |
TransferGraph/connectivity_bert_ft_qqp-17-finetuned-lora-ag_news
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:ag_news",
"base_model:connectivity/bert_ft_qqp-17",
"base_model:adapter:connectivity/bert_ft_qqp-17",
"model-index",
"region:us"
] | 2024-02-27T23:49:07Z |
2024-02-28T01:24:43+00:00
| 0 | 0 |
---
base_model: connectivity/bert_ft_qqp-17
datasets:
- ag_news
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: connectivity_bert_ft_qqp-17-finetuned-lora-ag_news
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.9284210526315789
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# connectivity_bert_ft_qqp-17-finetuned-lora-ag_news
This model is a fine-tuned version of [connectivity/bert_ft_qqp-17](https://huggingface.co/connectivity/bert_ft_qqp-17) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
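As the model name indicates, this is a LoRA fine-tune: only a low-rank update is trained, and at inference it can be merged into the frozen base weight as W' = W + (alpha/r)·B·A. A NumPy sketch with illustrative shapes (not this adapter's actual rank or alpha):

```python
import numpy as np

d, r, alpha = 6, 2, 16  # hidden size, LoRA rank, scaling alpha -- illustrative values
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))        # frozen base weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                   # B starts at zero, so the update begins as a no-op

delta = (alpha / r) * (B @ A)          # low-rank update: only d*r + r*d trained parameters
W_merged = W + delta                   # merge once for adapter-free inference
print(np.allclose(W_merged, W))        # True while B is still all zeros
```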
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2486 | None | 0 |
| 0.9124 | 0.3225 | 0 |
| 0.9212 | 0.2369 | 1 |
| 0.9254 | 0.2160 | 2 |
| 0.9284 | 0.2033 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
|
{"base_model": "connectivity/bert_ft_qqp-17", "datasets": ["ag_news"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "connectivity_bert_ft_qqp-17-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9284210526315789, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,383 |
Helsinki-NLP/opus-mt-wls-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:08:49+00:00
| 597 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-wls-en
* source languages: wls
* target languages: en
* OPUS readme: [wls-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.en | 31.8 | 0.471 |
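chr-F in the table above is an F-score over character n-grams of hypothesis and reference. A minimal character-bigram F1 sketch (the official chr-F uses n-grams up to order 6 and a recall-weighted beta, so its numbers differ from this toy version):

```python
from collections import Counter

def char_bigram_f1(hyp, ref):
    # F1 over overlapping character bigrams of hypothesis and reference.
    h = Counter(zip(hyp, hyp[1:]))
    r = Counter(zip(ref, ref[1:]))
    overlap = sum((h & r).values())
    if not overlap:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

print(char_bigram_f1("the cat sat", "the cat sat"))  # 1.0
```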
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 44,384 |
RichardErkhov/PKU-ONELab_-_Themis-gguf
|
RichardErkhov
| null |
[
"gguf",
"arxiv:2406.18365",
"endpoints_compatible",
"region:us"
] | 2024-11-03T19:37:47Z |
2024-11-03T21:49:48+00:00
| 374 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Themis - GGUF
- Model creator: https://huggingface.co/PKU-ONELab/
- Original model: https://huggingface.co/PKU-ONELab/Themis/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Themis.Q2_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q2_K.gguf) | Q2_K | 2.96GB |
| [Themis.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Themis.Q3_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K.gguf) | Q3_K | 3.74GB |
| [Themis.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Themis.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Themis.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Themis.Q4_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Themis.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Themis.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Themis.Q4_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K.gguf) | Q4_K | 4.58GB |
| [Themis.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Themis.Q4_1.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Themis.Q5_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Themis.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Themis.Q5_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K.gguf) | Q5_K | 5.34GB |
| [Themis.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Themis.Q5_1.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Themis.Q6_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q6_K.gguf) | Q6_K | 6.14GB |
| [Themis.Q8_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q8_0.gguf) | Q8_0 | 7.95GB |
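All of the files above store the same weights at different precisions; the core of such schemes is per-block integer quantization with a stored scale. A toy symmetric int8 round-trip (real llama.cpp K-quants are considerably more elaborate):

```python
import numpy as np

def quantize_block(w):
    # Symmetric int8: store int8 codes plus one float scale per block.
    scale = float(np.abs(w).max()) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_block(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize_block(w)
w_hat = dequantize_block(q, s)
err = float(np.max(np.abs(w - w_hat)))
print(err)  # small reconstruction error
```

Lower-bit variants trade a larger `err` for a smaller file, which is the size/quality spectrum the table spans.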
Original model description:
---
license: apache-2.0
---
# Themis
Paper: https://arxiv.org/abs/2406.18365
Github: https://github.com/PKU-ONELab/Themis
## Introduction
We propose **Themis**, an 8B-parameter large language model (LLM) specifically designed and trained for NLG evaluation with more comprehensive capabilities.
Our Themis can evaluate various NLG tasks, including uncommon ones like question-answering evaluation (**Versatility**), in a reference-free manner (**Independence**). Moreover, it allows for specific and customized evaluation aspects and criteria, including overall quality and more fine-grained aspects (**Flexibility**), and its evaluation contains corresponding analysis and explanation together with the rating (**Interpretability**).
We believe that an ideal evaluator should be convenient to use and possess these characteristics. The comparison between related methods and Themis is shown in the table below.
| Method | Versatility | Independence | Flexibility | Interpretability | Open-source |
| :---------------: | :---------: | :----------: | :---------: | :--------------: | :---------: |
| UniEval | ❌ | ❌ | ✔️ | ❌ | ✔️ |
| G-Eval | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
| X-Eval | ✔️ | ❌ | ✔️ | ❌ | ❌ |
| Prometheus | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Auto-J | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| InstructScore | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
| TIGERScore | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| **Themis (Ours)** | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
## Performance
We conduct experiments on several common NLG evaluation tasks and datasets to compare Themis with other methods: SummEval for summarization, Topical-Chat for dialogue response generation, SFRES&SFHOT for data-to-text, QAGS for factuality, MANS for story generation, and WMT23 zh-en for machine translation. The results show that Themis achieves better overall evaluation performance than other evaluation models, including GPT-4.
| Method | SummEval | Topical-Chat | SFRES&SFHOT | QAGS | MANS | WMT23 | Average Spearman |
| -------------------- | :-------: | :----------: | :---------: | :-------: | :-------: | :-------: | :------------: |
| BLEU | 0.075 | 0.388 | 0.024 | - | 0.032 | 0.021 | - |
| ROUGE | 0.152 | 0.412 | 0.101 | - | -0.002 | 0.151 | - |
| BARTScore | 0.329 | 0.086 | 0.208 | 0.425 | 0.350 | 0.118 | 0.253 |
| BERTScore | 0.231 | 0.394 | 0.139 | - | 0.285 | 0.219 | - |
| BLEURT | 0.152 | 0.388 | 0.244 | - | 0.138 | 0.263 | - |
| CometKiwi | 0.228 | 0.340 | 0.251 | 0.094 | 0.251 | 0.343 | 0.251 |
| UniEval | 0.474 | 0.577 | 0.282 | - | - | - | - |
| G-Eval (GPT-3.5) | 0.409 | 0.585 | - | 0.461 | - | - | - |
| G-Eval (GPT-4) | 0.523 | 0.588 | - | 0.611 | - | - | - |
| GPT-3.5 Turbo | 0.416 | 0.578 | 0.306 | 0.431 | 0.328 | 0.347 | 0.401 |
| GPT-4 Turbo | 0.511 | **0.746** | 0.320 | 0.637 | 0.473 | **0.437** | 0.521 |
| X-Eval | 0.480 | 0.605 | 0.303 | 0.578 | - | - | - |
| Prometheus-13B | 0.163 | 0.434 | 0.173 | - | 0.007 | 0.129 | - |
| Auto-J-13B | 0.198 | 0.425 | 0.141 | 0.226 | 0.380 | 0.104 | 0.246 |
| TIGERScore-13B | 0.384 | 0.346 | 0.200 | 0.504 | 0.231 | 0.248 | 0.319 |
| InstructScore-7B | 0.258 | 0.241 | 0.247 | - | 0.298 | 0.219 | - |
| **Themis-8B (ours)** | **0.553** | 0.725 | **0.333** | **0.684** | **0.551** | 0.405 | **0.542** |
We further conduct more in-depth analyses, including generalization tests on unseen tasks such as instruction-following evaluation, as well as aspect-targeted perturbation tests, where Themis also exhibits superior evaluation performance. For more experimental results and details, please refer to our paper.
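The correlations reported above are Spearman's rho, i.e. Pearson correlation computed on ranks of model ratings against human judgments. A minimal sketch without tie correction:

```python
def spearman(xs, ys):
    # Spearman rank correlation (no tie handling): Pearson correlation of the ranks.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # perfectly monotone pairs -> rho = 1
```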
## Requirements and Usage
Please refer to our [github repo](https://github.com/PKU-ONELab/Themis) for more details.
## Citation
```
@article{hu2024themis,
title={Themis: Towards Flexible and Interpretable NLG Evaluation},
author={Hu, Xinyu and Lin, Li and Gao, Mingqi and Yin, Xunjian and Wan, Xiaojun},
journal={arXiv preprint arXiv:2406.18365},
year={2024}
}
```
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Themis - GGUF
- Model creator: https://huggingface.co/PKU-ONELab/
- Original model: https://huggingface.co/PKU-ONELab/Themis/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Themis.Q2_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q2_K.gguf) | Q2_K | 2.96GB |
| [Themis.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Themis.Q3_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K.gguf) | Q3_K | 3.74GB |
| [Themis.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Themis.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Themis.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Themis.Q4_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Themis.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Themis.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Themis.Q4_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K.gguf) | Q4_K | 4.58GB |
| [Themis.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Themis.Q4_1.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Themis.Q5_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Themis.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Themis.Q5_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K.gguf) | Q5_K | 5.34GB |
| [Themis.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Themis.Q5_1.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Themis.Q6_K.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q6_K.gguf) | Q6_K | 6.14GB |
| [Themis.Q8_0.gguf](https://huggingface.co/RichardErkhov/PKU-ONELab_-_Themis-gguf/blob/main/Themis.Q8_0.gguf) | Q8_0 | 7.95GB |
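A convenient property of the sizes above: for an 8-billion-parameter model, the file size in decimal GB is numerically close to the average bits per weight, because the factor of 8 bits per byte cancels against the 8B parameters. A rough sketch (the parameter count and decimal-GB convention are assumptions, and GGUF container overhead is ignored):

```python
def bits_per_weight(file_size_gb: float, n_params: float = 8e9) -> float:
    """Approximate average bits per weight for a quantized model file.

    Assumes decimal GB (1e9 bytes) and ignores GGUF metadata overhead,
    so this is only a rough estimate.
    """
    return file_size_gb * 1e9 * 8 / n_params

# For an 8B model, GB and bits/weight coincide numerically:
print(round(bits_per_weight(4.58), 2))  # Q4_K_M: ~4.58 bits per weight
```

This is why the Q4 files land near 4.3-4.6 GB and the Q8 file near 8 GB for this model.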
Original model description:
---
license: apache-2.0
---
# Themis
Paper: https://arxiv.org/abs/2406.18365
Github: https://github.com/PKU-ONELab/Themis
## Introduction
We propose **Themis**, an 8B-parameter large language model (LLM) specifically designed and trained for NLG evaluation with more comprehensive capabilities.
Our Themis can evaluate various NLG tasks, including uncommon ones like question-answering evaluation (**Versatility**), in a reference-free manner (**Independence**). Moreover, it allows for specific and customized evaluation aspects and criteria, including overall quality and more fine-grained aspects (**Flexibility**), and its evaluation contains corresponding analysis and explanation together with the rating (**Interpretability**).
We believe that an ideal evaluator should be convenient to use and possess these characteristics. The comparison between related methods and Themis is shown in the table below.
| Method | Versatility | Independence | Flexibility | Interpretability | Open-source |
| :---------------: | :---------: | :----------: | :---------: | :--------------: | :---------: |
| UniEval | ❌ | ❌ | ✔️ | ❌ | ✔️ |
| G-Eval | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
| X-Eval | ✔️ | ❌ | ✔️ | ❌ | ❌ |
| Prometheus | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Auto-J | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| InstructScore | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
| TIGERScore | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| **Themis (Ours)** | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
## Performance
We conduct experiments on several common NLG evaluation tasks and datasets to compare our Themis with other methods, including SummEval for summarization, Topical-Chat for dialogue response generation, SFRES&SFHOT for data-to-text, QAGS for factuality, MANS for story generation, and WMT23 zh-en for machine translation. Experimental results show that our Themis achieves better overall evaluation performance than other evaluation methods, including GPT-4.
| Method | SummEval | Topical-Chat | SFHOT & SFRES | QAGS | MANS | WMT23 | Average Spearman |
| -------------------- | :-------: | :----------: | :---------: | :-------: | :-------: | :-------: | :------------: |
| BLEU | 0.075 | 0.388 | 0.024 | - | 0.032 | 0.021 | - |
| ROUGE | 0.152 | 0.412 | 0.101 | - | -0.002 | 0.151 | - |
| BARTScore | 0.329 | 0.086 | 0.208 | 0.425 | 0.350 | 0.118 | 0.253 |
| BERTScore | 0.231 | 0.394 | 0.139 | - | 0.285 | 0.219 | - |
| BLEURT | 0.152 | 0.388 | 0.244 | - | 0.138 | 0.263 | - |
| CometKiwi | 0.228 | 0.340 | 0.251 | 0.094 | 0.251 | 0.343 | 0.251 |
| UniEval | 0.474 | 0.577 | 0.282 | - | - | - | - |
| G-Eval (GPT-3.5) | 0.409 | 0.585 | - | 0.461 | - | - | - |
| G-Eval (GPT-4) | 0.523 | 0.588 | - | 0.611 | - | - | - |
| GPT-3.5 Turbo | 0.416 | 0.578 | 0.306 | 0.431 | 0.328 | 0.347 | 0.401 |
| GPT-4 Turbo | 0.511 | **0.746** | 0.320 | 0.637 | 0.473 | **0.437** | 0.521 |
| X-Eval | 0.480 | 0.605 | 0.303 | 0.578 | - | - | - |
| Prometheus-13B | 0.163 | 0.434 | 0.173 | - | 0.007 | 0.129 | - |
| Auto-J-13B | 0.198 | 0.425 | 0.141 | 0.226 | 0.380 | 0.104 | 0.246 |
| TIGERScore-13B | 0.384 | 0.346 | 0.200 | 0.504 | 0.231 | 0.248 | 0.319 |
| InstructScore-7B | 0.258 | 0.241 | 0.247 | - | 0.298 | 0.219 | - |
| **Themis-8B (ours)** | **0.553** | 0.725 | **0.333** | **0.684** | **0.551** | 0.405 | **0.542** |
We further conduct more in-depth analyses, including generalization tests on unseen tasks like the instruction-following evaluation as well as aspect-targeted perturbation tests, and our Themis also exhibits superior evaluation performance. For more experimental results and details, please refer to our paper.
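The scores above are Spearman correlations between model ratings and human judgments. As a reminder of what that metric computes, here is a minimal self-contained sketch (average ranks for ties, then Pearson correlation on the ranks); a real evaluation would use `scipy.stats.spearmanr` rather than this illustration:

```python
def _average_ranks(values):
    """Rank values from 1..n, giving tied values the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Perfect monotone agreement between ratings and human scores gives 1.0:
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```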
## Requirements and Usage
Please refer to our [github repo](https://github.com/PKU-ONELab/Themis) for more details.
## Citation
```
@article{hu2024themis,
title={Themis: Towards Flexible and Interpretable NLG Evaluation},
author={Hu, Xinyu and Lin, Li and Gao, Mingqi and Yin, Xunjian and Wan, Xiaojun},
journal={arXiv preprint arXiv:2406.18365},
year={2024}
}
```
|
{}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 44,385 |
Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_nld
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"aa",
"aai",
"aau",
"ab",
"abi",
"acd",
"ace",
"acf",
"ach",
"acn",
"acr",
"ade",
"adj",
"ady",
"aeu",
"aey",
"af",
"afh",
"agd",
"agn",
"agu",
"ahk",
"aia",
"ak",
"akh",
"akl",
"akp",
"alj",
"alp",
"alq",
"alt",
"alz",
"am",
"ame",
"ami",
"amk",
"amu",
"an",
"ang",
"ann",
"anp",
"anv",
"aoz",
"apr",
"apu",
"ar",
"arc",
"as",
"aso",
"ast",
"atg",
"atj",
"atq",
"aui",
"auy",
"av",
"avk",
"avn",
"avu",
"awa",
"awb",
"awx",
"az",
"azg",
"azz",
"ba",
"bal",
"ban",
"bar",
"bas",
"bav",
"bba",
"bbo",
"bbr",
"bcl",
"bcw",
"be",
"bef",
"beh",
"bem",
"bep",
"bex",
"bfa",
"bfd",
"bfo",
"bg",
"bgr",
"bhl",
"bho",
"bhz",
"bi",
"bib",
"bik",
"bim",
"biv",
"bjr",
"bjv",
"bku",
"bkv",
"blh",
"blt",
"blz",
"bm",
"bmh",
"bmk",
"bmq",
"bmu",
"bmv",
"bn",
"bnp",
"bo",
"boj",
"bom",
"bov",
"box",
"bpr",
"bps",
"bpy",
"bqc",
"bqj",
"bqp",
"br",
"bru",
"brx",
"bs",
"bss",
"btd",
"bth",
"bto",
"bts",
"btt",
"btx",
"bua",
"bud",
"bug",
"buk",
"bus",
"bvy",
"bwq",
"bwu",
"byn",
"bzd",
"bzh",
"bzj",
"bzt",
"ca",
"caa",
"cab",
"cac",
"cak",
"cay",
"cbk",
"cce",
"cco",
"ce",
"ceb",
"cfm",
"cgc",
"ch",
"chf",
"chm",
"chq",
"chr",
"chy",
"chz",
"cjk",
"cjo",
"cjp",
"cjv",
"cko",
"cle",
"cme",
"cmo",
"cmr",
"cnh",
"cni",
"cnl",
"cnt",
"cnw",
"co",
"cok",
"cop",
"cot",
"cpa",
"cpu",
"cr",
"crh",
"crn",
"crs",
"crx",
"cs",
"csb",
"csk",
"cso",
"csy",
"cta",
"ctd",
"ctp",
"ctu",
"cu",
"cuc",
"cui",
"cuk",
"cut",
"cux",
"cv",
"cwe",
"cwt",
"cy",
"cya",
"czt",
"da",
"daa",
"dad",
"dag",
"dah",
"de",
"ded",
"dga",
"dgi",
"dig",
"dik",
"din",
"diq",
"dje",
"djk",
"dng",
"dni",
"dnj",
"dob",
"dop",
"drt",
"dsb",
"dsh",
"dtp",
"dug",
"dv",
"dws",
"dww",
"dyi",
"dyo",
"dyu",
"dz",
"ee",
"efi",
"egl",
"el",
"emi",
"en",
"enm",
"eo",
"es",
"ess",
"et",
"eu",
"ext",
"fa",
"fai",
"fal",
"far",
"ff",
"fi",
"fil",
"fj",
"fkv",
"fo",
"fon",
"for",
"fr",
"frd",
"frm",
"frp",
"frr",
"fur",
"fy",
"ga",
"gag",
"gah",
"gaw",
"gbm",
"gcf",
"gd",
"gde",
"gej",
"gfk",
"ghs",
"gil",
"gkn",
"gl",
"glk",
"gn",
"gnd",
"gng",
"gog",
"gor",
"gos",
"got",
"gqr",
"grc",
"gsw",
"gu",
"guc",
"gud",
"guh",
"guo",
"gur",
"guw",
"gux",
"gv",
"gvf",
"gvl",
"gwi",
"gwr",
"gym",
"gyr",
"ha",
"hag",
"haw",
"hay",
"hbo",
"hch",
"he",
"heh",
"hi",
"hif",
"hig",
"hil",
"hla",
"hlt",
"hmn",
"hne",
"hnj",
"hnn",
"hns",
"hoc",
"hot",
"hr",
"hrx",
"hsb",
"ht",
"hu",
"hui",
"hus",
"hvn",
"hwc",
"hy",
"hyw",
"hz",
"ia",
"iba",
"icr",
"id",
"ie",
"ifa",
"ifb",
"ife",
"ifk",
"ifu",
"ify",
"ig",
"ign",
"ii",
"ik",
"ilo",
"imo",
"inh",
"ino",
"io",
"iou",
"ipi",
"iri",
"irk",
"iry",
"is",
"it",
"itv",
"iu",
"ium",
"ixl",
"izh",
"izr",
"ja",
"jaa",
"jac",
"jam",
"jbo",
"jbu",
"jdt",
"jmc",
"jpa",
"jun",
"jv",
"jvn",
"ka",
"kaa",
"kab",
"kac",
"kam",
"kao",
"kbd",
"kbm",
"kbp",
"kdc",
"kdj",
"kdl",
"kdn",
"kea",
"kek",
"ken",
"keo",
"ker",
"keu",
"kew",
"kez",
"kg",
"kgf",
"kgk",
"kha",
"khz",
"ki",
"kia",
"kj",
"kjb",
"kje",
"kjh",
"kjs",
"kk",
"kki",
"kkj",
"kl",
"kle",
"km",
"kma",
"kmb",
"kmg",
"kmh",
"kmo",
"kmu",
"kn",
"kne",
"knj",
"knk",
"kno",
"kog",
"kok",
"kpf",
"kpg",
"kpr",
"kpw",
"kpz",
"kqe",
"kqf",
"kqp",
"kqw",
"kr",
"krc",
"kri",
"krj",
"krl",
"kru",
"ks",
"ksb",
"ksh",
"ksr",
"ktb",
"ktj",
"ku",
"kub",
"kud",
"kue",
"kum",
"kus",
"kv",
"kvn",
"kw",
"kwf",
"kxc",
"kxm",
"ky",
"kyc",
"kyf",
"kyg",
"kyq",
"kzf",
"la",
"laa",
"lac",
"lad",
"lah",
"las",
"law",
"lb",
"lbe",
"lcm",
"ldn",
"lee",
"lef",
"lem",
"leu",
"lew",
"lex",
"lez",
"lfn",
"lg",
"lgg",
"lhu",
"li",
"lia",
"lid",
"lif",
"lij",
"lip",
"liv",
"ljp",
"lkt",
"lld",
"lln",
"lme",
"lmo",
"ln",
"lnd",
"lo",
"lob",
"lok",
"lon",
"lou",
"lrc",
"lsi",
"lt",
"lua",
"luc",
"luo",
"lus",
"lut",
"luy",
"lv",
"lzz",
"maa",
"mad",
"mag",
"mai",
"maj",
"mak",
"mam",
"maq",
"mau",
"maw",
"maz",
"mbb",
"mbf",
"mbt",
"mcb",
"mcp",
"mcu",
"mda",
"mdf",
"med",
"mee",
"meh",
"mek",
"men",
"meq",
"mfe",
"mfh",
"mfi",
"mfk",
"mfq",
"mfy",
"mg",
"mgd",
"mgm",
"mgo",
"mh",
"mhi",
"mhl",
"mhx",
"mhy",
"mi",
"mib",
"mic",
"mie",
"mif",
"mig",
"mih",
"mil",
"mio",
"mit",
"mix",
"miy",
"miz",
"mjc",
"mk",
"mks",
"ml",
"mlh",
"mlp",
"mmo",
"mmx",
"mn",
"mna",
"mnb",
"mnf",
"mnh",
"mni",
"mnr",
"mnw",
"mo",
"moa",
"mog",
"moh",
"mop",
"mor",
"mos",
"mox",
"mpg",
"mpm",
"mpt",
"mpx",
"mqb",
"mqj",
"mr",
"mrj",
"mrw",
"ms",
"msm",
"mt",
"mta",
"muh",
"mux",
"muy",
"mva",
"mvp",
"mvv",
"mwc",
"mwl",
"mwm",
"mwv",
"mww",
"mxb",
"mxt",
"my",
"myb",
"myk",
"myu",
"myv",
"myw",
"myx",
"mzk",
"mzm",
"mzn",
"mzw",
"mzz",
"na",
"naf",
"nak",
"nap",
"nas",
"nb",
"nca",
"nch",
"ncj",
"ncl",
"ncu",
"nd",
"nds",
"ndz",
"ne",
"neb",
"new",
"nfr",
"ng",
"ngt",
"ngu",
"nhe",
"nhg",
"nhi",
"nhn",
"nhu",
"nhw",
"nhx",
"nhy",
"nia",
"nif",
"nii",
"nij",
"nim",
"nin",
"niu",
"njm",
"nl",
"nlc",
"nlv",
"nmz",
"nn",
"nnb",
"nnh",
"nnw",
"no",
"nog",
"non",
"nop",
"not",
"nou",
"nov",
"npl",
"npy",
"nqo",
"nr",
"nsn",
"nso",
"nss",
"nst",
"nsu",
"ntm",
"ntp",
"ntr",
"nuj",
"nus",
"nuy",
"nv",
"nwb",
"nwi",
"ny",
"nyf",
"nyn",
"nyo",
"nyy",
"nzi",
"oar",
"obo",
"oc",
"ofs",
"oj",
"oku",
"okv",
"old",
"om",
"omw",
"ood",
"opm",
"or",
"orv",
"os",
"osp",
"ota",
"ote",
"otm",
"otn",
"otq",
"ozm",
"pa",
"pab",
"pad",
"pag",
"pai",
"pal",
"pam",
"pao",
"pap",
"pau",
"pbi",
"pbl",
"pck",
"pcm",
"pdc",
"pfl",
"phn",
"pi",
"pib",
"pih",
"pio",
"pis",
"pkb",
"pl",
"pls",
"plw",
"pmf",
"pms",
"pmy",
"pne",
"pnt",
"poe",
"poh",
"pot",
"ppk",
"ppl",
"prf",
"prg",
"ps",
"pt",
"ptp",
"ptu",
"pwg",
"pww",
"quc",
"qya",
"rai",
"rap",
"rav",
"rej",
"rhg",
"rif",
"rim",
"rm",
"rmy",
"rn",
"ro",
"rom",
"rop",
"rro",
"ru",
"rue",
"rug",
"rup",
"rw",
"rwo",
"sa",
"sab",
"sah",
"sas",
"sat",
"sba",
"sbd",
"sbl",
"sc",
"scn",
"sco",
"sd",
"sda",
"se",
"seh",
"ses",
"sg",
"sgb",
"sgs",
"sgw",
"sgz",
"sh",
"shi",
"shk",
"shn",
"shs",
"shy",
"si",
"sig",
"sil",
"sjn",
"sk",
"skr",
"sl",
"sld",
"sll",
"sm",
"sma",
"smk",
"sml",
"smn",
"sn",
"snc",
"snp",
"snw",
"so",
"soy",
"spl",
"spp",
"sps",
"sq",
"sr",
"srm",
"srn",
"srq",
"ss",
"ssd",
"ssx",
"st",
"stn",
"stp",
"stq",
"su",
"sue",
"suk",
"sur",
"sus",
"suz",
"sv",
"sw",
"swg",
"swp",
"sxb",
"sxn",
"syc",
"syl",
"syr",
"szb",
"szl",
"ta",
"tab",
"tac",
"taj",
"taq",
"tbc",
"tbl",
"tbo",
"tbz",
"tcs",
"tcy",
"te",
"tem",
"teo",
"ter",
"tet",
"tfr",
"tg",
"tgo",
"tgp",
"th",
"thk",
"ti",
"tig",
"tik",
"tim",
"tk",
"tkl",
"tl",
"tlb",
"tlf",
"tlh",
"tlj",
"tlx",
"tly",
"tmc",
"tmh",
"tmr",
"tn",
"to",
"toh",
"toi",
"toj",
"tpa",
"tpi",
"tpm",
"tpw",
"tpz",
"tr",
"trc",
"trn",
"trq",
"trs",
"trv",
"ts",
"tsw",
"tt",
"ttc",
"tte",
"ttr",
"tts",
"tuc",
"tuf",
"tum",
"tvl",
"tw",
"twb",
"twu",
"txa",
"ty",
"tyj",
"tyv",
"tzh",
"tzj",
"tzl",
"tzm",
"tzo",
"ubr",
"ubu",
"udm",
"udu",
"ug",
"uk",
"umb",
"ur",
"usa",
"usp",
"uvl",
"uz",
"vag",
"ve",
"vec",
"vi",
"viv",
"vls",
"vmw",
"vmy",
"vo",
"vot",
"vun",
"wa",
"wae",
"waj",
"wal",
"wap",
"war",
"wbm",
"wbp",
"wed",
"wmt",
"wmw",
"wnc",
"wnu",
"wo",
"wob",
"wsk",
"wuv",
"xal",
"xcl",
"xed",
"xh",
"xmf",
"xog",
"xon",
"xrb",
"xsb",
"xsi",
"xsm",
"xsr",
"xtd",
"xtm",
"xuo",
"yal",
"yam",
"yaq",
"yaz",
"yby",
"ycl",
"ycn",
"yi",
"yli",
"yml",
"yo",
"yon",
"yua",
"yut",
"yuw",
"za",
"zam",
"zap",
"zea",
"zgh",
"zh",
"zia",
"zom",
"zu",
"zyp",
"zza",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T09:27:50Z |
2024-10-12T07:32:22+00:00
| 43 | 2 |
---
language:
- aa
- aai
- aau
- ab
- abi
- acd
- ace
- acf
- ach
- acn
- acr
- ade
- adj
- ady
- aeu
- aey
- af
- afh
- agd
- agn
- agu
- ahk
- aia
- ak
- akh
- akl
- akp
- alj
- alp
- alq
- alt
- alz
- am
- ame
- ami
- amk
- amu
- an
- ang
- ann
- anp
- anv
- aoz
- apr
- apu
- ar
- arc
- as
- aso
- ast
- atg
- atj
- atq
- aui
- auy
- av
- avk
- avn
- avu
- awa
- awb
- awx
- az
- azg
- azz
- ba
- bal
- ban
- bar
- bas
- bav
- bba
- bbo
- bbr
- bcl
- bcw
- be
- bef
- beh
- bem
- bep
- bex
- bfa
- bfd
- bfo
- bg
- bgr
- bhl
- bho
- bhz
- bi
- bib
- bik
- bim
- biv
- bjr
- bjv
- bku
- bkv
- blh
- blt
- blz
- bm
- bmh
- bmk
- bmq
- bmu
- bmv
- bn
- bnp
- bo
- boj
- bom
- bov
- box
- bpr
- bps
- bpy
- bqc
- bqj
- bqp
- br
- bru
- brx
- bs
- bss
- btd
- bth
- bto
- bts
- btt
- btx
- bua
- bud
- bug
- buk
- bus
- bvy
- bwq
- bwu
- byn
- bzd
- bzh
- bzj
- bzt
- ca
- caa
- cab
- cac
- cak
- cay
- cbk
- cce
- cco
- ce
- ceb
- cfm
- cgc
- ch
- chf
- chm
- chq
- chr
- chy
- chz
- cjk
- cjo
- cjp
- cjv
- cko
- cle
- cme
- cmo
- cmr
- cnh
- cni
- cnl
- cnt
- cnw
- co
- cok
- cop
- cot
- cpa
- cpu
- cr
- crh
- crn
- crs
- crx
- cs
- csb
- csk
- cso
- csy
- cta
- ctd
- ctp
- ctu
- cu
- cuc
- cui
- cuk
- cut
- cux
- cv
- cwe
- cwt
- cy
- cya
- czt
- da
- daa
- dad
- dag
- dah
- de
- ded
- dga
- dgi
- dig
- dik
- din
- diq
- dje
- djk
- dng
- dni
- dnj
- dob
- dop
- drt
- dsb
- dsh
- dtp
- dug
- dv
- dws
- dww
- dyi
- dyo
- dyu
- dz
- ee
- efi
- egl
- el
- emi
- en
- enm
- eo
- es
- ess
- et
- eu
- ext
- fa
- fai
- fal
- far
- ff
- fi
- fil
- fj
- fkv
- fo
- fon
- for
- fr
- frd
- frm
- frp
- frr
- fur
- fy
- ga
- gag
- gah
- gaw
- gbm
- gcf
- gd
- gde
- gej
- gfk
- ghs
- gil
- gkn
- gl
- glk
- gn
- gnd
- gng
- gog
- gor
- gos
- got
- gqr
- grc
- gsw
- gu
- guc
- gud
- guh
- guo
- gur
- guw
- gux
- gv
- gvf
- gvl
- gwi
- gwr
- gym
- gyr
- ha
- hag
- haw
- hay
- hbo
- hch
- he
- heh
- hi
- hif
- hig
- hil
- hla
- hlt
- hmn
- hne
- hnj
- hnn
- hns
- hoc
- hot
- hr
- hrx
- hsb
- ht
- hu
- hui
- hus
- hvn
- hwc
- hy
- hyw
- hz
- ia
- iba
- icr
- id
- ie
- ifa
- ifb
- ife
- ifk
- ifu
- ify
- ig
- ign
- ii
- ik
- ilo
- imo
- inh
- ino
- io
- iou
- ipi
- iri
- irk
- iry
- is
- it
- itv
- iu
- ium
- ixl
- izh
- izr
- ja
- jaa
- jac
- jam
- jbo
- jbu
- jdt
- jmc
- jpa
- jun
- jv
- jvn
- ka
- kaa
- kab
- kac
- kam
- kao
- kbd
- kbm
- kbp
- kdc
- kdj
- kdl
- kdn
- kea
- kek
- ken
- keo
- ker
- keu
- kew
- kez
- kg
- kgf
- kgk
- kha
- khz
- ki
- kia
- kj
- kjb
- kje
- kjh
- kjs
- kk
- kki
- kkj
- kl
- kle
- km
- kma
- kmb
- kmg
- kmh
- kmo
- kmu
- kn
- kne
- knj
- knk
- kno
- kog
- kok
- kpf
- kpg
- kpr
- kpw
- kpz
- kqe
- kqf
- kqp
- kqw
- kr
- krc
- kri
- krj
- krl
- kru
- ks
- ksb
- ksh
- ksr
- ktb
- ktj
- ku
- kub
- kud
- kue
- kum
- kus
- kv
- kvn
- kw
- kwf
- kxc
- kxm
- ky
- kyc
- kyf
- kyg
- kyq
- kzf
- la
- laa
- lac
- lad
- lah
- las
- law
- lb
- lbe
- lcm
- ldn
- lee
- lef
- lem
- leu
- lew
- lex
- lez
- lfn
- lg
- lgg
- lhu
- li
- lia
- lid
- lif
- lij
- lip
- liv
- ljp
- lkt
- lld
- lln
- lme
- lmo
- ln
- lnd
- lo
- lob
- lok
- lon
- lou
- lrc
- lsi
- lt
- lua
- luc
- luo
- lus
- lut
- luy
- lv
- lzz
- maa
- mad
- mag
- mai
- maj
- mak
- mam
- maq
- mau
- maw
- maz
- mbb
- mbf
- mbt
- mcb
- mcp
- mcu
- mda
- mdf
- med
- mee
- meh
- mek
- men
- meq
- mfe
- mfh
- mfi
- mfk
- mfq
- mfy
- mg
- mgd
- mgm
- mgo
- mh
- mhi
- mhl
- mhx
- mhy
- mi
- mib
- mic
- mie
- mif
- mig
- mih
- mil
- mio
- mit
- mix
- miy
- miz
- mjc
- mk
- mks
- ml
- mlh
- mlp
- mmo
- mmx
- mn
- mna
- mnb
- mnf
- mnh
- mni
- mnr
- mnw
- mo
- moa
- mog
- moh
- mop
- mor
- mos
- mox
- mpg
- mpm
- mpt
- mpx
- mqb
- mqj
- mr
- mrj
- mrw
- ms
- msm
- mt
- mta
- muh
- mux
- muy
- mva
- mvp
- mvv
- mwc
- mwl
- mwm
- mwv
- mww
- mxb
- mxt
- my
- myb
- myk
- myu
- myv
- myw
- myx
- mzk
- mzm
- mzn
- mzw
- mzz
- na
- naf
- nak
- nap
- nas
- nb
- nca
- nch
- ncj
- ncl
- ncu
- nd
- nds
- ndz
- ne
- neb
- new
- nfr
- ng
- ngt
- ngu
- nhe
- nhg
- nhi
- nhn
- nhu
- nhw
- nhx
- nhy
- nia
- nif
- nii
- nij
- nim
- nin
- niu
- njm
- nl
- nlc
- nlv
- nmz
- nn
- nnb
- nnh
- nnw
- "no"
- nog
- non
- nop
- not
- nou
- nov
- npl
- npy
- nqo
- nr
- nsn
- nso
- nss
- nst
- nsu
- ntm
- ntp
- ntr
- nuj
- nus
- nuy
- nv
- nwb
- nwi
- ny
- nyf
- nyn
- nyo
- nyy
- nzi
- oar
- obo
- oc
- ofs
- oj
- oku
- okv
- old
- om
- omw
- ood
- opm
- or
- orv
- os
- osp
- ota
- ote
- otm
- otn
- otq
- ozm
- pa
- pab
- pad
- pag
- pai
- pal
- pam
- pao
- pap
- pau
- pbi
- pbl
- pck
- pcm
- pdc
- pfl
- phn
- pi
- pib
- pih
- pio
- pis
- pkb
- pl
- pls
- plw
- pmf
- pms
- pmy
- pne
- pnt
- poe
- poh
- pot
- ppk
- ppl
- prf
- prg
- ps
- pt
- ptp
- ptu
- pwg
- pww
- quc
- qya
- rai
- rap
- rav
- rej
- rhg
- rif
- rim
- rm
- rmy
- rn
- ro
- rom
- rop
- rro
- ru
- rue
- rug
- rup
- rw
- rwo
- sa
- sab
- sah
- sas
- sat
- sba
- sbd
- sbl
- sc
- scn
- sco
- sd
- sda
- se
- seh
- ses
- sg
- sgb
- sgs
- sgw
- sgz
- sh
- shi
- shk
- shn
- shs
- shy
- si
- sig
- sil
- sjn
- sk
- skr
- sl
- sld
- sll
- sm
- sma
- smk
- sml
- smn
- sn
- snc
- snp
- snw
- so
- soy
- spl
- spp
- sps
- sq
- sr
- srm
- srn
- srq
- ss
- ssd
- ssx
- st
- stn
- stp
- stq
- su
- sue
- suk
- sur
- sus
- suz
- sv
- sw
- swg
- swp
- sxb
- sxn
- syc
- syl
- syr
- szb
- szl
- ta
- tab
- tac
- taj
- taq
- tbc
- tbl
- tbo
- tbz
- tcs
- tcy
- te
- tem
- teo
- ter
- tet
- tfr
- tg
- tgo
- tgp
- th
- thk
- ti
- tig
- tik
- tim
- tk
- tkl
- tl
- tlb
- tlf
- tlh
- tlj
- tlx
- tly
- tmc
- tmh
- tmr
- tn
- to
- toh
- toi
- toj
- tpa
- tpi
- tpm
- tpw
- tpz
- tr
- trc
- trn
- trq
- trs
- trv
- ts
- tsw
- tt
- ttc
- tte
- ttr
- tts
- tuc
- tuf
- tum
- tvl
- tw
- twb
- twu
- txa
- ty
- tyj
- tyv
- tzh
- tzj
- tzl
- tzm
- tzo
- ubr
- ubu
- udm
- udu
- ug
- uk
- umb
- ur
- usa
- usp
- uvl
- uz
- vag
- ve
- vec
- vi
- viv
- vls
- vmw
- vmy
- vo
- vot
- vun
- wa
- wae
- waj
- wal
- wap
- war
- wbm
- wbp
- wed
- wmt
- wmw
- wnc
- wnu
- wo
- wob
- wsk
- wuv
- xal
- xcl
- xed
- xh
- xmf
- xog
- xon
- xrb
- xsb
- xsi
- xsm
- xsr
- xtd
- xtm
- xuo
- yal
- yam
- yaq
- yaz
- yby
- ycl
- ycn
- yi
- yli
- yml
- yo
- yon
- yua
- yut
- yuw
- za
- zam
- zap
- zea
- zgh
- zh
- zia
- zom
- zu
- zyp
- zza
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
model-index:
- name: opus-mt-tc-bible-big-mul-deu_eng_nld
results:
- task:
type: translation
name: Translation multi-multi
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-multi
metrics:
- type: bleu
value: 41.7
name: BLEU
- type: chrf
value: 0.61102
name: chr-F
---
# opus-mt-tc-bible-big-mul-deu_eng_nld
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from multiple languages (mul) to German, English and Dutch (deu+eng+nld). Note that many of the listed languages will not be well supported by the model as the training data is very limited for the majority of the languages. Translation performance varies a lot and for a large number of language pairs it will not work at all.
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-18
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): aai aar aau abi abk acd ace acf ach acm acn acr ade adj ady aeu aey afb afh afr agd agn agu ahk aia aka akh akl akp alj aln alp alq alt alz ame amh ami amk amu ang ann anp anv aoz apc apr apu ara arc arg arq arz asm aso ast atg atj atq aui auy ava avk avn avu awa awb awx aze azg azz bak bal bam ban bar bas bav bba bbo bbr bcl bcw bef beh bel bem ben bep bex bfa bfd bfo bgr bhl bho bhz bib bik bim bis biv bjr bjv bku bkv blh blt blz bmh bmk bmq bmu bmv bnp bod boj bom bos bov box bpr bps bpy bqc bqj bqp bre bru brx bss btd bth bto bts btt btx bua bud bug buk bul bus bvy bwq bwu byn bzd bzh bzj bzt caa cab cac cak cat cay cbk cce cco ceb ces cfm cgc cha che chf chm chq chr chu chv chy chz cjk cjo cjp cjv cjy ckb cko cle cme cmn cmo cmr cnh cni cnl cnr cnt cnw cok cop cor cos cot cpa cpu cre crh crn crs crx csb csk cso csy cta ctd ctp ctu cuc cui cuk cut cux cwe cwt cya cym czt daa dad dag dah dan ded deu dga dgi dig dik din diq div dje djk dng dni dnj dob dop drt dsb dsh dtp dty dug dws dww dyi dyo dyu dzo efi egl ell emi eng enm epo ess est eus ewe ext fai fal fao far fas fij fil fin fkv fon for fra frd frm frp frr fry fuc ful fur gag gah gaw gbm gcf gde gej gfk ghs gil gkn gla gle glg glk glv gnd gng gog gor gos got gqr grc grn gsw guc gud guh guj guo gur guw gux gvf gvl gwi gwr gym gyr hag hat hau haw hay hbo hbs hch heb heh her hif hig hil hin hla hlt hmn hne hnj hnn hns hoc hot hrv hrx hsb hsn hui hun hus hvn hwc hye hyw iba ibo icr ido ifa ifb ife ifk ifu ify ign iii ike iku ile ilo imo ina ind inh ino iou ipi ipk iri irk iry isl ita itv ium ixl izh izr jaa jac jak jam jav jbo jbu jdt jmc jpa jpn jun jvn kaa kab kac kal kam kan kao kas kat kau kaz kbd kbm kbp kdc kdj kdl kdn kea kek ken keo ker keu kew kez kgf kgk kha khm khz kia kik kin kir kjb kje kjh kjs kki kkj kle kma kmb kmg kmh kmo kmr kmu knc kne knj knk kno kog koi kok kom kon kpf kpg kpr kpv kpw kpz kqe kqf kqp kqw krc kri krj krl kru ksb ksh ksr ktb ktj kua kub kud kue kum kur 
kus kvn kwf kxc kxm kyc kyf kyg kyq kzf laa lac lad lah lao las lat lav law lbe lcm ldn lee lef lem leu lew lex lez lfn lgg lhu lia lid lif lij lim lin lip lit liv ljp lkt lld lln lme lmo lnd lob lok lon lou lrc lsi ltz lua luc lug luo lus lut luy lzz maa mad mag mah mai maj mak mal mam maq mar mau maw max maz mbb mbf mbt mcb mcp mcu mda mdf med mee meh mek men meq mfe mfh mfi mfk mfq mfy mgd mgm mgo mhi mhl mhx mhy mib mic mie mif mig mih mil mio mit mix miy miz mjc mkd mks mlg mlh mlp mlt mmo mmx mna mnb mnf mnh mni mnr mnw moa mog moh mol mon mop mor mos mox mpg mpm mpt mpx mqb mqj mri mrj mrw msa msm mta muh mux muy mva mvp mvv mwc mwl mwm mwv mww mxb mxt mya myb myk myu myv myw myx mzk mzm mzn mzw mzz naf nak nap nas nau nav nbl nca nch ncj ncl ncu nde ndo nds ndz neb nep new nfr ngt ngu nhe nhg nhi nhn nhu nhw nhx nhy nia nif nii nij nim nin niu njm nlc nld nlv nmz nnb nnh nno nnw nob nog non nop nor not nou nov npi npl npy nqo nsn nso nss nst nsu ntm ntp ntr nuj nus nuy nwb nwi nya nyf nyn nyo nyy nzi oar obo oci ofs oji oku okv old omw ood opm ori orm orv osp oss ota ote otm otn otq ozm pab pad pag pai pal pam pan pao pap pau pbi pbl pck pcm pdc pes pfl phn pib pih pio pis pkb pli pls plt plw pmf pms pmy pne pnt poe poh pol por pot ppk ppl prf prg prs ptp ptu pus pwg pww quc qya rai rap rav rej rhg rif rim rmy roh rom ron rop rro rue rug run rup rus rwo sab sag sah san sas sat sba sbd sbl scn sco sda sdh seh ses sgb sgs sgw sgz shi shk shn shs shy sig sil sin sjn skr sld slk sll slv sma sme smk sml smn smo sna snc snd snp snw som sot soy spa spl spp sps sqi srd srm srn srp srq ssd ssw ssx stn stp stq sue suk sun sur sus suz swa swc swe swg swh swp sxb sxn syc syl syr szb szl tab tac tah taj tam taq tat tbc tbl tbo tbz tcs tcy tel tem teo ter tet tfr tgk tgl tgo tgp tha thk tig tik tim tir tkl tlb tlf tlh tlj tlx tly tmc tmh tmr tmw toh toi toj ton tpa tpi tpm tpw tpz trc trn trq trs trv tsn tso tsw ttc tte ttr tts tuc tuf tuk tum tur tvl twb twi twu txa tyj 
tyv tzh tzj tzl tzm tzo ubr ubu udm udu uig ukr umb urd usa usp uvl uzb vag vec ven vie viv vls vmw vmy vol vot vro vun wae waj wal wap war wbm wbp wed wln wmt wmw wnc wnu wob wol wsk wuu wuv xal xcl xed xho xmf xog xon xrb xsb xsi xsm xsr xtd xtm xuo yal yam yaq yaz yby ycl ycn yid yli yml yon yor yua yue yut yuw zam zap zea zgh zha zia zlm zom zsm zul zyp zza
- Target Language(s): deu eng nld
- Valid Target Language Labels: >>deu<< >>eng<< >>nld<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/mul-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<`
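For batch inputs, the sentence-initial token can be prepended programmatically. A small hypothetical helper (the function name and the hard-coded target set are illustrative, not part of the model's API):

```python
VALID_TARGETS = {"deu", "eng", "nld"}  # target labels supported by this model

def with_target_token(sentences, target_lang):
    """Prepend the sentence-initial language token required by the model."""
    if target_lang not in VALID_TARGETS:
        raise ValueError(f"unsupported target language: {target_lang!r}")
    return [f">>{target_lang}<< {s}" for s in sentences]

# Each sentence now begins with ">>nld<< ":
print(with_target_token(["I don't know if it is true."], "nld"))
```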
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Also note that many of the listed languages will not be well supported by the model as the training data is very limited for the majority of the languages. Translation performance varies a lot and for a large number of language pairs it will not work at all.
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>eng<< Jedes Mädchen, das ich sehe, gefällt mir.",
">>nld<< I don't know if it is true."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-mul-deu_eng_nld"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I like every girl I see.
# Ik weet niet of het waar is.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_nld")
print(pipe(">>eng<< Jedes Mädchen, das ich sehe, gefällt mir."))
# expected output: I like every girl I see.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/mul-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.61102 | 41.7 | 10000 | 78944 |
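The chr-F score above is a character n-gram F-score (beta = 2, n-gram orders 1..6). A simplified, self-contained sketch of the idea follows; it is not the exact sacreBLEU implementation (which aggregates n-gram statistics differently), so treat it as an illustration only:

```python
from collections import Counter

def _char_ngrams(text, n):
    """Character n-grams of a string, ignoring spaces."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chr-F: F_beta over character n-gram precision/recall."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = _char_ngrams(hypothesis, n), _char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # skip orders longer than either string
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("ein Test", "ein Test"))  # identical strings score 1.0
```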
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 12:27:24 EEST 2024
* port machine: LM0-400-22516.local
| null |
Non_BioNLP
|
# opus-mt-tc-bible-big-mul-deu_eng_nld
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from multiple languages (mul) to German, English and Dutch (deu+eng+nld). Note that many of the listed languages will not be well supported by the model, as the training data is very limited for the majority of them. Translation performance varies a lot, and for a large number of language pairs the model will not work at all.
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-08-18
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): aai aar aau abi abk acd ace acf ach acm acn acr ade adj ady aeu aey afb afh afr agd agn agu ahk aia aka akh akl akp alj aln alp alq alt alz ame amh ami amk amu ang ann anp anv aoz apc apr apu ara arc arg arq arz asm aso ast atg atj atq aui auy ava avk avn avu awa awb awx aze azg azz bak bal bam ban bar bas bav bba bbo bbr bcl bcw bef beh bel bem ben bep bex bfa bfd bfo bgr bhl bho bhz bib bik bim bis biv bjr bjv bku bkv blh blt blz bmh bmk bmq bmu bmv bnp bod boj bom bos bov box bpr bps bpy bqc bqj bqp bre bru brx bss btd bth bto bts btt btx bua bud bug buk bul bus bvy bwq bwu byn bzd bzh bzj bzt caa cab cac cak cat cay cbk cce cco ceb ces cfm cgc cha che chf chm chq chr chu chv chy chz cjk cjo cjp cjv cjy ckb cko cle cme cmn cmo cmr cnh cni cnl cnr cnt cnw cok cop cor cos cot cpa cpu cre crh crn crs crx csb csk cso csy cta ctd ctp ctu cuc cui cuk cut cux cwe cwt cya cym czt daa dad dag dah dan ded deu dga dgi dig dik din diq div dje djk dng dni dnj dob dop drt dsb dsh dtp dty dug dws dww dyi dyo dyu dzo efi egl ell emi eng enm epo ess est eus ewe ext fai fal fao far fas fij fil fin fkv fon for fra frd frm frp frr fry fuc ful fur gag gah gaw gbm gcf gde gej gfk ghs gil gkn gla gle glg glk glv gnd gng gog gor gos got gqr grc grn gsw guc gud guh guj guo gur guw gux gvf gvl gwi gwr gym gyr hag hat hau haw hay hbo hbs hch heb heh her hif hig hil hin hla hlt hmn hne hnj hnn hns hoc hot hrv hrx hsb hsn hui hun hus hvn hwc hye hyw iba ibo icr ido ifa ifb ife ifk ifu ify ign iii ike iku ile ilo imo ina ind inh ino iou ipi ipk iri irk iry isl ita itv ium ixl izh izr jaa jac jak jam jav jbo jbu jdt jmc jpa jpn jun jvn kaa kab kac kal kam kan kao kas kat kau kaz kbd kbm kbp kdc kdj kdl kdn kea kek ken keo ker keu kew kez kgf kgk kha khm khz kia kik kin kir kjb kje kjh kjs kki kkj kle kma kmb kmg kmh kmo kmr kmu knc kne knj knk kno kog koi kok kom kon kpf kpg kpr kpv kpw kpz kqe kqf kqp kqw krc kri krj krl kru ksb ksh ksr ktb ktj kua kub kud kue kum kur 
kus kvn kwf kxc kxm kyc kyf kyg kyq kzf laa lac lad lah lao las lat lav law lbe lcm ldn lee lef lem leu lew lex lez lfn lgg lhu lia lid lif lij lim lin lip lit liv ljp lkt lld lln lme lmo lnd lob lok lon lou lrc lsi ltz lua luc lug luo lus lut luy lzz maa mad mag mah mai maj mak mal mam maq mar mau maw max maz mbb mbf mbt mcb mcp mcu mda mdf med mee meh mek men meq mfe mfh mfi mfk mfq mfy mgd mgm mgo mhi mhl mhx mhy mib mic mie mif mig mih mil mio mit mix miy miz mjc mkd mks mlg mlh mlp mlt mmo mmx mna mnb mnf mnh mni mnr mnw moa mog moh mol mon mop mor mos mox mpg mpm mpt mpx mqb mqj mri mrj mrw msa msm mta muh mux muy mva mvp mvv mwc mwl mwm mwv mww mxb mxt mya myb myk myu myv myw myx mzk mzm mzn mzw mzz naf nak nap nas nau nav nbl nca nch ncj ncl ncu nde ndo nds ndz neb nep new nfr ngt ngu nhe nhg nhi nhn nhu nhw nhx nhy nia nif nii nij nim nin niu njm nlc nld nlv nmz nnb nnh nno nnw nob nog non nop nor not nou nov npi npl npy nqo nsn nso nss nst nsu ntm ntp ntr nuj nus nuy nwb nwi nya nyf nyn nyo nyy nzi oar obo oci ofs oji oku okv old omw ood opm ori orm orv osp oss ota ote otm otn otq ozm pab pad pag pai pal pam pan pao pap pau pbi pbl pck pcm pdc pes pfl phn pib pih pio pis pkb pli pls plt plw pmf pms pmy pne pnt poe poh pol por pot ppk ppl prf prg prs ptp ptu pus pwg pww quc qya rai rap rav rej rhg rif rim rmy roh rom ron rop rro rue rug run rup rus rwo sab sag sah san sas sat sba sbd sbl scn sco sda sdh seh ses sgb sgs sgw sgz shi shk shn shs shy sig sil sin sjn skr sld slk sll slv sma sme smk sml smn smo sna snc snd snp snw som sot soy spa spl spp sps sqi srd srm srn srp srq ssd ssw ssx stn stp stq sue suk sun sur sus suz swa swc swe swg swh swp sxb sxn syc syl syr szb szl tab tac tah taj tam taq tat tbc tbl tbo tbz tcs tcy tel tem teo ter tet tfr tgk tgl tgo tgp tha thk tig tik tim tir tkl tlb tlf tlh tlj tlx tly tmc tmh tmr tmw toh toi toj ton tpa tpi tpm tpw tpz trc trn trq trs trv tsn tso tsw ttc tte ttr tts tuc tuf tuk tum tur tvl twb twi twu txa tyj 
tyv tzh tzj tzl tzm tzo ubr ubu udm udu uig ukr umb urd usa usp uvl uzb vag vec ven vie viv vls vmw vmy vol vot vro vun wae waj wal wap war wbm wbp wed wln wmt wmw wnc wnu wob wol wsk wuu wuv xal xcl xed xho xmf xog xon xrb xsb xsi xsm xsr xtd xtm xuo yal yam yaq yaz yby ycl ycn yid yli yml yon yor yua yue yut yuw zam zap zea zgh zha zia zlm zom zsm zul zyp zza
- Target Language(s): deu eng nld
- Valid Target Language Labels: >>deu<< >>eng<< >>nld<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/mul-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>deu<<`.
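The required input format can be built with a trivial helper; `add_target_token` below is a hypothetical name for illustration, not part of the transformers API — it simply prepends the sentence-initial label described above:

```python
def add_target_token(sentence: str, target_lang: str) -> str:
    """Prepend the sentence-initial target-language token the model expects."""
    valid = {"deu", "eng", "nld"}  # valid target language labels for this model
    if target_lang not in valid:
        raise ValueError(f"unsupported target language: {target_lang}")
    return f">>{target_lang}<< {sentence}"

print(add_target_token("I don't know if it is true.", "nld"))
# >>nld<< I don't know if it is true.
```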
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Also note that many of the listed languages will not be well supported by the model, as the training data is very limited for the majority of them. Translation performance varies a lot, and for a large number of language pairs the model will not work at all.
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>eng<< Jedes Mädchen, das ich sehe, gefällt mir.",
">>nld<< I don't know if it is true."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-mul-deu_eng_nld"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I like every girl I see.
# Ik weet niet of het waar is.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_nld")
print(pipe(">>eng<< Jedes Mädchen, das ich sehe, gefällt mir."))
# expected output: I like every girl I see.
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/mul-deu%2Beng%2Bnld/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-18)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-deu+eng+nld/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-18.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.61102 | 41.7 | 10000 | 78944 |
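The chr-F column above is a character n-gram F-score. As a rough illustration of the metric (not the sacreBLEU implementation used for the official scores), a simplified pure-Python sketch that ignores whitespace and averages over n-gram orders 1–6:

```python
from collections import Counter

def char_ngrams(text, n):
    # chr-F is computed over character n-grams; whitespace is ignored here
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    # F-beta with beta=2, weighting recall higher than precision
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("I like every girl I see.", "I like every girl I see."))  # identical strings → 1.0
```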
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 12:27:24 EEST 2024
* port machine: LM0-400-22516.local
|
{"language": ["aa", "aai", "aau", "ab", "abi", "acd", "ace", "acf", "ach", "acn", "acr", "ade", "adj", "ady", "aeu", "aey", "af", "afh", "agd", "agn", "agu", "ahk", "aia", "ak", "akh", "akl", "akp", "alj", "alp", "alq", "alt", "alz", "am", "ame", "ami", "amk", "amu", "an", "ang", "ann", "anp", "anv", "aoz", "apr", "apu", "ar", "arc", "as", "aso", "ast", "atg", "atj", "atq", "aui", "auy", "av", "avk", "avn", "avu", "awa", "awb", "awx", "az", "azg", "azz", "ba", "bal", "ban", "bar", "bas", "bav", "bba", "bbo", "bbr", "bcl", "bcw", "be", "bef", "beh", "bem", "bep", "bex", "bfa", "bfd", "bfo", "bg", "bgr", "bhl", "bho", "bhz", "bi", "bib", "bik", "bim", "biv", "bjr", "bjv", "bku", "bkv", "blh", "blt", "blz", "bm", "bmh", "bmk", "bmq", "bmu", "bmv", "bn", "bnp", "bo", "boj", "bom", "bov", "box", "bpr", "bps", "bpy", "bqc", "bqj", "bqp", "br", "bru", "brx", "bs", "bss", "btd", "bth", "bto", "bts", "btt", "btx", "bua", "bud", "bug", "buk", "bus", "bvy", "bwq", "bwu", "byn", "bzd", "bzh", "bzj", "bzt", "ca", "caa", "cab", "cac", "cak", "cay", "cbk", "cce", "cco", "ce", "ceb", "cfm", "cgc", "ch", "chf", "chm", "chq", "chr", "chy", "chz", "cjk", "cjo", "cjp", "cjv", "cko", "cle", "cme", "cmo", "cmr", "cnh", "cni", "cnl", "cnt", "cnw", "co", "cok", "cop", "cot", "cpa", "cpu", "cr", "crh", "crn", "crs", "crx", "cs", "csb", "csk", "cso", "csy", "cta", "ctd", "ctp", "ctu", "cu", "cuc", "cui", "cuk", "cut", "cux", "cv", "cwe", "cwt", "cy", "cya", "czt", "da", "daa", "dad", "dag", "dah", "de", "ded", "dga", "dgi", "dig", "dik", "din", "diq", "dje", "djk", "dng", "dni", "dnj", "dob", "dop", "drt", "dsb", "dsh", "dtp", "dug", "dv", "dws", "dww", "dyi", "dyo", "dyu", "dz", "ee", "efi", "egl", "el", "emi", "en", "enm", "eo", "es", "ess", "et", "eu", "ext", "fa", "fai", "fal", "far", "ff", "fi", "fil", "fj", "fkv", "fo", "fon", "for", "fr", "frd", "frm", "frp", "frr", "fur", "fy", "ga", "gag", "gah", "gaw", "gbm", "gcf", "gd", "gde", "gej", "gfk", "ghs", "gil", "gkn", "gl", "glk", 
"gn", "gnd", "gng", "gog", "gor", "gos", "got", "gqr", "grc", "gsw", "gu", "guc", "gud", "guh", "guo", "gur", "guw", "gux", "gv", "gvf", "gvl", "gwi", "gwr", "gym", "gyr", "ha", "hag", "haw", "hay", "hbo", "hch", "he", "heh", "hi", "hif", "hig", "hil", "hla", "hlt", "hmn", "hne", "hnj", "hnn", "hns", "hoc", "hot", "hr", "hrx", "hsb", "ht", "hu", "hui", "hus", "hvn", "hwc", "hy", "hyw", "hz", "ia", "iba", "icr", "id", "ie", "ifa", "ifb", "ife", "ifk", "ifu", "ify", "ig", "ign", "ii", "ik", "ilo", "imo", "inh", "ino", "io", "iou", "ipi", "iri", "irk", "iry", "is", "it", "itv", "iu", "ium", "ixl", "izh", "izr", "ja", "jaa", "jac", "jam", "jbo", "jbu", "jdt", "jmc", "jpa", "jun", "jv", "jvn", "ka", "kaa", "kab", "kac", "kam", "kao", "kbd", "kbm", "kbp", "kdc", "kdj", "kdl", "kdn", "kea", "kek", "ken", "keo", "ker", "keu", "kew", "kez", "kg", "kgf", "kgk", "kha", "khz", "ki", "kia", "kj", "kjb", "kje", "kjh", "kjs", "kk", "kki", "kkj", "kl", "kle", "km", "kma", "kmb", "kmg", "kmh", "kmo", "kmu", "kn", "kne", "knj", "knk", "kno", "kog", "kok", "kpf", "kpg", "kpr", "kpw", "kpz", "kqe", "kqf", "kqp", "kqw", "kr", "krc", "kri", "krj", "krl", "kru", "ks", "ksb", "ksh", "ksr", "ktb", "ktj", "ku", "kub", "kud", "kue", "kum", "kus", "kv", "kvn", "kw", "kwf", "kxc", "kxm", "ky", "kyc", "kyf", "kyg", "kyq", "kzf", "la", "laa", "lac", "lad", "lah", "las", "law", "lb", "lbe", "lcm", "ldn", "lee", "lef", "lem", "leu", "lew", "lex", "lez", "lfn", "lg", "lgg", "lhu", "li", "lia", "lid", "lif", "lij", "lip", "liv", "ljp", "lkt", "lld", "lln", "lme", "lmo", "ln", "lnd", "lo", "lob", "lok", "lon", "lou", "lrc", "lsi", "lt", "lua", "luc", "luo", "lus", "lut", "luy", "lv", "lzz", "maa", "mad", "mag", "mai", "maj", "mak", "mam", "maq", "mau", "maw", "maz", "mbb", "mbf", "mbt", "mcb", "mcp", "mcu", "mda", "mdf", "med", "mee", "meh", "mek", "men", "meq", "mfe", "mfh", "mfi", "mfk", "mfq", "mfy", "mg", "mgd", "mgm", "mgo", "mh", "mhi", "mhl", "mhx", "mhy", "mi", "mib", "mic", "mie", "mif", 
"mig", "mih", "mil", "mio", "mit", "mix", "miy", "miz", "mjc", "mk", "mks", "ml", "mlh", "mlp", "mmo", "mmx", "mn", "mna", "mnb", "mnf", "mnh", "mni", "mnr", "mnw", "mo", "moa", "mog", "moh", "mop", "mor", "mos", "mox", "mpg", "mpm", "mpt", "mpx", "mqb", "mqj", "mr", "mrj", "mrw", "ms", "msm", "mt", "mta", "muh", "mux", "muy", "mva", "mvp", "mvv", "mwc", "mwl", "mwm", "mwv", "mww", "mxb", "mxt", "my", "myb", "myk", "myu", "myv", "myw", "myx", "mzk", "mzm", "mzn", "mzw", "mzz", "na", "naf", "nak", "nap", "nas", "nb", "nca", "nch", "ncj", "ncl", "ncu", "nd", "nds", "ndz", "ne", "neb", "new", "nfr", "ng", "ngt", "ngu", "nhe", "nhg", "nhi", "nhn", "nhu", "nhw", "nhx", "nhy", "nia", "nif", "nii", "nij", "nim", "nin", "niu", "njm", "nl", "nlc", "nlv", "nmz", "nn", "nnb", "nnh", "nnw", false, "nog", "non", "nop", "not", "nou", "nov", "npl", "npy", "nqo", "nr", "nsn", "nso", "nss", "nst", "nsu", "ntm", "ntp", "ntr", "nuj", "nus", "nuy", "nv", "nwb", "nwi", "ny", "nyf", "nyn", "nyo", "nyy", "nzi", "oar", "obo", "oc", "ofs", "oj", "oku", "okv", "old", "om", "omw", "ood", "opm", "or", "orv", "os", "osp", "ota", "ote", "otm", "otn", "otq", "ozm", "pa", "pab", "pad", "pag", "pai", "pal", "pam", "pao", "pap", "pau", "pbi", "pbl", "pck", "pcm", "pdc", "pfl", "phn", "pi", "pib", "pih", "pio", "pis", "pkb", "pl", "pls", "plw", "pmf", "pms", "pmy", "pne", "pnt", "poe", "poh", "pot", "ppk", "ppl", "prf", "prg", "ps", "pt", "ptp", "ptu", "pwg", "pww", "quc", "qya", "rai", "rap", "rav", "rej", "rhg", "rif", "rim", "rm", "rmy", "rn", "ro", "rom", "rop", "rro", "ru", "rue", "rug", "rup", "rw", "rwo", "sa", "sab", "sah", "sas", "sat", "sba", "sbd", "sbl", "sc", "scn", "sco", "sd", "sda", "se", "seh", "ses", "sg", "sgb", "sgs", "sgw", "sgz", "sh", "shi", "shk", "shn", "shs", "shy", "si", "sig", "sil", "sjn", "sk", "skr", "sl", "sld", "sll", "sm", "sma", "smk", "sml", "smn", "sn", "snc", "snp", "snw", "so", "soy", "spl", "spp", "sps", "sq", "sr", "srm", "srn", "srq", "ss", "ssd", "ssx", 
"st", "stn", "stp", "stq", "su", "sue", "suk", "sur", "sus", "suz", "sv", "sw", "swg", "swp", "sxb", "sxn", "syc", "syl", "syr", "szb", "szl", "ta", "tab", "tac", "taj", "taq", "tbc", "tbl", "tbo", "tbz", "tcs", "tcy", "te", "tem", "teo", "ter", "tet", "tfr", "tg", "tgo", "tgp", "th", "thk", "ti", "tig", "tik", "tim", "tk", "tkl", "tl", "tlb", "tlf", "tlh", "tlj", "tlx", "tly", "tmc", "tmh", "tmr", "tn", "to", "toh", "toi", "toj", "tpa", "tpi", "tpm", "tpw", "tpz", "tr", "trc", "trn", "trq", "trs", "trv", "ts", "tsw", "tt", "ttc", "tte", "ttr", "tts", "tuc", "tuf", "tum", "tvl", "tw", "twb", "twu", "txa", "ty", "tyj", "tyv", "tzh", "tzj", "tzl", "tzm", "tzo", "ubr", "ubu", "udm", "udu", "ug", "uk", "umb", "ur", "usa", "usp", "uvl", "uz", "vag", "ve", "vec", "vi", "viv", "vls", "vmw", "vmy", "vo", "vot", "vun", "wa", "wae", "waj", "wal", "wap", "war", "wbm", "wbp", "wed", "wmt", "wmw", "wnc", "wnu", "wo", "wob", "wsk", "wuv", "xal", "xcl", "xed", "xh", "xmf", "xog", "xon", "xrb", "xsb", "xsi", "xsm", "xsr", "xtd", "xtm", "xuo", "yal", "yam", "yaq", "yaz", "yby", "ycl", "ycn", "yi", "yli", "yml", "yo", "yon", "yua", "yut", "yuw", "za", "zam", "zap", "zea", "zgh", "zh", "zia", "zom", "zu", "zyp", "zza"], "library_name": "transformers", "license": "apache-2.0", "tags": ["translation", "opus-mt-tc-bible"], "model-index": [{"name": "opus-mt-tc-bible-big-mul-deu_eng_nld", "results": [{"task": {"type": "translation", "name": "Translation multi-multi"}, "dataset": {"name": "tatoeba-test-v2020-07-28-v2023-09-26", "type": "tatoeba_mt", "args": "multi-multi"}, "metrics": [{"type": "bleu", "value": 41.7, "name": "BLEU"}, {"type": "chrf", "value": 0.61102, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 44,386 |
jilangdi/distilbert-base-uncased-sts
|
jilangdi
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-11T06:00:34Z |
2024-06-11T06:02:04+00:00
| 4 | 1 |
---
base_model: distilbert/distilbert-base-uncased
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: A chef is preparing some food.
sentences:
- Five birds stand on the snow.
- A chef prepared a meal.
- There is no 'still' that is not relative to some other object.
- source_sentence: A woman is adding oil on fishes.
sentences:
- Large cruise ship floating on the water.
- It refers to the maximum f-stop (which is defined as the ratio of focal length
to effective aperture diameter).
- The woman is cutting potatoes.
- source_sentence: The player shoots the winning points.
sentences:
- Minimum wage laws hurt the least skilled, least productive the most.
- The basketball player is about to score points for his team.
- Three televisions, on on the floor, the other two on a box.
- source_sentence: Stars form in star-formation regions, which itself develop from
molecular clouds.
sentences:
- Although I believe Searle is mistaken, I don't think you have found the problem.
- It may be possible for a solar system like ours to exist outside of a galaxy.
- A blond-haired child performing on the trumpet in front of a house while his younger
brother watches.
- source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen
consort, the King has always been the sovereign.
sentences:
- At first, I thought this is a bit of a tricky question.
- A man plays the guitar.
- There is a very good reason not to refer to the Queen's spouse as "King" - because
they aren't the King.
co2_eq_emissions:
emissions: 39.55504012195411
energy_consumed: 0.07407546705036323
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: AMD EPYC 7H12 64-Core Processor
ram_total_size: 229.14864349365234
hours_used: 0.147
hardware_used: 8 x NVIDIA GeForce RTX 3090
model-index:
- name: SentenceTransformer based on distilbert/distilbert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8600140595861905
name: Pearson Cosine
- type: spearman_cosine
value: 0.8598983710598386
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8243680239709271
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8279844492084353
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.824951390126028
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8287648794439747
name: Spearman Euclidean
- type: pearson_dot
value: 0.8082965335059282
name: Pearson Dot
- type: spearman_dot
value: 0.8091677829512911
name: Spearman Dot
- type: pearson_max
value: 0.8600140595861905
name: Pearson Max
- type: spearman_max
value: 0.8598983710598386
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8268457854861329
name: Pearson Cosine
- type: spearman_cosine
value: 0.8228490860497294
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8156507100664523
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8121071145557491
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8163157326426538
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8129552976781299
name: Spearman Euclidean
- type: pearson_dot
value: 0.7410469543934988
name: Pearson Dot
- type: spearman_dot
value: 0.7354483817269781
name: Spearman Dot
- type: pearson_max
value: 0.8268457854861329
name: Pearson Max
- type: spearman_max
value: 0.8228490860497294
name: Spearman Max
- type: pearson_cosine
value: 0.8291194587336435
name: Pearson Cosine
- type: spearman_cosine
value: 0.826073377213203
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8189784822965882
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8168853954005567
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8196499152175635
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8172865511141795
name: Spearman Euclidean
- type: pearson_dot
value: 0.7476019871405575
name: Pearson Dot
- type: spearman_dot
value: 0.7396418058035931
name: Spearman Dot
- type: pearson_max
value: 0.8291194587336435
name: Pearson Max
- type: spearman_max
value: 0.826073377213203
name: Spearman Max
---
# SentenceTransformer based on distilbert/distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
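The `Pooling` module above uses mean pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged, skipping padding positions indicated by the attention mask. A minimal sketch of that step, using plain Python lists instead of tensors for illustration only:

```python
def mean_pooling(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions (mask == 0)."""
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            for j in range(dim):
                summed[j] += vec[j]
            count += 1
    return [s / count for s in summed]

# three token vectors, the last one is padding
print(mean_pooling([[1.0, 3.0], [3.0, 5.0], [9.0, 9.0]], [1, 1, 0]))  # → [2.0, 4.0]
```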
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jilangdi/distilbert-base-uncased-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man plays the guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.86 |
| **spearman_cosine** | **0.8599** |
| pearson_manhattan | 0.8244 |
| spearman_manhattan | 0.828 |
| pearson_euclidean | 0.825 |
| spearman_euclidean | 0.8288 |
| pearson_dot | 0.8083 |
| spearman_dot | 0.8092 |
| pearson_max | 0.86 |
| spearman_max | 0.8599 |
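The Spearman values above are rank correlations between the predicted cosine similarities and the gold scores. A minimal sketch of the computation, assuming no tied ranks (the full evaluator handles ties):

```python
def spearman(xs, ys):
    """Spearman rank correlation, valid only when there are no tied ranks."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman([1, 2, 3, 4], [1, 2, 3, 4]))  # perfectly monotone → 1.0
```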
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8268 |
| **spearman_cosine** | **0.8228** |
| pearson_manhattan | 0.8157 |
| spearman_manhattan | 0.8121 |
| pearson_euclidean | 0.8163 |
| spearman_euclidean | 0.813 |
| pearson_dot | 0.741 |
| spearman_dot | 0.7354 |
| pearson_max | 0.8268 |
| spearman_max | 0.8228 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8291 |
| **spearman_cosine** | **0.8261** |
| pearson_manhattan | 0.819 |
| spearman_manhattan | 0.8169 |
| pearson_euclidean | 0.8196 |
| spearman_euclidean | 0.8173 |
| pearson_dot | 0.7476 |
| spearman_dot | 0.7396 |
| pearson_max | 0.8291 |
| spearman_max | 0.8261 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
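`CosineSimilarityLoss` scores each training pair by the cosine similarity of the two sentence embeddings and regresses that score onto the gold label with the `MSELoss` configured above. A minimal numpy sketch of the objective (illustrative only, not the library implementation):

```python
import numpy as np

def cosine_similarity_mse_loss(emb1, emb2, gold_scores):
    """Cosine similarity of paired embeddings, regressed onto gold scores with MSE."""
    emb1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    emb2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    cos_sim = (emb1 * emb2).sum(axis=1)                   # one similarity per pair
    return float(np.mean((cos_sim - gold_scores) ** 2))   # the MSELoss part
```

Identical embeddings labeled 1.0 therefore incur zero loss, while orthogonal embeddings labeled 1.0 incur a loss of 1.0.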
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
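With `lr_scheduler_type: linear` and `warmup_ratio: 0.1`, the learning rate ramps up linearly over the first 10% of training steps and then decays linearly to zero. A rough sketch of that curve (mirroring, but not reproducing, the `transformers` linear schedule; the rounding of the warmup step count may differ):

```python
def linear_lr_with_warmup(step, total_steps, base_lr=5e-05, warmup_ratio=0.1):
    """Learning rate at a given step under linear warmup followed by linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)  # linear warmup
    # linear decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With the 180 total steps shown in the training logs, warmup covers the first 18 steps.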
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:|
| 2.2222 | 100 | 0.0423 | 0.0273 | 0.8592 | - |
| 4.0 | 180 | - | - | - | 0.8228 |
| 2.2222 | 100 | 0.0049 | 0.0273 | 0.8599 | - |
| 4.0 | 180 | - | - | - | 0.8261 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.074 kWh
- **Carbon Emitted**: 0.040 kg of CO2
- **Hours Used**: 0.147 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 8 x NVIDIA GeForce RTX 3090
- **CPU Model**: AMD EPYC 7H12 64-Core Processor
- **RAM Size**: 229.15 GB
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on distilbert/distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
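The `Pooling` module above has `pooling_mode_mean_tokens: True`: the sentence embedding is the average of the token embeddings, with padding positions excluded via the attention mask. An illustrative numpy sketch (not the library code):

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    """Average token embeddings over real (non-padding) tokens only.

    token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len)."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # broadcast over dim
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # avoid division by zero
    return summed / counts
```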
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jilangdi/distilbert-base-uncased-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man plays the guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
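Because the model compares embeddings with cosine similarity, semantic search amounts to ranking corpus embeddings by similarity to a query embedding. A minimal numpy sketch, assuming the embeddings were already computed (for example with `model.encode`):

```python
import numpy as np

def rank_by_cosine(query_emb, corpus_embs):
    """Return corpus indices sorted from most to least similar, plus the scores."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = c @ q  # cosine similarities, shape (n_corpus,)
    return np.argsort(-sims), sims
```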
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.86 |
| **spearman_cosine** | **0.8599** |
| pearson_manhattan | 0.8244 |
| spearman_manhattan | 0.828 |
| pearson_euclidean | 0.825 |
| spearman_euclidean | 0.8288 |
| pearson_dot | 0.8083 |
| spearman_dot | 0.8092 |
| pearson_max | 0.86 |
| spearman_max | 0.8599 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8268 |
| **spearman_cosine** | **0.8228** |
| pearson_manhattan | 0.8157 |
| spearman_manhattan | 0.8121 |
| pearson_euclidean | 0.8163 |
| spearman_euclidean | 0.813 |
| pearson_dot | 0.741 |
| spearman_dot | 0.7354 |
| pearson_max | 0.8268 |
| spearman_max | 0.8228 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8291 |
| **spearman_cosine** | **0.8261** |
| pearson_manhattan | 0.819 |
| spearman_manhattan | 0.8169 |
| pearson_euclidean | 0.8196 |
| spearman_euclidean | 0.8173 |
| pearson_dot | 0.7476 |
| spearman_dot | 0.7396 |
| pearson_max | 0.8291 |
| spearman_max | 0.8261 |
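The Spearman values above are Pearson correlations computed on ranks rather than raw scores, which is why they reward any monotonic relationship between predicted similarities and gold labels. A bare-bones numpy sketch (ignoring tie correction, which `scipy.stats.spearmanr` handles properly):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no tie handling)."""
    ranks_a = np.argsort(np.argsort(a)).astype(float)
    ranks_b = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ranks_a, ranks_b)[0, 1])
```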
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:|
| 2.2222 | 100 | 0.0423 | 0.0273 | 0.8592 | - |
| 4.0 | 180 | - | - | - | 0.8228 |
| 2.2222 | 100 | 0.0049 | 0.0273 | 0.8599 | - |
| 4.0 | 180 | - | - | - | 0.8261 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.074 kWh
- **Carbon Emitted**: 0.040 kg of CO2
- **Hours Used**: 0.147 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 8 x NVIDIA GeForce RTX 3090
- **CPU Model**: AMD EPYC 7H12 64-Core Processor
- **RAM Size**: 229.15 GB
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "distilbert/distilbert-base-uncased", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "A chef is preparing some food.", "sentences": ["Five birds stand on the snow.", "A chef prepared a meal.", "There is no 'still' that is not relative to some other object."]}, {"source_sentence": "A woman is adding oil on fishes.", "sentences": ["Large cruise ship floating on the water.", "It refers to the maximum f-stop (which is defined as the ratio of focal length to effective aperture diameter).", "The woman is cutting potatoes."]}, {"source_sentence": "The player shoots the winning points.", "sentences": ["Minimum wage laws hurt the least skilled, least productive the most.", "The basketball player is about to score points for his team.", "Three televisions, on on the floor, the other two on a box."]}, {"source_sentence": "Stars form in star-formation regions, which itself develop from molecular clouds.", "sentences": ["Although I believe Searle is mistaken, I don't think you have found the problem.", "It may be possible for a solar system like ours to exist outside of a galaxy.", "A blond-haired child performing on the trumpet in front of a house while his younger brother watches."]}, {"source_sentence": "While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.", "sentences": ["At first, I thought this is a bit of a tricky question.", "A man plays the guitar.", "There is a very good reason not to refer to the Queen's spouse as \"King\" - because they aren't the King."]}], "co2_eq_emissions": {"emissions": 39.55504012195411, "energy_consumed": 0.07407546705036323, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "AMD EPYC 7H12 64-Core Processor", "ram_total_size": 229.14864349365234, "hours_used": 0.147, "hardware_used": "8 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on distilbert/distilbert-base-uncased", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.8600140595861905, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8598983710598386, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8243680239709271, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8279844492084353, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.824951390126028, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8287648794439747, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.8082965335059282, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.8091677829512911, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8600140595861905, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8598983710598386, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.8268457854861329, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8228490860497294, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8156507100664523, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8121071145557491, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8163157326426538, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8129552976781299, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7410469543934988, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7354483817269781, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8268457854861329, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8228490860497294, "name": "Spearman Max"}, {"type": "pearson_cosine", "value": 0.8291194587336435, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.826073377213203, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8189784822965882, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8168853954005567, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8196499152175635, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8172865511141795, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7476019871405575, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7396418058035931, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8291194587336435, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.826073377213203, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 44,387 |
Woondsc/opus-mt-ko-en-medterm
|
Woondsc
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"Pytorch",
"translation",
"ko",
"en",
"dataset:junyeong-nero/KMA-term",
"base_model:Helsinki-NLP/opus-mt-ko-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ko-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-14T04:04:30Z |
2025-02-14T04:17:38+00:00
| 22 | 0 |
---
base_model:
- Helsinki-NLP/opus-mt-ko-en
datasets:
- junyeong-nero/KMA-term
language:
- ko
- en
license: apache-2.0
pipeline_tag: translation
tags:
- Pytorch
- transformers
- marian
---
| null |
Non_BioNLP
|
{"base_model": ["Helsinki-NLP/opus-mt-ko-en"], "datasets": ["junyeong-nero/KMA-term"], "language": ["ko", "en"], "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["Pytorch", "transformers", "marian"]}
|
task
|
[
"TRANSLATION"
] | 44,388 |
|
daniel40/2d93c1da-5840-483e-b679-8f726cbeac55
|
daniel40
| null |
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | 2025-01-27T05:01:27Z |
2025-01-27T05:06:16+00:00
| 1 | 0 |
---
base_model: unsloth/SmolLM-360M
library_name: peft
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2d93c1da-5840-483e-b679-8f726cbeac55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b5d7875c7013b5e4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5d7875c7013b5e4_train_data.json
type:
field_input: transcription
field_instruction: glosses
field_output: translation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/2d93c1da-5840-483e-b679-8f726cbeac55
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b5d7875c7013b5e4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e26c401d-844c-4c17-a53e-3099ddf794a7
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e26c401d-844c-4c17-a53e-3099ddf794a7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2d93c1da-5840-483e-b679-8f726cbeac55
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
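The adapter configured above (`lora_r: 8`, `lora_alpha: 16`, `lora_target_linear: true`) learns a rank-8 update that is added to each frozen base weight and scaled by `alpha / r`. A minimal numpy sketch of the idea (illustrative, not the `peft` implementation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass through a frozen weight W plus a rank-r LoRA update.

    A: (r, in_features) down-projection; B: (out_features, r) up-projection."""
    delta_W = (alpha / r) * (B @ A)  # low-rank update, same shape as W
    return x @ (W + delta_W).T
```

Since `B` is initialized to zeros in LoRA, the adapter starts out as a no-op and only diverges from the base model as training updates `A` and `B`.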
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0022 | 13 | nan |
| 0.0 | 0.0045 | 26 | nan |
| 0.0 | 0.0067 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b5d7875c7013b5e4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5d7875c7013b5e4_train_data.json
type:
field_input: transcription
field_instruction: glosses
field_output: translation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/2d93c1da-5840-483e-b679-8f726cbeac55
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b5d7875c7013b5e4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e26c401d-844c-4c17-a53e-3099ddf794a7
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e26c401d-844c-4c17-a53e-3099ddf794a7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2d93c1da-5840-483e-b679-8f726cbeac55
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the dataset specified in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
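Two derived quantities from the config above are easy to sanity-check (single-GPU assumed; the LoRA scaling factor is the standard alpha/r ratio):

```python
# Values copied from the Axolotl config above.
micro_batch_size = 2
gradient_accumulation_steps = 4
lora_r = 8
lora_alpha = 16

# Effective batch per optimizer step on one device.
effective_batch = micro_batch_size * gradient_accumulation_steps
# Factor applied to the LoRA update (alpha / r).
lora_scaling = lora_alpha / lora_r

print(effective_batch, lora_scaling)  # 8 2.0
```

The effective batch of 8 matches the reported `total_train_batch_size`.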
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0022 | 13 | nan |
| 0.0 | 0.0045 | 26 | nan |
| 0.0 | 0.0067 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
{"base_model": "unsloth/SmolLM-360M", "library_name": "peft", "license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "2d93c1da-5840-483e-b679-8f726cbeac55", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 44,389 |
Adi-0-0-Gupta/Embedding-v0
|
Adi-0-0-Gupta
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:60323",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-18T14:20:00Z |
2024-06-18T14:20:03+00:00
| 9 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:60323
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: No recipes found with these beef stock powder and orange juice!
sentences:
- Can you provide recipe ideas with beef stock powder and orange juice?
- What are some recipes that utilize jasmine rice and thai red curry paste effectively?
- What recipes incorporate broccoli and bacon into meals?
- source_sentence: No recipes found with these nutmeg flower and angel hair rice noodles!
sentences:
- What dishes can be created with kale and bok choy?
- What recipes incorporate green zucchini and vegan ground beef into meals?
- Can you provide me with meal ideas using nutmeg flower and angel hair rice noodles?
- source_sentence: No recipes found with these cinnamon and ground lamb!
sentences:
- Can you suggest dishes where cinnamon and ground lamb is key?
- What diet tags are relevant to Sneha's Aloo Baingan ?
- What recipes are there with toasted sesame oil and red lentils/masoor?
- source_sentence: No recipes found with these red lentils/masoor and bok choy!
sentences:
- What are the culinary uses of chili sauce and sriracha?
- What are some ways to use canned tomato puree and frozen ube in recipes?
- What are some ideas for dishes with red lentils/masoor and bok choy?
- source_sentence: No recipes found with these red onion and cubed stuffing!
sentences:
- Can you provide meal suggestions involving vanilla extract and brown lentil/black
masoor dal?
- What recipes incorporate methi (fenugreek) and honey in their ingredients?
- What culinary preparations can be made with red onion and cubed stuffing?
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 384
type: dim_384
metrics:
- type: cosine_accuracy@1
value: 0.9819483813217962
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9976130091004028
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9995524392063255
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9819483813217962
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33253766970013426
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1999104878412651
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9819483813217962
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9976130091004028
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9995524392063255
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9923670621371893
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9897597379993318
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9897597379993323
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.9812024466656721
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.997463822169178
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9998508130687752
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9812024466656721
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3324879407230593
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19997016261375503
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9812024466656721
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.997463822169178
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9998508130687752
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9921395779775503
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9894450246158434
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9894450246158436
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.979561390422199
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9970162613755035
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9998508130687752
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.979561390422199
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3323387537918345
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19997016261375505
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.979561390422199
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9970162613755035
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9998508130687752
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9913010184783637
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9883310955293644
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9883310955293649
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.9816500074593466
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9968670744442787
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9997016261375503
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9816500074593466
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3322890248147595
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19994032522751004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9816500074593466
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9968670744442787
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9997016261375503
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9920343842432707
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9893333120209138
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9893333120209146
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Adi-0-0-Gupta/Embedding-v0")
# Run inference
sentences = [
'No recipes found with these red onion and cubed stuffing!',
'What culinary preparations can be made with red onion and cubed stuffing?',
'Can you provide meal suggestions involving vanilla extract and brown lentil/black masoor dal?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
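Because the model was trained with MatryoshkaLoss at dimensionalities 384/256/128/64, embeddings can also be truncated to a shorter prefix and re-normalized with little quality loss. A minimal NumPy sketch (the random array below is a stand-in for real model output):

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each row and L2-normalize again."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

full = np.random.default_rng(0).normal(size=(3, 384))  # stand-in for model.encode(...)
small = truncate_embeddings(full, 64)
print(small.shape)  # (3, 64)
```

Recent sentence-transformers releases (including the 3.0.x used here) also expose a `truncate_dim` argument on `SentenceTransformer` that does this internally.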
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_384`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9819 |
| cosine_accuracy@3 | 0.9976 |
| cosine_accuracy@5 | 0.9996 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9819 |
| cosine_precision@3 | 0.3325 |
| cosine_precision@5 | 0.1999 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9819 |
| cosine_recall@3 | 0.9976 |
| cosine_recall@5 | 0.9996 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9924 |
| cosine_mrr@10 | 0.9898 |
| **cosine_map@100** | **0.9898** |
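Note the pattern in the tables: with exactly one relevant document per query, as in an anchor–positive evaluation setup like this one, recall@k equals accuracy@k and precision@k is accuracy@k / k (hence precision@10 ≈ 0.1 when accuracy@10 = 1.0). A toy sketch with hypothetical ranks:

```python
def metrics_at_k(ranks, k):
    """ranks: 1-based rank of the single relevant document for each query."""
    hits = sum(1 for r in ranks if r <= k)
    accuracy = hits / len(ranks)
    return accuracy, accuracy / k, accuracy  # accuracy@k, precision@k, recall@k

ranks = [1, 1, 2, 1, 4]  # hypothetical ranks, not the real evaluation data
print(metrics_at_k(ranks, 3))  # (0.8, 0.26666666666666666, 0.8)
```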
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9812 |
| cosine_accuracy@3 | 0.9975 |
| cosine_accuracy@5 | 0.9999 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9812 |
| cosine_precision@3 | 0.3325 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9812 |
| cosine_recall@3 | 0.9975 |
| cosine_recall@5 | 0.9999 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9921 |
| cosine_mrr@10 | 0.9894 |
| **cosine_map@100** | **0.9894** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9796 |
| cosine_accuracy@3 | 0.997 |
| cosine_accuracy@5 | 0.9999 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9796 |
| cosine_precision@3 | 0.3323 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9796 |
| cosine_recall@3 | 0.997 |
| cosine_recall@5 | 0.9999 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9913 |
| cosine_mrr@10 | 0.9883 |
| **cosine_map@100** | **0.9883** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9817 |
| cosine_accuracy@3 | 0.9969 |
| cosine_accuracy@5 | 0.9997 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9817 |
| cosine_precision@3 | 0.3323 |
| cosine_precision@5 | 0.1999 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9817 |
| cosine_recall@3 | 0.9969 |
| cosine_recall@5 | 0.9997 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.992 |
| cosine_mrr@10 | 0.9893 |
| **cosine_map@100** | **0.9893** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 60,323 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 21.41 tokens</li><li>max: 503 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 16.8 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| <code>No recipes found with these indian cottage cheese (paneer) and bitter melon!</code> | <code>What are some culinary options with indian cottage cheese (paneer) and bitter melon?</code> |
| <code>No recipes found with these curry leaf and rice cakes!</code> | <code>What recipes can be made using curry leaf and rice cakes?</code> |
| <code>No recipes found with these bacon and rosemary!</code> | <code>What are the different culinary recipes that use bacon and rosemary?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
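With the parameters above, MatryoshkaLoss evaluates MultipleNegativesRankingLoss on each truncated embedding size and sums the results using the given weights. A toy illustration of that reduction (the per-dimension loss values are made up, not taken from this training run):

```python
matryoshka_dims = [384, 256, 128, 64]
matryoshka_weights = [1, 1, 1, 1]

# Hypothetical per-dimension ranking losses.
per_dim_loss = {384: 0.020, 256: 0.022, 128: 0.025, 64: 0.031}

# Weighted sum across dimensionalities, as MatryoshkaLoss does.
total = sum(w * per_dim_loss[d] for d, w in zip(matryoshka_dims, matryoshka_weights))
print(round(total, 3))  # 0.098
```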
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
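A quick consistency check between these settings and the training set: with 60,323 samples and an effective batch of 64 × 8 (single-device assumed), one epoch is 117 optimizer steps, which matches the evaluation at step 117 (epoch 0.9926) in the training log further down:

```python
dataset_size = 60_323          # training samples (from the dataset section)
per_device_batch = 64
grad_accum = 8
num_devices = 1                # assumption: single GPU

effective_batch = per_device_batch * grad_accum * num_devices
steps_per_epoch = dataset_size // effective_batch
print(effective_batch, steps_per_epoch)  # 512 117
```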
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
|:------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.0848 | 10 | 3.9258 | - | - | - | - |
| 0.1697 | 20 | 3.0513 | - | - | - | - |
| 0.2545 | 30 | 1.6368 | - | - | - | - |
| 0.3393 | 40 | 0.5491 | - | - | - | - |
| 0.4242 | 50 | 0.1541 | - | - | - | - |
| 0.5090 | 60 | 0.0615 | - | - | - | - |
| 0.5938 | 70 | 0.0426 | - | - | - | - |
| 0.6787 | 80 | 0.037 | - | - | - | - |
| 0.7635 | 90 | 0.0312 | - | - | - | - |
| 0.8484 | 100 | 0.0246 | - | - | - | - |
| 0.9332 | 110 | 0.029 | - | - | - | - |
| 0.9926 | 117 | - | 0.9855 | 0.9869 | 0.9869 | 0.9855 |
| 1.0180 | 120 | 0.0205 | - | - | - | - |
| 1.1029 | 130 | 0.0212 | - | - | - | - |
| 1.1877 | 140 | 0.0196 | - | - | - | - |
| 1.2725 | 150 | 0.0157 | - | - | - | - |
| 1.3574 | 160 | 0.0174 | - | - | - | - |
| 1.4422 | 170 | 0.0152 | - | - | - | - |
| 1.5270 | 180 | 0.0155 | - | - | - | - |
| 1.6119 | 190 | 0.0133 | - | - | - | - |
| 1.6967 | 200 | 0.0173 | - | - | - | - |
| 1.7815 | 210 | 0.014 | - | - | - | - |
| 1.8664 | 220 | 0.0127 | - | - | - | - |
| 1.9512 | 230 | 0.0116 | - | - | - | - |
| 1.9936 | 235 | - | 0.9883 | 0.9894 | 0.9898 | 0.9893 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Adi-0-0-Gupta/Embedding")
# Run inference
sentences = [
'No recipes found with these red onion and cubed stuffing!',
'What culinary preparations can be made with red onion and cubed stuffing?',
'Can you provide meal suggestions involving vanilla extract and brown lentil/black masoor dal?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_384`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9819 |
| cosine_accuracy@3 | 0.9976 |
| cosine_accuracy@5 | 0.9996 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9819 |
| cosine_precision@3 | 0.3325 |
| cosine_precision@5 | 0.1999 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9819 |
| cosine_recall@3 | 0.9976 |
| cosine_recall@5 | 0.9996 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9924 |
| cosine_mrr@10 | 0.9898 |
| **cosine_map@100** | **0.9898** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9812 |
| cosine_accuracy@3 | 0.9975 |
| cosine_accuracy@5 | 0.9999 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9812 |
| cosine_precision@3 | 0.3325 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9812 |
| cosine_recall@3 | 0.9975 |
| cosine_recall@5 | 0.9999 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9921 |
| cosine_mrr@10 | 0.9894 |
| **cosine_map@100** | **0.9894** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9796 |
| cosine_accuracy@3 | 0.997 |
| cosine_accuracy@5 | 0.9999 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9796 |
| cosine_precision@3 | 0.3323 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9796 |
| cosine_recall@3 | 0.997 |
| cosine_recall@5 | 0.9999 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9913 |
| cosine_mrr@10 | 0.9883 |
| **cosine_map@100** | **0.9883** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9817 |
| cosine_accuracy@3 | 0.9969 |
| cosine_accuracy@5 | 0.9997 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9817 |
| cosine_precision@3 | 0.3323 |
| cosine_precision@5 | 0.1999 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9817 |
| cosine_recall@3 | 0.9969 |
| cosine_recall@5 | 0.9997 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.992 |
| cosine_mrr@10 | 0.9893 |
| **cosine_map@100** | **0.9893** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 60,323 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 21.41 tokens</li><li>max: 503 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 16.8 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| <code>No recipes found with these indian cottage cheese (paneer) and bitter melon!</code> | <code>What are some culinary options with indian cottage cheese (paneer) and bitter melon?</code> |
| <code>No recipes found with these curry leaf and rice cakes!</code> | <code>What recipes can be made using curry leaf and rice cakes?</code> |
| <code>No recipes found with these bacon and rosemary!</code> | <code>What are the different culinary recipes that use bacon and rosemary?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
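The practical payoff of training with `matryoshka_dims: [384, 256, 128, 64]` is that, at inference time, an embedding can be truncated to any of those prefix lengths and renormalized, trading accuracy for memory and speed. A minimal, dependency-free sketch of that truncation step (the 384-dim vector here is a hypothetical placeholder, not a real model output):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` coordinates of a Matryoshka-trained embedding
    and L2-renormalize, so cosine similarity remains well-behaved."""
    prefix = vec[:dim]
    norm = math.sqrt(sum(x * x for x in prefix))
    return [x / norm for x in prefix]

# Hypothetical full 384-dim embedding (deterministic placeholder values).
full = [((i * 37) % 101 - 50) / 50 for i in range(384)]

small = truncate_embedding(full, 64)        # use the nested 64-dim embedding
print(len(small))                           # 64
print(round(sum(x * x for x in small), 6))  # 1.0 — unit norm after truncation
```

The evaluation tables above show why this works here: the 64-dim prefix loses almost nothing (cosine_map@100 ≈ 0.989 at every dimension), so the smallest embedding is a reasonable default when index size matters.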
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
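A quick back-of-the-envelope check (assuming a single GPU, which is not stated in the card) connects these hyperparameters to the training log further down: the effective batch size is 64 × 8 = 512, giving roughly 60,323 / 512 ≈ 117.8 optimizer steps per epoch — consistent with the first evaluation being logged at step 117, epoch ≈ 0.99.

```python
# Illustrative arithmetic only; assumes one GPU (not stated in the card).
per_device_batch = 64
grad_accum = 8
train_samples = 60_323
num_epochs = 10

effective_batch = per_device_batch * grad_accum    # 512
steps_per_epoch = train_samples / effective_batch  # ~117.8
warmup_steps = round(0.1 * steps_per_epoch * num_epochs)  # warmup_ratio * total

print(effective_batch)            # 512
print(round(steps_per_epoch, 1))  # 117.8
print(warmup_steps)               # ~118 steps of warmup
```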
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
|:------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.0848 | 10 | 3.9258 | - | - | - | - |
| 0.1697 | 20 | 3.0513 | - | - | - | - |
| 0.2545 | 30 | 1.6368 | - | - | - | - |
| 0.3393 | 40 | 0.5491 | - | - | - | - |
| 0.4242 | 50 | 0.1541 | - | - | - | - |
| 0.5090 | 60 | 0.0615 | - | - | - | - |
| 0.5938 | 70 | 0.0426 | - | - | - | - |
| 0.6787 | 80 | 0.037 | - | - | - | - |
| 0.7635 | 90 | 0.0312 | - | - | - | - |
| 0.8484 | 100 | 0.0246 | - | - | - | - |
| 0.9332 | 110 | 0.029 | - | - | - | - |
| 0.9926 | 117 | - | 0.9855 | 0.9869 | 0.9869 | 0.9855 |
| 1.0180 | 120 | 0.0205 | - | - | - | - |
| 1.1029 | 130 | 0.0212 | - | - | - | - |
| 1.1877 | 140 | 0.0196 | - | - | - | - |
| 1.2725 | 150 | 0.0157 | - | - | - | - |
| 1.3574 | 160 | 0.0174 | - | - | - | - |
| 1.4422 | 170 | 0.0152 | - | - | - | - |
| 1.5270 | 180 | 0.0155 | - | - | - | - |
| 1.6119 | 190 | 0.0133 | - | - | - | - |
| 1.6967 | 200 | 0.0173 | - | - | - | - |
| 1.7815 | 210 | 0.014 | - | - | - | - |
| 1.8664 | 220 | 0.0127 | - | - | - | - |
| 1.9512 | 230 | 0.0116 | - | - | - | - |
| 1.9936 | 235 | - | 0.9883 | 0.9894 | 0.9898 | 0.9893 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-small-en-v1.5", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:60323", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "No recipes found with these beef stock powder and orange juice!", "sentences": ["Can you provide recipe ideas with beef stock powder and orange juice?", "What are some recipes that utilize jasmine rice and thai red curry paste effectively?", "What recipes incorporate broccoli and bacon into meals?"]}, {"source_sentence": "No recipes found with these nutmeg flower and angel hair rice noodles!", "sentences": ["What dishes can be created with kale and bok choy?", "What recipes incorporate green zucchini and vegan ground beef into meals?", "Can you provide me with meal ideas using nutmeg flower and angel hair rice noodles?"]}, {"source_sentence": "No recipes found with these cinnamon and ground lamb!", "sentences": ["Can you suggest dishes where cinnamon and ground lamb is key?", "What diet tags are relevant to Sneha's Aloo Baingan ?", "What recipes are there with toasted sesame oil and red lentils/masoor?"]}, {"source_sentence": "No recipes found with these red lentils/masoor and bok choy!", "sentences": ["What are the culinary uses of chili sauce and sriracha?", "What are some ways to use canned tomato puree and frozen ube in recipes?", "What are some ideas for dishes with red lentils/masoor and bok choy?"]}, {"source_sentence": "No recipes found with these red onion and cubed stuffing!", 
"sentences": ["Can you provide meal suggestions involving vanilla extract and brown lentil/black masoor dal?", "What recipes incorporate methi (fenugreek) and honey in their ingredients?", "What culinary preparations can be made with red onion and cubed stuffing?"]}], "model-index": [{"name": "SentenceTransformer based on BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 384", "type": "dim_384"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.9819483813217962, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9976130091004028, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9995524392063255, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.9819483813217962, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.33253766970013426, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1999104878412651, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.9819483813217962, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9976130091004028, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9995524392063255, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9923670621371893, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9897597379993318, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9897597379993323, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.9812024466656721, "name": 
"Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.997463822169178, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9998508130687752, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.9812024466656721, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3324879407230593, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19997016261375503, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.9812024466656721, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.997463822169178, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9998508130687752, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9921395779775503, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9894450246158434, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9894450246158436, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.979561390422199, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9970162613755035, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9998508130687752, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.979561390422199, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3323387537918345, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19997016261375505, "name": "Cosine Precision@5"}, {"type": 
"cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.979561390422199, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9970162613755035, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9998508130687752, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9913010184783637, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9883310955293644, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.9883310955293649, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.9816500074593466, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.9968670744442787, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.9997016261375503, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 1.0, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.9816500074593466, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.3322890248147595, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.19994032522751004, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09999999999999999, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.9816500074593466, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.9968670744442787, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.9997016261375503, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 1.0, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.9920343842432707, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.9893333120209138, "name": "Cosine Mrr@10"}, 
{"type": "cosine_map@100", "value": 0.9893333120209146, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,390 |
TransferGraph/rmihaylov_roberta-base-sentiment-bg-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:rmihaylov/roberta-base-sentiment-bg",
"base_model:adapter:rmihaylov/roberta-base-sentiment-bg",
"license:mit",
"model-index",
"region:us"
] | 2024-02-29T12:50:48Z |
2024-02-29T12:50:50+00:00
| 0 | 0 |
---
base_model: rmihaylov/roberta-base-sentiment-bg
datasets:
- tweet_eval
library_name: peft
license: mit
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: rmihaylov_roberta-base-sentiment-bg-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.5213903743315508
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rmihaylov_roberta-base-sentiment-bg-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [rmihaylov/roberta-base-sentiment-bg](https://huggingface.co/rmihaylov/roberta-base-sentiment-bg) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
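For readers unfamiliar with `lr_scheduler_type: linear`, here is a minimal sketch of the schedule it denotes (optional linear warmup, then linear decay to zero), using this card's base learning rate of 4e-4; the total step count below is a hypothetical placeholder, not a value from the card:

```python
def linear_lr(step, total_steps, base_lr=4e-4, warmup_steps=0):
    """Linear schedule: ramp up over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 400  # hypothetical total optimizer steps over the 4 epochs
print(linear_lr(0, total))    # 0.0004 at the start (no warmup configured)
print(linear_lr(200, total))  # 0.0002 halfway through
print(linear_lr(400, total))  # 0.0 at the end
```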
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3396 | None | 0 |
| 0.4733 | 1.1874 | 0 |
| 0.4652 | 1.1649 | 1 |
| 0.5187 | 1.1402 | 2 |
| 0.5214 | 1.1039 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rmihaylov_roberta-base-sentiment-bg-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [rmihaylov/roberta-base-sentiment-bg](https://huggingface.co/rmihaylov/roberta-base-sentiment-bg) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3396 | None | 0 |
| 0.4733 | 1.1874 | 0 |
| 0.4652 | 1.1649 | 1 |
| 0.5187 | 1.1402 | 2 |
| 0.5214 | 1.1039 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "rmihaylov/roberta-base-sentiment-bg", "datasets": ["tweet_eval"], "library_name": "peft", "license": "mit", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "rmihaylov_roberta-base-sentiment-bg-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.5213903743315508, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,391 |
sophiayk20/marian-finetuned-kde4-en-to-fr
|
sophiayk20
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-02-12T08:08:59Z |
2024-02-12T10:17:14+00:00
| 9 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- type: bleu
value: 52.88398487672078
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-fr", "split": "train", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.88398487672078, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 44,392 |
codelorhd/distilbert-base-uncased-finetuned-emotion
|
codelorhd
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotions",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-18T10:34:52Z |
2024-05-18T13:52:50+00:00
| 6 | 0 |
---
datasets:
- emotions
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotions
type: emotions
args: split
metrics:
- type: accuracy
value: 0.9235
name: Accuracy
- type: f1
value: 0.9233442192567661
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2338
- Accuracy: 0.9235
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8681 | 1.0 | 250 | 0.3529 | 0.895 | 0.8895 |
| 0.2675 | 2.0 | 500 | 0.2338 | 0.9235 | 0.9233 |
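The F1 reported above is the support-weighted average over the emotion classes. A minimal, illustrative re-implementation (a sketch of what `sklearn.metrics.f1_score(average='weighted')` computes, on made-up labels rather than this model's outputs):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1, averaged with class frequencies."""
    support = Counter(y_true)
    total = 0.0
    for lab in sorted(set(y_true)):
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[lab] * f1
    return total / len(y_true)

y_true = ["joy", "joy", "anger", "sadness", "joy", "anger"]
y_pred = ["joy", "anger", "anger", "sadness", "joy", "anger"]
print(round(weighted_f1(y_true, y_pred), 4))  # 0.8333
```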
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2338
- Accuracy: 0.9235
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8681 | 1.0 | 250 | 0.3529 | 0.895 | 0.8895 |
| 0.2675 | 2.0 | 500 | 0.2338 | 0.9235 | 0.9233 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
|
{"datasets": ["emotions"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotions", "type": "emotions", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9235, "name": "Accuracy"}, {"type": "f1", "value": 0.9233442192567661, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,393 |
cmagganas/instruct-tuned-llama-7b-hf-alpaca_gpt4_5_000_samples
|
cmagganas
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-08-31T01:55:29Z |
2023-08-31T21:42:39+00:00
| 12 | 0 |
---
language:
- en
license: llama2
---
# Instruct-Tuned LLaMA-7B Model Card
## Model Description
The Instruct-Tuned LLaMA-7B is a language model based on the LLaMA-2 architecture, trained and fine-tuned to generate coherent responses for a wide range of tasks. This model has been optimized to understand and generate text instructions effectively. It has a total of 7 billion parameters and is designed to provide accurate and contextually relevant responses to given prompts.
## Intended Uses
The model is intended to be used for generating responses based on input instructions and contexts. It can be applied in a variety of natural language processing tasks such as text completion, question answering, summarization, and more. Its ability to handle instructions and contexts makes it particularly suitable for tasks involving complex prompts.
## Limitations
- **Bias**: Like any large language model, the Instruct-Tuned LLaMA-7B may inadvertently reflect biases present in the training data. It's important to be cautious when using the model in sensitive applications and to perform bias analysis before deployment.
- **Context Sensitivity**: While the model is capable of understanding instructions and contexts, its responses are based on patterns in the training data and might not always capture nuanced or subtle instructions accurately.
- **Limited Training Data**: The model has been fine-tuned on a specific dataset and may not perform optimally for tasks significantly different from its training data.
## Training Parameters
- **Model Architecture**: LLaMA-2 with 7 billion parameters.
- **Quantization**: The model uses 4-bit quantization techniques.
- **Attention Mechanism**: Flash Attention (based on the Flash Attention paper).
- **Training Frameworks**: HF's Transformers library, Peft library, and TRL library.
- **Optimization Strategy**: Paged AdamW 32-bit optimization.
- **Training Batch Size**: Varies based on the presence of Flash Attention (4 with Flash Attention, 1 without).
- **Learning Rate**: 2e-4 with constant scheduling.
- **Gradient Accumulation**: Every 2 steps.
- **Max Sequence Length**: 2048 tokens.
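Rough arithmetic shows why the 4-bit quantization mentioned above matters for a 7-billion-parameter model: weight storage alone drops from roughly 13 GiB at fp16 to about 3.3 GiB at 4 bits. The figures below are weights-only estimates and deliberately ignore quantization block constants, activations, KV cache, and optimizer state:

```python
# Illustrative weights-only memory estimate for a 7B-parameter model.
params = 7e9

def weight_gib(bits_per_param):
    return params * bits_per_param / 8 / 2**30  # bytes -> GiB

print(round(weight_gib(16), 1))  # ~13.0 GiB at fp16
print(round(weight_gib(4), 1))   # ~3.3 GiB at 4-bit
```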
## Datasets Used
The model was fine-tuned on a subset of the Alpaca-GPT-4 dataset, containing prompts, instructions, and corresponding responses. The dataset was preprocessed to ensure reasonable training times without sacrificing quality.
## Evaluation Results
The Instruct-Tuned LLaMA-7B was evaluated on various prompts from the Alpaca-GPT-4 dataset. During evaluation, it demonstrated significant improvements over the base LLaMA-2 model in terms of generating coherent and contextually relevant responses. Its responses aligned well with the intended meaning of the prompts.
## Model Card Attribution
This model card was authored by Chris Alexiuk and is based on the work presented in the [GitHub Repository](https://github.com/AI-Maker-Space/Fine-tuning-LLM-Resources#instruct-tuning-openlms-openllama-on-the-dolly-15k-dataset-notebooks). The model and its associated artifacts are available on the [Hugging Face Dataset Card](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data).
For more information, tutorials, and community building, check out [AI Makerspace](https://www.linkedin.com/company/ai-maker-space).
---
| null |
Non_BioNLP
|
# Instruct-Tuned LLaMA-7B Model Card
## Model Description
The Instruct-Tuned LLaMA-7B is a language model based on the LLaMA-2 architecture, trained and fine-tuned to generate coherent responses for a wide range of tasks. This model has been optimized to understand and generate text instructions effectively. It has a total of 7 billion parameters and is designed to provide accurate and contextually relevant responses to given prompts.
## Intended Uses
The model is intended to be used for generating responses based on input instructions and contexts. It can be applied in a variety of natural language processing tasks such as text completion, question answering, summarization, and more. Its ability to handle instructions and contexts makes it particularly suitable for tasks involving complex prompts.
## Limitations
- **Bias**: Like any large language model, the Instruct-Tuned LLaMA-7B may inadvertently reflect biases present in the training data. It's important to be cautious when using the model in sensitive applications and to perform bias analysis before deployment.
- **Context Sensitivity**: While the model is capable of understanding instructions and contexts, its responses are based on patterns in the training data and might not always capture nuanced or subtle instructions accurately.
- **Limited Training Data**: The model has been fine-tuned on a specific dataset and may not perform optimally for tasks significantly different from its training data.
## Training Parameters
- **Model Architecture**: LLaMA-2 with 7 billion parameters.
- **Quantization**: The model uses 4-bit quantization techniques.
- **Attention Mechanism**: Flash Attention (based on the Flash Attention paper).
- **Training Frameworks**: HF's Transformers library, Peft library, and TRL library.
- **Optimization Strategy**: Paged AdamW 32-bit optimization.
- **Training Batch Size**: Varies based on the presence of Flash Attention (4 with Flash Attention, 1 without).
- **Learning Rate**: 2e-4 with constant scheduling.
- **Gradient Accumulation**: Every 2 steps.
- **Max Sequence Length**: 2048 tokens.
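The gradient-accumulation setting above can be illustrated with a small, self-contained sketch (a toy scalar model and made-up data, not the actual fine-tuning code): gradients from `accum_steps` micro-batches are averaged before a single optimizer update, emulating a larger effective batch size on limited memory.

```python
accum_steps = 2          # mirrors the "every 2 steps" setting above
lr = 2e-4                # mirrors the stated learning rate
w = 0.0                  # toy scalar parameter for the model y = w * x

micro_batches = [[(1.0, 2.0), (2.0, 4.0)],   # (x, y) pairs; the true w is 2
                 [(3.0, 6.0), (4.0, 8.0)],
                 [(1.0, 2.0), (3.0, 6.0)],
                 [(2.0, 4.0), (4.0, 8.0)]]

grad = 0.0
for step, batch in enumerate(micro_batches, start=1):
    # mean-squared-error gradient for this micro-batch, scaled by 1/accum_steps
    g = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    grad += g / accum_steps              # accumulate instead of stepping
    if step % accum_steps == 0:
        w -= lr * grad                   # one update per accum_steps micro-batches
        grad = 0.0

print(round(w, 6))  # → 0.011982 (w creeps toward 2 at this small learning rate)
```

With `accum_steps = 1` every micro-batch would trigger its own (noisier) update; accumulation trades update frequency for a smoother averaged gradient.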
## Datasets Used
The model was fine-tuned on a subset of the Alpaca-GPT-4 dataset, containing prompts, instructions, and corresponding responses. The dataset was preprocessed to ensure reasonable training times without sacrificing quality.
## Evaluation Results
The Instruct-Tuned LLaMA-7B was evaluated on various prompts from the Alpaca-GPT-4 dataset. During evaluation, it demonstrated significant improvements over the base LLaMA-2 model in terms of generating coherent and contextually relevant responses. Its responses aligned well with the intended meaning of the prompts.
## Model Card Attribution
This model card was authored by Chris Alexiuk and is based on the work presented in the [GitHub Repository](https://github.com/AI-Maker-Space/Fine-tuning-LLM-Resources#instruct-tuning-openlms-openllama-on-the-dolly-15k-dataset-notebooks). The model and its associated artifacts are available on the [Hugging Face Dataset Card](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data).
For more information, sweet tutorials, and community building, check out [AI Makerspace](https://www.linkedin.com/company/ai-maker-space).
---
|
{"language": ["en"], "license": "llama2"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,394 |
ppsingh/iki_target_setfit
|
ppsingh
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:GIZ/TAPP-multilabel-mpnet",
"base_model:finetune:GIZ/TAPP-multilabel-mpnet",
"co2_eq_emissions",
"region:us"
] | 2024-02-11T18:11:00Z |
2024-02-12T15:24:33+00:00
| 9 | 0 |
---
base_model: ppsingh/TAPP-multilabel-mpnet
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: During 2021-2030, Thailand s LEDS will be implemented through the NDC roadmap
and sectoral action plans which provide detailed guidance on measures and realistic
actions to achieve the 1st NDC target by 2030, as well as regular monitoring and
evaluation of the progress and achievement. The monitoring and evaluation of the
mitigation measures relating to the Thailand’s LEDS will be carried out to ensure
its effectiveness and efficiency in achieving its objectives and key performance
indicators. Because it is a long-term plan spanning many years during which many
changes can occur, it is envisaged that it will be subject to a comprehensive
review every five years. This is consistent with the approach under the Paris
Agreement that assigned Parties to submit their NDCs to the UNFCCC every five
year.
- text: The NDC also benefited from the reviews and comments of these implementing
partners as well as local and international experts. Special thanks to The Honourable
Molwyn Joseph, Minister for Health, Wellness and the Environment, for his unwavering
commitment to advance this ambitious climate change agenda, while Antigua and
Barbuda faced an outbreak of the COVID-19 pandemic. Significant contributions
to the process were made by a wide-cross section of stakeholders from the public
and private sector, civil society, trade and industry groups and training institutions,
who attended NDC-related workshops, consultations and participated in key stakeholder
interviews organized to inform the NDC update.
- text: Antigua and Barbuda will mainstream gender in its energy planning through
an Inclusive Renewable Energy Strategy. This strategy will recognize and acknowledge,
among other things, the gender norms, and inequalities prevalent in the energy
sector, women and men’s differentiated access to energy, their different energy
needs and preferences, and different impacts that energy access could have on
their livelihoods. Antigua and Barbuda’s plan for an inclusive renewable energy
transition will ensure continued affordable and reliable access to electricity
and other energy services for all.
- text: 'Thailand’s climate actions are divided into short-term, medium-term and long-term
targets up to 2050. For the mitigation actions, short-term targets include: (i)
develop medium- and long-term GHG emission reduction targets and prepare roadmaps
for the implementation by sector, including the GHG emission reduction target
on a voluntary basis (pre-2020 target), Nationally Appropriate Mitigation Actions
(NAMAs) roadmaps, and measurement, reporting, and verification mechanisms, (ii)
establish domestic incentive mechanisms to encourage low carbon development. The
medium-term targets include: (i) reduce GHG emissions from energy and transport
sectors by 7-20% against BAU level by 2020, subject to the level of international
support, (ii) supply at least 25% of energy consumption from renewable energy
sources by 2021 and (iii) increase the ratio of municipalities with more than
10 m2 of green space per capita.'
- text: In the oil sector, the country has benefited from 372 million dollars for
the reduction of gas flaring at the initiative (GGFR - "Global Gas Flaring Reduction")
of the World Bank after having adopted in November 2015 a national reduction plan
flaring and associated gas upgrading. In the electricity sector, the NDC highlights
the development of hydroelectricity which should make it possible to cover 80%
of production in 2025, the remaining 20% being
covered by gas and other renewable energies.
inference: true
co2_eq_emissions:
emissions: 5.901369050433577
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
ram_total_size: 12.674789428710938
hours_used: 0.185
hardware_used: 1 x Tesla T4
---
# SetFit with ppsingh/TAPP-multilabel-mpnet
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
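As a rough illustration of step 2 (all embeddings and numbers below are made up, and this is not SetFit's actual implementation), a logistic-regression head — the role a `SetFitHead` plays — can be fit on frozen sentence embeddings with plain gradient descent:

```python
import math

# Toy frozen "sentence embeddings" standing in for the fine-tuned body's output.
X = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.9)]
y = [1, 1, 0, 0]  # 1 = TARGET, 0 = NEGATIVE

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):  # plain gradient descent on the logistic loss
    for (x1, x2), label in zip(X, y):
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        g = p - label  # d(loss)/d(logit)
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

def predict(x1, x2):
    return int(w[0] * x1 + w[1] * x2 + b > 0)

print(predict(0.95, 0.15))  # → 1 (TARGET-like embedding)
print(predict(0.15, 0.95))  # → 0 (NEGATIVE-like embedding)
```

Because the body has already pulled same-label embeddings together in step 1, this small linear head is usually enough to separate the classes.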
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------|
| NEGATIVE | <ul><li>'(p 70-1).Antigua and Barbuda’s 2021 update to the first Nationally Determined Contribution the most vulnerable in society have been predominantly focused on adaptation measures like building resilience to flooding and hurricanes. The updated NDC ambition provides an opportunity to focus more intently on enabling access to energy efficiency and renewable energy for the most vulnerable, particularly women who are most affected when electricity is not available since the grid is down after an extreme weather event. Nationally, Antigua and Barbuda intends to utilize the SIRF Fund as a mechanism primarily to catalyse and leverage investment in the transition for NGOs, MSMEs and informal sectors that normally cannot access traditional local commercial financing due to perceived high risks.'</li><li>'The transport system cost will be increased by 16.2% compared to the BAU level. Electric trucks and electric pick-ups will account for the highest share of investment followed by electric buses and trucks. In the manufacturing industries, the energy efficiency improvement in the heating and the motor systems and the deployment of CCS require the highest investment in the non-metallic and the chemical industries in 2050. The manufacturing industries system cost will be increased by 15.3% compared to the BAU level.'</li><li>'Figure 1-9: Total GHG emissions by sector (excluding LULUCF) 2000 and 2016 1.2.2 Greenhouse Gas Emission by Sector • Energy Total direct GHG emissions from the Energy sector in 2016 were estimated to be 253,895.61 eq. The majority of GHG emissions in the Energy sector were generated by fuel combustion, consisting mostly of grid-connected electricity and heat production at around eq (42.84%). GHG emissions from Transport, Manufacturing Industries and Construction, and other sectors were 68,260.17 GgCO2 eq eq (6.10%), respectively. Fugitive Emissions from fuel eq or a little over 4.33% of total GHG emissions from the Energy sector. Details of GHG emissions in the Energy sector by gas type and source in 2016 are presented in Figure 1-10. Source: Thailand Third Biennial Update Report, UNFCCC 2020.'</li></ul> |
| TARGET | <ul><li>'DNPM, NFA,. Cocoa. Board,. Spice Board,. Provincial. gov-ernments. in the. Momase. region. Ongoing -. 2025. 340. European Union. Support committed. Priority Sector: Health. By 2030, 100% of the population benefit from introduced health measures to respond to malaria and other climate-sensitive diseases in PNG. Action or Activity. Indicator. Status. Lead. Implementing. Agencies. Supporting. Agencies. Time Frame. Budget (USD). Funding Source. (Existing/Potential). Other Support. Improve vector control. measures, with a priority. of all households having. access to a long-lasting. insecticidal net (LLIN).'</li><li>'Conditionality: With national effort it is intended to increase the attention to vulnerable groups in case of disasters and/or emergencies up to 50% of the target and 100% of the target with international cooperation. Description: In this goal, it is projected to increase coverage from 33% to 50% (211,000 families) of agricultural insurance in attention to the number of families, whose crops were affected by various adverse weather events (flood, drought, frost, hailstorm, among others), in addition to the implementation of comprehensive actions for risk management and adaptation to Climate Change.'</li><li>'By 2030, upgrade watershed health and vitality in at least 20 districts to a higher condition category. By 2030, create an inventory of wetlands in Nepal and sustainably manage vulnerable wetlands. By 2025, enhance the sink capacity of the landuse sector by instituting the Forest Development Fund (FDF) for compensation of plantations and forest restoration. Increase growing stock including Mean Annual Increment in Tarai, Hills and Mountains. Afforest/reforest viable public and private lands, including agroforestry.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ppsingh/iki_target_setfit")
# Run inference
preds = model("In the oil sector, the country has benefited from 372 million dollars for the reduction of gas flaring at the initiative (GGFR - \"Global Gas Flaring Reduction\") of the World Bank after having adopted in November 2015 a national reduction plan flaring and associated gas upgrading. In the electricity sector, the NDC highlights the development of hydroelectricity which should make it possible to cover 80% of production in 2025, the remaining 20% being covered by gas and other renewable energies.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 58 | 116.6632 | 508 |
| Label | Training Sample Count |
|:---------|:----------------------|
| NEGATIVE | 51 |
| TARGET | 44 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (1, 0)
- max_steps: -1
- sampling_strategy: undersampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
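The `CosineSimilarityLoss` named above scores sampled sentence pairs: pairs with the same label get a target similarity of 1, pairs with different labels get 0, and the loss is the squared error between the embeddings' cosine similarity and that target. A minimal sketch with made-up 2-D embeddings (not the library's internals):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cosine_similarity_loss(u, v, target):
    # Squared error between the pair's cosine similarity and its 0/1 target
    return (cosine(u, v) - target) ** 2

same_pair = ([1.0, 0.0], [0.6, 0.8])   # two TARGET-like embeddings, target 1
diff_pair = ([1.0, 0.0], [0.0, 1.0])   # TARGET vs NEGATIVE, target 0

print(round(cosine_similarity_loss(*same_pair, 1.0), 4))  # → 0.16, pulls the pair together
print(round(cosine_similarity_loss(*diff_pair, 0.0), 4))  # → 0.0, already orthogonal
```

Minimizing this loss over many sampled pairs is what fine-tunes the sentence-transformer body before the classification head is trained.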
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0018 | 1 | 0.3343 | - |
| 0.1783 | 100 | 0.0026 | 0.1965 |
| 0.3565 | 200 | 0.0001 | 0.1995 |
| 0.5348 | 300 | 0.0001 | 0.2105 |
| 0.7130 | 400 | 0.0001 | 0.2153 |
| 0.8913 | 500 | 0.0 | 0.1927 |
### Training Results Classifier
- Classes Representation in Test Data: Target: 9, Negative: 8
- F1-score: 87.8%
- Accuracy: 88.2%
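The reported accuracy corresponds to 15 of the 17 test sentences classified correctly. One confusion matrix consistent with that accuracy is sketched below; the counts are illustrative only (the card does not report the actual matrix, and the resulting macro-F1 of ≈ 0.879 only approximates the reported 87.8%):

```python
def f1(tp, fp, fn):
    # Standard F1: harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts on the 17 test sentences (9 TARGET, 8 NEGATIVE)
target_f1 = f1(tp=9, fp=2, fn=0)     # all targets found, 2 negatives misflagged
negative_f1 = f1(tp=6, fp=0, fn=2)
accuracy = (9 + 6) / 17
macro_f1 = (target_f1 + negative_f1) / 2

print(round(accuracy, 3))  # → 0.882, matching the reported accuracy
print(round(macro_f1, 3))  # → 0.879, close to the reported F1-score
```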
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.185 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.3.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with ppsingh/TAPP-multilabel-mpnet
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [ppsingh/TAPP-multilabel-mpnet](https://huggingface.co/ppsingh/TAPP-multilabel-mpnet)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------|
| NEGATIVE | <ul><li>'(p 70-1).Antigua and Barbuda’s 2021 update to the first Nationally Determined Contribution the most vulnerable in society have been predominantly focused on adaptation measures like building resilience to flooding and hurricanes. The updated NDC ambition provides an opportunity to focus more intently on enabling access to energy efficiency and renewable energy for the most vulnerable, particularly women who are most affected when electricity is not available since the grid is down after an extreme weather event. Nationally, Antigua and Barbuda intends to utilize the SIRF Fund as a mechanism primarily to catalyse and leverage investment in the transition for NGOs, MSMEs and informal sectors that normally cannot access traditional local commercial financing due to perceived high risks.'</li><li>'The transport system cost will be increased by 16.2% compared to the BAU level. Electric trucks and electric pick-ups will account for the highest share of investment followed by electric buses and trucks. In the manufacturing industries, the energy efficiency improvement in the heating and the motor systems and the deployment of CCS require the highest investment in the non-metallic and the chemical industries in 2050. The manufacturing industries system cost will be increased by 15.3% compared to the BAU level.'</li><li>'Figure 1-9: Total GHG emissions by sector (excluding LULUCF) 2000 and 2016 1.2.2 Greenhouse Gas Emission by Sector • Energy Total direct GHG emissions from the Energy sector in 2016 were estimated to be 253,895.61 eq. The majority of GHG emissions in the Energy sector were generated by fuel combustion, consisting mostly of grid-connected electricity and heat production at around eq (42.84%). GHG emissions from Transport, Manufacturing Industries and Construction, and other sectors were 68,260.17 GgCO2 eq eq (6.10%), respectively. Fugitive Emissions from fuel eq or a little over 4.33% of total GHG emissions from the Energy sector. Details of GHG emissions in the Energy sector by gas type and source in 2016 are presented in Figure 1-10. Source: Thailand Third Biennial Update Report, UNFCCC 2020.'</li></ul> |
| TARGET | <ul><li>'DNPM, NFA,. Cocoa. Board,. Spice Board,. Provincial. gov-ernments. in the. Momase. region. Ongoing -. 2025. 340. European Union. Support committed. Priority Sector: Health. By 2030, 100% of the population benefit from introduced health measures to respond to malaria and other climate-sensitive diseases in PNG. Action or Activity. Indicator. Status. Lead. Implementing. Agencies. Supporting. Agencies. Time Frame. Budget (USD). Funding Source. (Existing/Potential). Other Support. Improve vector control. measures, with a priority. of all households having. access to a long-lasting. insecticidal net (LLIN).'</li><li>'Conditionality: With national effort it is intended to increase the attention to vulnerable groups in case of disasters and/or emergencies up to 50% of the target and 100% of the target with international cooperation. Description: In this goal, it is projected to increase coverage from 33% to 50% (211,000 families) of agricultural insurance in attention to the number of families, whose crops were affected by various adverse weather events (flood, drought, frost, hailstorm, among others), in addition to the implementation of comprehensive actions for risk management and adaptation to Climate Change.'</li><li>'By 2030, upgrade watershed health and vitality in at least 20 districts to a higher condition category. By 2030, create an inventory of wetlands in Nepal and sustainably manage vulnerable wetlands. By 2025, enhance the sink capacity of the landuse sector by instituting the Forest Development Fund (FDF) for compensation of plantations and forest restoration. Increase growing stock including Mean Annual Increment in Tarai, Hills and Mountains. Afforest/reforest viable public and private lands, including agroforestry.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ppsingh/iki_target_setfit")
# Run inference
preds = model("In the oil sector, the country has benefited from 372 million dollars for the reduction of gas flaring at the initiative (GGFR - \"Global Gas Flaring Reduction\") of the World Bank after having adopted in November 2015 a national reduction plan flaring and associated gas upgrading. In the electricity sector, the NDC highlights the development of hydroelectricity which should make it possible to cover 80% of production in 2025, the remaining 20% being covered by gas and other renewable energies.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 58 | 116.6632 | 508 |
| Label | Training Sample Count |
|:---------|:----------------------|
| NEGATIVE | 51 |
| TARGET | 44 |
### Training Hyperparameters
- batch_size: (8, 2)
- num_epochs: (1, 0)
- max_steps: -1
- sampling_strategy: undersampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0018 | 1 | 0.3343 | - |
| 0.1783 | 100 | 0.0026 | 0.1965 |
| 0.3565 | 200 | 0.0001 | 0.1995 |
| 0.5348 | 300 | 0.0001 | 0.2105 |
| 0.7130 | 400 | 0.0001 | 0.2153 |
| 0.8913 | 500 | 0.0 | 0.1927 |
### Training Results Classifier
- Classes Representation in Test Data: Target: 9, Negative: 8
- F1-score: 87.8%
- Accuracy: 88.2%
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.185 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.3.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "ppsingh/TAPP-multilabel-mpnet", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "During 2021-2030, Thailand s LEDS will be implemented through the NDC roadmap and sectoral action plans which provide detailed guidance on measures and realistic actions to achieve the 1st NDC target by 2030, as well as regular monitoring and evaluation of the progress and achievement. The monitoring and evaluation of the mitigation measures relating to the Thailand’s LEDS will be carried out to ensure its effectiveness and efficiency in achieving its objectives and key performance indicators. Because it is a long-term plan spanning many years during which many changes can occur, it is envisaged that it will be subject to a comprehensive review every five years. This is consistent with the approach under the Paris Agreement that assigned Parties to submit their NDCs to the UNFCCC every five year."}, {"text": "The NDC also benefited from the reviews and comments of these implementing partners as well as local and international experts. Special thanks to The Honourable Molwyn Joseph, Minister for Health, Wellness and the Environment, for his unwavering commitment to advance this ambitious climate change agenda, while Antigua and Barbuda faced an outbreak of the COVID-19 pandemic. Significant contributions to the process were made by a wide-cross section of stakeholders from the public and private sector, civil society, trade and industry groups and training institutions, who attended NDC-related workshops, consultations and participated in key stakeholder interviews organized to inform the NDC update."}, {"text": "Antigua and Barbuda will mainstream gender in its energy planning through an Inclusive Renewable Energy Strategy. This strategy will recognize and acknowledge, among other things, the gender norms, and inequalities prevalent in the energy sector, women and men’s differentiated access to energy, their different energy needs and preferences, and different impacts that energy access could have on their livelihoods. Antigua and Barbuda’s plan for an inclusive renewable energy transition will ensure continued affordable and reliable access to electricity and other energy services for all."}, {"text": "Thailand’s climate actions are divided into short-term, medium-term and long-term targets up to 2050. For the mitigation actions, short-term targets include: (i) develop medium- and long-term GHG emission reduction targets and prepare roadmaps for the implementation by sector, including the GHG emission reduction target on a voluntary basis (pre-2020 target), Nationally Appropriate Mitigation Actions (NAMAs) roadmaps, and measurement, reporting, and verification mechanisms, (ii) establish domestic incentive mechanisms to encourage low carbon development. The medium-term targets include: (i) reduce GHG emissions from energy and transport sectors by 7-20% against BAU level by 2020, subject to the level of international support, (ii) supply at least 25% of energy consumption from renewable energy sources by 2021 and (iii) increase the ratio of municipalities with more than 10 m2 of green space per capita."}, {"text": "In the oil sector, the country has benefited from 372 million dollars for the reduction of gas flaring at the initiative (GGFR - \"Global Gas Flaring Reduction\") of the World Bank after having adopted in November 2015 a national reduction plan flaring and associated gas upgrading. In the electricity sector, the NDC highlights the development of hydroelectricity which should make it possible to cover 80% of production in 2025, the remaining 20% being covered by gas and other renewable energies."}], "inference": true, "co2_eq_emissions": {"emissions": 5.901369050433577, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "Intel(R) Xeon(R) CPU @ 2.00GHz", "ram_total_size": 12.674789428710938, "hours_used": 0.185, "hardware_used": "1 x Tesla T4"}}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,395 |
platzi/transfer-course-distilroberta-base-mrpc-glue-nestor-mamani
|
platzi
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-12T12:55:30Z |
2023-10-12T13:00:24+00:00
| 95 | 0 |
---
base_model: distilroberta-base
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: transfer-course-distilroberta-base-mrpc-glue-nestor-mamani
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8357843137254902
name: Accuracy
- type: f1
value: 0.8858603066439524
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transfer-course-distilroberta-base-mrpc-glue-nestor-mamani
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Accuracy: 0.8358
- F1: 0.8859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.315 | 2.17 | 500 | 0.4601 | 0.8358 | 0.8859 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
|
{"base_model": "distilroberta-base", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "transfer-course-distilroberta-base-mrpc-glue-nestor-mamani", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8357843137254902, "name": "Accuracy"}, {"type": "f1", "value": 0.8858603066439524, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,396 |
LoneStriker/airoboros-m-7b-3.0-8.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:jondurbin/airoboros-3.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-06T18:36:18Z |
2023-10-06T18:37:46+00:00
| 3 | 0 |
---
datasets:
- jondurbin/airoboros-3.0
license: apache-2.0
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use the llama-2 chat format, rather than vicuna-style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
    do_strip = False
    for user_input, response in chat_history:
        user_input = user_input.strip() if do_strip else user_input
        do_strip = True
        texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
    message = message.strip() if do_strip else message
    texts.append(f'{message} [/INST] ')
    return ''.join(texts)
def get_prompt(item):
    """
    `item` is expected to be in chatml format, e.g.
    {
      "id": 1,
      "conversations": [
        {
          "role": "system",
          "content": "You are a helpful, unbiased, uncensored assistant."
        },
        {
          "role": "user",
          "content": "Hi!"
        },
        {
          "role": "assistant",
          "content": "Hello, how are you today?"
        },
        {
          "role": "user",
          "content": "Just peachy, and you?"
        }
      ]
    }
    """
    system_prompt = item['conversations'][0]['content']
    offset = 1
    if item['conversations'][0]['role'] != 'system':
        system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
        offset = 0
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
        for idx in range(offset, len(item['conversations']) - 1, 2)
    ]
    message = item['conversations'][-1]['content']
    return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
  "Multiply",
  "Pi",
  [
    "Power",
    17.2456,
    2
  ]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
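As a rough illustration (not part of airoboros itself), here's a minimal sketch of that flow in Python. It extracts the block and evaluates only the handful of MathJSON operators that appear in the example above; a real integration would use compute-engine or `mathjson.py` instead:

```python
import json
import math
import re

def extract_mathjson(output: str):
    """Pull the JSON payload out of the <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", output, re.S)
    if not match:
        raise ValueError("no <mathjson> block found")
    return json.loads(match.group(1))

def evaluate(expr):
    """Recursively evaluate a tiny subset of MathJSON operators."""
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate(arg) for arg in args]
    if op == "Multiply":
        return math.prod(values)
    if op == "Add":
        return sum(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"unsupported operator: {op}")
```

For the circle example, `evaluate(extract_mathjson(model_output))` yields π·17.2456².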
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
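If you're assembling these prompts programmatically, a small helper like the following (a sketch, not from the airoboros codebase) keeps the delimiters consistent:

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble the BEGININPUT/BEGINCONTEXT prompt format described above.

    `blocks` is a list of (metadata_dict, text) pairs; `instruction` is the
    question to ask over all blocks.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)
```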
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```
Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
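To actually act on that response, you'd parse the JSON and dispatch to your own implementation. Here's a minimal, hypothetical sketch — `file_analytics` and its sample text are stand-ins for illustration, not real airoboros code:

```python
import json

def file_analytics(action, filters):
    """Hypothetical handler matching the function description above."""
    if action == "count_occurrences":
        # stand-in for reading the user's actual text file
        text = "Python is great. I use Python daily."
        return text.count(filters["keyword"])
    raise ValueError(f"unknown action: {action}")

FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output: str):
    """Parse the model's JSON response and call the selected function."""
    call = json.loads(model_output)
    func = FUNCTIONS[call["function"]]
    return func(**call["params"])
```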
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| null |
Non_BioNLP
|
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
The airobors 3.0 models have been updated to use llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and let's the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a help, unbiased, uncensored assistant.
<</SYS>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a help, unbiased, uncensored assistant.
<</SYS>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
do_strip = False
for user_input, response in chat_history:
user_input = user_input.strip() if do_strip else user_input
do_strip = True
texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
message = message.strip() if do_strip else message
texts.append(f'{message} [/INST] ')
return ''.join(texts)
def get_prompt(item):
"""
`item` is expected to be in chatml format, e.g.
{
"id": 1,
"conversations": [
{
"role": "system",
"content": "You are a helpful, unbiased, uncensored assistant."
},
{
"role": "user",
"content": "Hi!"
},
{
"role": "assistant",
"content": "Hello, how are you today?"
},
{
"role": "user",
"content": "Just peachy, and you?"
}
]
}
"""
system_prompt = share_gpt_item['conversations'][0]['value']
offset = 1
if share_gpt_item['conversations'][0]['role'] != 'system':
system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
offset = 0
chat_history = [
(share_gpt_item['conversations'][idx]['content'], share_gpt_item['conversations'][idx + 1]['content'])
for idx in range(offset, len(item['conversations']) - 1, 2)
]
message = item['conversations'][-1]['content']
return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
  "Multiply",
  "Pi",
  [
    "Power",
    17.2456,
    2
  ]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
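That validation step can be sketched in a few lines of Python. The toy evaluator below is my own and covers only the handful of operators in this example; for real use, hand the parsed JSON to compute-engine or mathjson.py instead:

```python
import json
import math
import re

def extract_mathjson(response):
    """Pull the JSON payload out of the <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.S)
    if not match:
        raise ValueError("no <mathjson> block found")
    return json.loads(match.group(1))

# Toy evaluator covering only the operators used in the example above.
OPS = {
    "Add": lambda *args: sum(args),
    "Multiply": lambda *args: math.prod(args),
    "Power": lambda base, exp: base ** exp,
    "Divide": lambda a, b: a / b,
}
CONSTANTS = {"Pi": math.pi}

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return CONSTANTS[expr]
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

response = '<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>'
area = evaluate(extract_mathjson(response))  # ~934.3 (square centimeters)
```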
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so that if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
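Assembling this block format by hand gets tedious; a small helper might look like the following sketch (the function name and structure are my own, not part of airoboros, but the block names are taken verbatim from the format above):

```python
def build_closed_context_prompt(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs; instruction: the question(s) to ask."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green.")],
    "What color are blueberries? Source? Don't make up answers if you don't know.",
)
```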
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```
Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
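To act on such a response you still need a dispatcher on your side; here is a minimal, hypothetical sketch (the `file_analytics` implementation is a stand-in I wrote for illustration, not a real tool):

```python
import json

def file_analytics(params, text):
    """Hypothetical stand-in for the tool described in the prompt above."""
    if params["action"] == "count_occurrences":
        return text.count(params["filters"]["keyword"])
    raise ValueError(f"unsupported action: {params['action']}")

REGISTRY = {"file_analytics": file_analytics}

def dispatch(model_output, text):
    """Parse the model's JSON response and call the selected function."""
    call = json.loads(model_output)
    return REGISTRY[call["function"]](call["params"], text)

model_output = """{
  "function": "file_analytics",
  "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}
}"""
result = dispatch(model_output, "Python is fun. I like Python.")
```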
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
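The equation in Solution 1 is easy to verify programmatically; a quick sanity check (my own, not part of the model output):

```python
import math

# n(n-1)/2 = 45  =>  n^2 - n - 90 = 0  =>  n = (1 + sqrt(1 + 8 * 45)) / 2
games = 45
n = (1 + math.sqrt(1 + 8 * games)) / 2
assert n == 10 and n * (n - 1) / 2 == games
```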
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
{"datasets": ["jondurbin/airoboros-3.0"], "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,397 |
spuun/BERATBOS
|
spuun
|
text-classification
|
[
"transformers",
"text-classification",
"id",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | 2023-01-19T17:21:49Z |
2023-01-19T17:33:03+00:00
| 0 | 0 |
---
language:
- id
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-classification
---
# BERATBOS v0.0.1-prealpha
> BERT-based Automated Text Classification Based on POS
A model I made for my final year assignment. I trained it on GPU with an ad-hoc script,
so the usage is somewhat non-standard, as far as transformers-based models go, at least.
The model detects AI-generated text by comparing coherency through POS tag differences.
It was only trained for 5 epochs on a dataset of 1000 human sentences and 1000 BLOOM-generated sentences.

I have plans to develop this into an actual production model,
but that will only happen once I can properly assess whether it's a worthwhile endeavour.
| null |
Non_BioNLP
|
# BERATBOS v0.0.1-prealpha
> BERT-based Automated Text Classification Based on POS
A model I made for my final year assignment. I trained it on GPU with an ad-hoc script,
so the usage is somewhat non-standard, as far as transformers-based models go, at least.
The model detects AI-generated text by comparing coherency through POS tag differences.
It was only trained for 5 epochs on a dataset of 1000 human sentences and 1000 BLOOM-generated sentences.

I have plans to develop this into an actual production model,
but that will only happen once I can properly assess whether it's a worthwhile endeavour.
|
{"language": ["id"], "library_name": "transformers", "license": "cc-by-sa-4.0", "pipeline_tag": "text-classification"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,398 |
i-be-snek/distilbert-base-uncased-finetuned-ner-exp_A
|
i-be-snek
|
token-classification
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"en",
"dataset:Babelscape/multinerd",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-01T14:52:45Z |
2023-12-05T08:40:49+00:00
| 88 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- Babelscape/multinerd
language:
- en
license: apache-2.0
metrics:
- seqeval
pipeline_tag: token-classification
tags:
- generated_from_keras_callback
widget:
- text: After months of meticulous review and analysis, I am proud to present a study
that explores the deep connections between Epstein-Barr virus (EBV), Long COVID
and Myalgic Encephalomyelitis.
example_title: Example 1
- text: The boy is, of course, Cupid. The image of a cupid riding a lion was a common
theme in classical and Renaissance art, representing the Virgilian maxim Amor
vincit omnia – love conquers all.
example_title: Example 2
- text: Billionaire Charlie Munger, Warren Buffet's right hand man, dies at 99.
example_title: Example 3
model-index:
- name: i-be-snek/distilbert-base-uncased-finetuned-ner-exp_A
results:
- task:
type: token-classification
name: ner
dataset:
name: Babelscape/multinerd
type: Babelscape/multinerd
split: test
metrics:
- type: seqeval
value: 0.9053582270795385
name: precision
- type: seqeval
value: 0.9303178007408852
name: recall
- type: seqeval
value: 0.9176683270188665
name: f1
- type: seqeval
value: 0.9863554498955407
name: accuracy
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# i-be-snek/distilbert-base-uncased-finetuned-ner-exp_A
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the English subset of all named entities in [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
It achieves the following results on the validation set:
- Train Loss: 0.0163
- Validation Loss: 0.1024
- Train Precision: 0.8763
- Train Recall: 0.8862
- Train F1: 0.8812
- Train Accuracy: 0.9750
- Epoch: 2
## Model description
[distilbert-base-uncased-finetuned-ner-exp_A](https://huggingface.co/i-be-snek/distilbert-base-uncased-finetuned-ner-exp_B) is a Named Entity Recognition model finetuned on [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
This model is uncased, so it makes no distinction between "sarah" and "Sarah".
## Training and evaluation data
This model has been evaluated on the English subset of the test set of [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd)
### Evaluation results
| metric | value |
|:----------|---------:|
| precision | 0.905358 |
| recall | 0.930318 |
| f1 | 0.917668 |
| accuracy | 0.986355 |
|metric/tag | ANIM | BIO | CEL | DIS | EVE | FOOD | INST | LOC | MEDIA | MYTH | ORG | PER | PLANT | TIME | VEHI |
|:----------|------------:|----------:|----------:|------------:|-----------:|------------:|----------:|-------------:|-----------:|----------:|------------:|-------------:|------------:|-----------:|----------:|
| precision | 0.667262 | 0.666667 | 0.508197 | 0.662324 | 0.896277 | 0.637809 | 0.642857 | 0.964137 | 0.931915 | 0.638889 | 0.941176 | 0.99033 | 0.558043 | 0.756579 | 0.735294 |
| recall | 0.698878 | 0.75 | 0.756098 | 0.803689 | 0.957386 | 0.637809 | 0.75 | 0.963656 | 0.956332 | 0.71875 | 0.962224 | 0.992023 | 0.752796 | 0.795848 | 0.78125 |
| f1 | 0.682704 | 0.705882 | 0.607843 | 0.72619 | 0.925824 | 0.637809 | 0.692308 | 0.963897 | 0.943966 | 0.676471 | 0.951584 | 0.991176 | 0.640952 | 0.775717 | 0.757576 |
| number | 3208 | 16 | 82 | 1518 | 704 | 1132 | 24 | 24048 | 916 | 64 | 6618 | 10530 | 1788 | 578 | 64 |
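The seqeval numbers above are entity-level, not token-level: a prediction only counts as correct if the whole span and its type match. A simplified pure-Python sketch of that computation from BIO tags (seqeval itself handles more edge cases and tag schemes):

```python
def extract_spans(tags):
    """Collect (entity_type, start, end) spans from a BIO tag sequence."""
    spans, ent, start = set(), None, None
    for i, tag in enumerate(list(tags) + ["O"]):  # sentinel flushes the last span
        boundary = tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and ent != tag[2:])
        if boundary:
            if ent is not None:
                spans.add((ent, start, i))
            ent, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
    return spans

def span_f1(true_tags, pred_tags):
    """Entity-level precision/recall/F1, the same granularity seqeval reports."""
    gold, pred = extract_spans(true_tags), extract_spans(pred_tags)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```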
## Training procedure
All scripts for training can be found in this [GitHub repository](https://github.com/i-be-snek/rise-assignment-ner-finetune).
Training used early stopping, monitoring `val_loss`.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer:
```python
{
"name": "AdamWeightDecay",
"learning_rate": 2e-05,
"decay": 0.0,
"beta_1": 0.9,
"beta_2": 0.999,
"epsilon": 1e-07,
"amsgrad": False,
"weight_decay_rate": 0.0,
}
```
- training_precision: `float32`
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0709 | 0.0710 | 0.8563 | 0.8875 | 0.8716 | 0.9735 | 0 |
| 0.0295 | 0.0851 | 0.8743 | 0.8835 | 0.8789 | 0.9748 | 1 |
| 0.0163 | 0.1024 | 0.8763 | 0.8862 | 0.8812 | 0.9750 | 2 |
Epoch 0
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:------:|
| ANIM | 0.699150 | 0.620124 | 0.657270 |
| BIO | 0.480000 | 0.782609 | 0.595041 |
| CEL | 0.815385 | 0.876033 | 0.844622 |
| DIS | 0.628939 | 0.806709 | 0.706818 |
| EVE | 0.898876 | 0.924855 | 0.911681 |
| FOOD | 0.624774 | 0.602266 | 0.613314 |
| INST | 0.467391 | 0.741379 | 0.573333 |
| LOC | 0.967354 | 0.969634 | 0.968493 |
| MEDIA | 0.911227 | 0.939856 | 0.925320 |
| MYTH | 0.941860 | 0.771429 | 0.848168 |
| ORG | 0.924471 | 0.937629 | 0.931003 |
| PER | 0.988699 | 0.990918 | 0.989807 |
| PLANT | 0.622521 | 0.781333 | 0.692944 |
| TIME | 0.743902 | 0.738499 | 0.741191 |
| VEHI | 0.785714 | 0.791367 | 0.788530 |
Epoch 1
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:--------:|
| ANIM | 0.701040 | 0.747340 | 0.723450 |
| BIO | 0.422222 | 0.826087 | 0.558824 |
| CEL | 0.729167 | 0.867769 | 0.792453 |
| DIS | 0.731099 | 0.749794 | 0.740328 |
| EVE | 0.864865 | 0.924855 | 0.893855 |
| FOOD | 0.652865 | 0.572632 | 0.610122 |
| INST | 0.871795 | 0.586207 | 0.701031 |
| LOC | 0.968255 | 0.966143 | 0.967198 |
| MEDIA | 0.946346 | 0.918312 | 0.932118 |
| MYTH | 0.914894 | 0.819048 | 0.864322 |
| ORG | 0.906064 | 0.943582 | 0.924442 |
| PER | 0.990389 | 0.988367 | 0.989377 |
| PLANT | 0.625889 | 0.743556 | 0.679667 |
| TIME | 0.755981 | 0.765133 | 0.760529 |
| VEHI | 0.737500 | 0.848921 | 0.789298 |
Epoch 2
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:--------:|
| ANIM | 0.730443 | 0.687057 | 0.708086 |
| BIO | 0.330882 | 0.978261 | 0.494505 |
| CEL | 0.798561 | 0.917355 | 0.853846 |
| DIS | 0.738108 | 0.750894 | 0.744446 |
| EVE | 0.904899 | 0.907514 | 0.906205 |
| FOOD | 0.628664 | 0.623184 | 0.625912 |
| INST | 0.533333 | 0.551724 | 0.542373 |
| LOC | 0.967915 | 0.973997 | 0.970946 |
| MEDIA | 0.949627 | 0.913824 | 0.931382 |
| MYTH | 0.910000 | 0.866667 | 0.887805 |
| ORG | 0.924920 | 0.934136 | 0.929505 |
| PER | 0.989506 | 0.991020 | 0.990263 |
| PLANT | 0.637648 | 0.742222 | 0.685972 |
| TIME | 0.766355 | 0.794189 | 0.780024 |
| VEHI | 0.818182 | 0.647482 | 0.722892 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# i-be-snek/distilbert-base-uncased-finetuned-ner-exp_A
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the English subset of all named entities in [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
It achieves the following results on the validation set:
- Train Loss: 0.0163
- Validation Loss: 0.1024
- Train Precision: 0.8763
- Train Recall: 0.8862
- Train F1: 0.8812
- Train Accuracy: 0.9750
- Epoch: 2
## Model description
[distilbert-base-uncased-finetuned-ner-exp_A](https://huggingface.co/i-be-snek/distilbert-base-uncased-finetuned-ner-exp_B) is a Named Entity Recognition model finetuned on [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
This model is uncased, so it makes no distinction between "sarah" and "Sarah".
## Training and evaluation data
This model has been evaluated on the English subset of the test set of [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd)
### Evaluation results
| metric | value |
|:----------|---------:|
| precision | 0.905358 |
| recall | 0.930318 |
| f1 | 0.917668 |
| accuracy | 0.986355 |
|metric/tag | ANIM | BIO | CEL | DIS | EVE | FOOD | INST | LOC | MEDIA | MYTH | ORG | PER | PLANT | TIME | VEHI |
|:----------|------------:|----------:|----------:|------------:|-----------:|------------:|----------:|-------------:|-----------:|----------:|------------:|-------------:|------------:|-----------:|----------:|
| precision | 0.667262 | 0.666667 | 0.508197 | 0.662324 | 0.896277 | 0.637809 | 0.642857 | 0.964137 | 0.931915 | 0.638889 | 0.941176 | 0.99033 | 0.558043 | 0.756579 | 0.735294 |
| recall | 0.698878 | 0.75 | 0.756098 | 0.803689 | 0.957386 | 0.637809 | 0.75 | 0.963656 | 0.956332 | 0.71875 | 0.962224 | 0.992023 | 0.752796 | 0.795848 | 0.78125 |
| f1 | 0.682704 | 0.705882 | 0.607843 | 0.72619 | 0.925824 | 0.637809 | 0.692308 | 0.963897 | 0.943966 | 0.676471 | 0.951584 | 0.991176 | 0.640952 | 0.775717 | 0.757576 |
| number | 3208 | 16 | 82 | 1518 | 704 | 1132 | 24 | 24048 | 916 | 64 | 6618 | 10530 | 1788 | 578 | 64 |
## Training procedure
All scripts for training can be found in this [GitHub repository](https://github.com/i-be-snek/rise-assignment-ner-finetune).
Training used early stopping, monitoring `val_loss`.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer:
```python
{
"name": "AdamWeightDecay",
"learning_rate": 2e-05,
"decay": 0.0,
"beta_1": 0.9,
"beta_2": 0.999,
"epsilon": 1e-07,
"amsgrad": False,
"weight_decay_rate": 0.0,
}
```
- training_precision: `float32`
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0709 | 0.0710 | 0.8563 | 0.8875 | 0.8716 | 0.9735 | 0 |
| 0.0295 | 0.0851 | 0.8743 | 0.8835 | 0.8789 | 0.9748 | 1 |
| 0.0163 | 0.1024 | 0.8763 | 0.8862 | 0.8812 | 0.9750 | 2 |
Epoch 0
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:------:|
| ANIM | 0.699150 | 0.620124 | 0.657270 |
| BIO | 0.480000 | 0.782609 | 0.595041 |
| CEL | 0.815385 | 0.876033 | 0.844622 |
| DIS | 0.628939 | 0.806709 | 0.706818 |
| EVE | 0.898876 | 0.924855 | 0.911681 |
| FOOD | 0.624774 | 0.602266 | 0.613314 |
| INST | 0.467391 | 0.741379 | 0.573333 |
| LOC | 0.967354 | 0.969634 | 0.968493 |
| MEDIA | 0.911227 | 0.939856 | 0.925320 |
| MYTH | 0.941860 | 0.771429 | 0.848168 |
| ORG | 0.924471 | 0.937629 | 0.931003 |
| PER | 0.988699 | 0.990918 | 0.989807 |
| PLANT | 0.622521 | 0.781333 | 0.692944 |
| TIME | 0.743902 | 0.738499 | 0.741191 |
| VEHI | 0.785714 | 0.791367 | 0.788530 |
Epoch 1
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:--------:|
| ANIM | 0.701040 | 0.747340 | 0.723450 |
| BIO | 0.422222 | 0.826087 | 0.558824 |
| CEL | 0.729167 | 0.867769 | 0.792453 |
| DIS | 0.731099 | 0.749794 | 0.740328 |
| EVE | 0.864865 | 0.924855 | 0.893855 |
| FOOD | 0.652865 | 0.572632 | 0.610122 |
| INST | 0.871795 | 0.586207 | 0.701031 |
| LOC | 0.968255 | 0.966143 | 0.967198 |
| MEDIA | 0.946346 | 0.918312 | 0.932118 |
| MYTH | 0.914894 | 0.819048 | 0.864322 |
| ORG | 0.906064 | 0.943582 | 0.924442 |
| PER | 0.990389 | 0.988367 | 0.989377 |
| PLANT | 0.625889 | 0.743556 | 0.679667 |
| TIME | 0.755981 | 0.765133 | 0.760529 |
| VEHI | 0.737500 | 0.848921 | 0.789298 |
Epoch 2
| Named Entity | precision | recall | f1 |
|:----------:|:---------:|:---------:|:--------:|
| ANIM | 0.730443 | 0.687057 | 0.708086 |
| BIO | 0.330882 | 0.978261 | 0.494505 |
| CEL | 0.798561 | 0.917355 | 0.853846 |
| DIS | 0.738108 | 0.750894 | 0.744446 |
| EVE | 0.904899 | 0.907514 | 0.906205 |
| FOOD | 0.628664 | 0.623184 | 0.625912 |
| INST | 0.533333 | 0.551724 | 0.542373 |
| LOC | 0.967915 | 0.973997 | 0.970946 |
| MEDIA | 0.949627 | 0.913824 | 0.931382 |
| MYTH | 0.910000 | 0.866667 | 0.887805 |
| ORG | 0.924920 | 0.934136 | 0.929505 |
| PER | 0.989506 | 0.991020 | 0.990263 |
| PLANT | 0.637648 | 0.742222 | 0.685972 |
| TIME | 0.766355 | 0.794189 | 0.780024 |
| VEHI | 0.818182 | 0.647482 | 0.722892 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
### Test results
Overall (seqeval, on the Babelscape/multinerd test split): precision 0.9054, recall 0.9303, F1 0.9177, accuracy 0.9864.
# Nextcloud-AI/opus-mt-nl-fi

Task: translation · Created: 2024-02-23T10:46:10Z · Last modified: 2023-08-16T12:01:43+00:00 · Downloads: 10 · Likes: 0
---
license: apache-2.0
tags:
- translation
---
### opus-mt-nl-fi
* source languages: nl
* target languages: fi
* OPUS readme: [nl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nl.fi | 28.6 | 0.569 |
# alokabhishek/gemma-1.1-7b-it-GGUF

Task: text-generation · Created: 2024-04-08T18:54:41Z · Last modified: 2024-04-08T19:23:30+00:00 · Downloads: 69 · Likes: 1
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- GGUF
- quantized
- Q4_K_M
- Q5_K_M
- 4bit
- 5bit
- Gemma
- Gemma-7B
- Gemma-1.1
- Gemma-1.1-7b
- Google
---
# Model Card for alokabhishek/gemma-1.1-7b-it-GGUF
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a GGUF-quantized version of Google's Gemma-1.1-7b-it model, produced with llama.cpp.
## Model Details
- Model creator: [Google](https://huggingface.co/google)
- Original model: [gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)
### About GGUF quantization using llama.cpp
- llama.cpp github repo: [llama.cpp github repo](https://github.com/ggerganov/llama.cpp)
- llama-cpp-python github repo: [llama-cpp-python github repo](https://github.com/abetlen/llama-cpp-python)
# How to Get Started with the Model
Use the code below to get started with the model.
## How to run from Python code
#### First install the package
```shell
# Install llama-cpp-python (the example below uses the llama_cpp package)
! pip install llama-cpp-python
! pip install -U sentence-transformers
! pip install transformers huggingface_hub torch
```
# Import
```python
from llama_cpp import Llama
from transformers import pipeline, AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer
import os
```
# Using llama_cpp as a high-level helper
```python
repo_id = "alokabhishek/gemma-1.1-7b-it-GGUF"
filename = "Q4_K_M.gguf"
llm = Llama.from_pretrained(
repo_id=repo_id,
filename=filename,
verbose=False,
)
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
llm_response = llm.create_chat_completion(
messages=[{"role": "user", "content": prompt}],
temperature=1.5,
top_p=0.8,
top_k=50,
repeat_penalty=1.01,
)
llm_respose_formatted = llm_response["choices"][0]["message"]["content"]
print(llm_respose_formatted)
```
# Original Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by specifying the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
If you skip the dtype, the model is loaded in `float32`; this brings no precision increase, as the `bfloat16` weights are simply upcast to `float32`. See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
#### Running the model in JAX / Flax
Use the `flax` branch of the repository:
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGemmaForCausalLM
model_id = "google/gemma-1.1-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"
model, params = FlaxGemmaForCausalLM.from_pretrained(
model_id,
dtype=jnp.bfloat16,
revision="flax",
_do_init=False,
)
inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True)
output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False)
output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
```
[Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference.
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-1.1-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
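As a sketch, a manual builder (a hypothetical helper, not part of the Gemma tooling) that reproduces the template above could look like this:

```python
def build_gemma_prompt(chat):
    # Reproduce the Gemma chat template: each turn is wrapped in
    # <start_of_turn>{role} ... <end_of_turn>, then a model turn is opened.
    prompt = "<bos>"
    for turn in chat:
        prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # cue the model to generate its reply
    return prompt

chat = [{"role": "user", "content": "Write a hello world program"}]
print(build_gemma_prompt(chat))
```

In practice, prefer the tokenizer's `apply_chat_template`, which is guaranteed to match the model's expected format.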
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Fine-tuning
You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`.
We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
The pre-trained base models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 2B | Gemma PT 7B |
| ------------------------------ | ------------- | ----------- | ----------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | ----------- |
| **Average** | | **44.9** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 1.0
| Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 6.86 | 7.90 |
| [BOLD][bold] | | 45.57 | 49.08 |
| [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 |
| [Winogender][winogender] | top-1 | 51.25 | 54.17 |
| [TruthfulQA][truthfulqa] | | 44.84 | 31.81 |
| [Winobias 1_2][winobias] | | 56.12 | 59.09 |
| [Winobias 2_2][winobias] | | 91.10 | 92.23 |
| [Toxigen][toxigen] | | 29.77 | 39.59 |
| ------------------------ | ------------- | --------------- | --------------- |
#### Gemma 1.1
| Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 7.03 | 8.04 |
| [BOLD][bold] | | 47.76 | |
| [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 |
| [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 |
| [Winogender][winogender] | top-1 | 50.14 | 57.64 |
| [TruthfulQA][truthfulqa] | | 44.24 | 45.34 |
| [Winobias 1_2][winobias] | | 55.93 | 59.22 |
| [Winobias 2_2][winobias] | | 89.46 | 89.2 |
| [Toxigen][toxigen] | | 29.64 | 38.75 |
| ------------------------ | ------------- | --------------- | --------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
| null |
Non_BioNLP
|
# Model Card for alokabhishek/gemma-1.1-7b-it-GGUF
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a GGUF quantized version of Google's Gemma-1.1-7b-it model, produced using llama.cpp.
## Model Details
- Model creator: [Google](https://huggingface.co/google)
- Original model: [gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)
### About GGUF quantization using llama.cpp
- llama.cpp github repo: [llama.cpp github repo](https://github.com/ggerganov/llama.cpp)
- llama-cpp-python github repo: [llama-cpp-python github repo](https://github.com/abetlen/llama-cpp-python)
# How to Get Started with the Model
Use the code below to get started with the model.
## How to run from Python code
#### First install the package
```shell
# llama-cpp-python provides the `llama_cpp` module imported below
! pip install llama-cpp-python
# For CUDA GPU acceleration, build llama-cpp-python with the CUDA CMake flags
# (see the llama-cpp-python README for build options)
! pip install -U sentence-transformers
! pip install transformers huggingface_hub torch
```
# Import
```python
from llama_cpp import Llama
from transformers import pipeline, AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer
import os
```
# Using llama_cpp as a high-level helper
```python
repo_id = "alokabhishek/gemma-1.1-7b-it-GGUF"
filename = "Q4_K_M.gguf"
llm = Llama.from_pretrained(
repo_id=repo_id,
filename=filename,
verbose=False,
)
prompt = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
llm_response = llm.create_chat_completion(
messages=[{"role": "user", "content": prompt}],
temperature=1.5,
top_p=0.8,
top_k=50,
repeat_penalty=1.01,
)
llm_response_formatted = llm_response["choices"][0]["message"]["content"]
print(llm_response_formatted)
```
# Original Gemma Model Card
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the latest 7B instruct version of the Gemma model. Here you can find other models in the Gemma family:
| | Base | Instruct |
|----|----------------------------------------------------|----------------------------------------------------------------------|
| 2B | [gemma-2b](https://huggingface.co/google/gemma-2b) | [gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) |
| 7B | [gemma-7b](https://huggingface.co/google/gemma-7b) | [**gemma-1.1-7b-it**](https://huggingface.co/google/gemma-1.1-7b-it) |
**Release Notes**
This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release.
Gemma 1.1 was trained using a novel RLHF method, leading to substantial gains on quality, coding capabilities, factuality, instruction following and multi-turn conversation quality. We also fixed a bug in multi-turn conversations, and made sure that model responses don't always start with `"Sure,"`.
We believe this release represents an improvement for most use cases, but we encourage users to test in their particular applications. The previous model [will continue to be available in the same repo](https://huggingface.co/google/gemma-7b-it). We appreciate the enthusiastic adoption of Gemma, and we continue to welcome all feedback from the community.
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-1.1-7b-it",
quantization_config=quantization_config
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
#### Running the model in JAX / Flax
Use the `flax` branch of the repository:
```python
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxGemmaForCausalLM
model_id = "google/gemma-1.1-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "left"
model, params = FlaxGemmaForCausalLM.from_pretrained(
model_id,
dtype=jnp.bfloat16,
revision="flax",
_do_init=False,
)
inputs = tokenizer("Valencia and Málaga are", return_tensors="np", padding=True)
output = model.generate(**inputs, params=params, max_new_tokens=20, do_sample=False)
output_text = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
```
[Check this notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/jax_gemma.ipynb) for a comprehensive walkthrough on how to parallelize JAX inference.
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-1.1-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
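If you do need to build the prompt by hand, the format above can be sketched as a small helper. Note that `build_gemma_prompt` is a hypothetical illustration, not a library function; it simply follows the `<start_of_turn>` / `<end_of_turn>` delimiters described in this section.

```python
# Manually reproduce Gemma's chat format: each turn is wrapped in
# <start_of_turn>role ... <end_of_turn>, and the prompt ends with an
# open "model" turn so generation continues as the assistant.
def build_gemma_prompt(messages):
    prompt = "<bos>"
    for msg in messages:
        # Gemma uses "model" for assistant turns and "user" for user turns.
        role = "model" if msg["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # leave the model turn open for generation
    return prompt

chat = [{"role": "user", "content": "Write a hello world program"}]
print(build_gemma_prompt(chat))
```

Running this reproduces the prompt text shown above.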
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Fine-tuning
You can find some fine-tuning scripts under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model-id to `google/gemma-1.1-7b-it`.
We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
The pre-trained base models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 2B | Gemma PT 7B |
| ------------------------------ | ------------- | ----------- | ----------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23.0 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | ----------- |
| **Average** | | **44.9** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 1.0
| Benchmark | Metric | Gemma 1.0 IT 2B | Gemma 1.0 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 6.86 | 7.90 |
| [BOLD][bold] | | 45.57 | 49.08 |
| [CrowS-Pairs][crows] | top-1 | 45.82 | 51.33 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig][bbq] | top-1 | 54.62 | 71.99 |
| [Winogender][winogender] | top-1 | 51.25 | 54.17 |
| [TruthfulQA][truthfulqa] | | 44.84 | 31.81 |
| [Winobias 1_2][winobias] | | 56.12 | 59.09 |
| [Winobias 2_2][winobias] | | 91.10 | 92.23 |
| [Toxigen][toxigen] | | 29.77 | 39.59 |
| ------------------------ | ------------- | --------------- | --------------- |
#### Gemma 1.1
| Benchmark | Metric | Gemma 1.1 IT 2B | Gemma 1.1 IT 7B |
| ------------------------ | ------------- | --------------- | --------------- |
| [RealToxicity][realtox] | average | 7.03 | 8.04 |
| [BOLD][bold] | | 47.76 | |
| [CrowS-Pairs][crows] | top-1 | 45.89 | 49.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 58.97 | 86.06 |
| [BBQ Disambig][bbq] | top-1 | 53.90 | 85.08 |
| [Winogender][winogender] | top-1 | 50.14 | 57.64 |
| [TruthfulQA][truthfulqa] | | 44.24 | 45.34 |
| [Winobias 1_2][winobias] | | 55.93 | 59.22 |
| [Winobias 2_2][winobias] | | 89.46 | 89.2 |
| [Toxigen][toxigen] | | 29.64 | 38.75 |
| ------------------------ | ------------- | --------------- | --------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
{"library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["GGUF", "quantized", "Q4_K_M", "Q5_K_M", "4bit", "5bit", "Gemma", "Gemma-7B", "Gemma-1.1", "Gemma-1.1-7b", "Google"]}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,401 |
DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v5
|
DashReza7
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:410745",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-04T13:08:10Z |
2024-09-04T13:09:15+00:00
| 7 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:410745
- loss:ContrastiveLoss
widget:
- source_sentence: وینچ
sentences:
- ترقه شکلاتی ( هفت ترقه ) ناریه پارس درجه 1 بسته 15 عددی ترقه شکلاتی ( هفت ترقه
) ناریه پارس درجه 1 بسته 15 عددی 10عدد ناریه ترقه شکلاتی هفت ترقه بار تازه بدون
رطوبت وخرابی مارک معتبر نورافشانی
- پارچه میکرو کجراه
- Car winch-1500LBS-KARA وینچ خودرو آفرود ۶۸۰ کیلوگرم کارا ۱۵۰۰lbs وینچ خودرویی
(جلو ماشینی) 1500LBS کارا (KARA)
- source_sentence: ' وسپا '
sentences:
- پولوشرت زرد وسپا
- دوچرخه بند سقفی لیفان X70 ایکس 70 آلومینیومی طرح منابو
- دوچرخه ویوا Oxygen سایز 26 دوچرخه 26 ويوا OXYGEN دوچرخه کوهستان ویوا مدل OXYGEN
سایز 26
- source_sentence: دوچرخه المپیا سایز 27 5
sentences:
- دوچرخه شهری المپیا کد 16220 سایز 16 دوچرخه شهری المپیا کد 16220 سایز 16 دوچرخه
المپیا کد 16220 سایز 16 - OLYMPIA
- لامپ اس ام دی خودرو مدل 8B بسته 2 عددی
- قیمت کمپرس سنج موتور
- source_sentence: دچرخه ی
sentences:
- هیدروفیشیال ۷ کاره نیوفیس پلاس متور سنگین ۲۰۲۲
- جامدادی کیوت
- جعبه ی کادو ی رنگی
- source_sentence: هایومکس
sentences:
- انگشتر حدید صینی کد2439
- ژل هایومکس ولومایزر 2 سی سی
- دزدگیر پاناتک مدل P-CA501 دزدگیر پاناتک P-CA501-2 دزدگیر پاناتک مدل P-CA501-2
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.8531738206358597
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.763870358467102
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9032999224561303
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7447167634963989
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8649689236015621
name: Cosine Precision
- type: cosine_recall
value: 0.9451857194374323
name: Cosine Recall
- type: cosine_ap
value: 0.9354580013152192
name: Cosine Ap
- type: dot_accuracy
value: 0.8179627073336401
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 17.24372100830078
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.8831898479427548
name: Dot F1
- type: dot_f1_threshold
value: 16.905807495117188
name: Dot F1 Threshold
- type: dot_precision
value: 0.8255042324171805
name: Dot Precision
- type: dot_recall
value: 0.9495432143286453
name: Dot Recall
- type: dot_ap
value: 0.9192801272426158
name: Dot Ap
- type: manhattan_accuracy
value: 0.8484629374000306
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 56.168235778808594
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.9006901291486498
name: Manhattan F1
- type: manhattan_f1_threshold
value: 57.448089599609375
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8601706503309084
name: Manhattan Precision
- type: manhattan_recall
value: 0.9452157711263373
name: Manhattan Recall
- type: manhattan_ap
value: 0.9331690796886208
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.8485944039089375
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 3.5569825172424316
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.9009756516265629
name: Euclidean F1
- type: euclidean_f1_threshold
value: 3.694398880004883
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8597717468465025
name: Euclidean Precision
- type: euclidean_recall
value: 0.9463276836158192
name: Euclidean Recall
- type: euclidean_ap
value: 0.9332275611001725
name: Euclidean Ap
- type: max_accuracy
value: 0.8531738206358597
name: Max Accuracy
- type: max_accuracy_threshold
value: 56.168235778808594
name: Max Accuracy Threshold
- type: max_f1
value: 0.9032999224561303
name: Max F1
- type: max_f1_threshold
value: 57.448089599609375
name: Max F1 Threshold
- type: max_precision
value: 0.8649689236015621
name: Max Precision
- type: max_recall
value: 0.9495432143286453
name: Max Recall
- type: max_ap
value: 0.9354580013152192
name: Max Ap
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
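The `Pooling` module above is configured with `pooling_mode_mean_tokens`, i.e. the sentence embedding is the average of the token embeddings, counting only positions where the attention mask is 1. A minimal, library-independent sketch of that pooling step (toy 2-dimensional vectors, not real model outputs):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors where attention_mask == 1 (padding is ignored)."""
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for j in range(dim):
                summed[j] += vec[j]
    return [s / count for s in summed]

tokens = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]  # last row is padding
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```

In the real model this runs over 384-dimensional token embeddings produced by the transformer.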
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DashReza7/sentence-transformers_paraphrase-multilingual-MiniLM-L12-v2_FINETUNED_on_torob_data_v5")
# Run inference
sentences = [
'هایومکس',
'ژل هایومکس ولومایزر 2 سی سی',
'دزدگیر پاناتک مدل P-CA501 دزدگیر پاناتک P-CA501-2 دزدگیر پاناتک مدل P-CA501-2',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.8532 |
| cosine_accuracy_threshold | 0.7639 |
| cosine_f1 | 0.9033 |
| cosine_f1_threshold | 0.7447 |
| cosine_precision | 0.865 |
| cosine_recall | 0.9452 |
| cosine_ap | 0.9355 |
| dot_accuracy | 0.818 |
| dot_accuracy_threshold | 17.2437 |
| dot_f1 | 0.8832 |
| dot_f1_threshold | 16.9058 |
| dot_precision | 0.8255 |
| dot_recall | 0.9495 |
| dot_ap | 0.9193 |
| manhattan_accuracy | 0.8485 |
| manhattan_accuracy_threshold | 56.1682 |
| manhattan_f1 | 0.9007 |
| manhattan_f1_threshold | 57.4481 |
| manhattan_precision | 0.8602 |
| manhattan_recall | 0.9452 |
| manhattan_ap | 0.9332 |
| euclidean_accuracy | 0.8486 |
| euclidean_accuracy_threshold | 3.557 |
| euclidean_f1 | 0.901 |
| euclidean_f1_threshold | 3.6944 |
| euclidean_precision | 0.8598 |
| euclidean_recall | 0.9463 |
| euclidean_ap | 0.9332 |
| max_accuracy | 0.8532 |
| max_accuracy_threshold | 56.1682 |
| max_f1 | 0.9033 |
| max_f1_threshold | 57.4481 |
| max_precision | 0.865 |
| max_recall | 0.9495 |
| **max_ap** | **0.9355** |
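The `cosine_f1_threshold` above (~0.745) is the operating point that maximized F1 during evaluation; to use the model as a pair classifier you compare the cosine similarity of two embeddings against it. A minimal sketch in plain Python — the toy vectors stand in for real `model.encode(...)` outputs:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

COSINE_F1_THRESHOLD = 0.7447  # from the table above

def is_match(emb_a, emb_b, threshold=COSINE_F1_THRESHOLD):
    """Classify a pair as matching when cosine similarity clears the threshold."""
    return cosine(emb_a, emb_b) >= threshold

# toy vectors standing in for model.encode(...) outputs
a = [0.9, 0.1, 0.0]
b = [0.8, 0.2, 0.1]
c = [0.0, 0.1, 0.9]
print(is_match(a, b))  # similar direction -> True
print(is_match(a, c))  # dissimilar -> False
```

With real embeddings, `a`, `b`, `c` would come from `model.encode([...])` as in the usage example above.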
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | max_ap |
|:------:|:----:|:-------------:|:------:|
| None | 0 | - | 0.8131 |
| 0.3115 | 500 | 0.0256 | - |
| 0.6231 | 1000 | 0.0179 | - |
| 0.9346 | 1500 | 0.0165 | - |
| 1.2461 | 2000 | 0.0152 | - |
| 1.5576 | 2500 | 0.0148 | - |
| 1.8692 | 3000 | 0.0144 | - |
| 2.0 | 3210 | - | 0.9355 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "cosine_accuracy_threshold", "cosine_f1", "cosine_f1_threshold", "cosine_precision", "cosine_recall", "cosine_ap", "dot_accuracy", "dot_accuracy_threshold", "dot_f1", "dot_f1_threshold", "dot_precision", "dot_recall", "dot_ap", "manhattan_accuracy", "manhattan_accuracy_threshold", "manhattan_f1", "manhattan_f1_threshold", "manhattan_precision", "manhattan_recall", "manhattan_ap", "euclidean_accuracy", "euclidean_accuracy_threshold", "euclidean_f1", "euclidean_f1_threshold", "euclidean_precision", "euclidean_recall", "euclidean_ap", "max_accuracy", "max_accuracy_threshold", "max_f1", "max_f1_threshold", "max_precision", "max_recall", "max_ap"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:410745", "loss:ContrastiveLoss"], "widget": [{"source_sentence": "وینچ", "sentences": ["ترقه شکلاتی ( هفت ترقه ) ناریه پارس درجه 1 بسته 15 عددی ترقه شکلاتی ( هفت ترقه ) ناریه پارس درجه 1 بسته 15 عددی 10عدد ناریه ترقه شکلاتی هفت ترقه بار تازه بدون رطوبت وخرابی مارک معتبر نورافشانی", "پارچه میکرو کجراه", "Car winch-1500LBS-KARA وینچ خودرو آفرود ۶۸۰ کیلوگرم کارا ۱۵۰۰lbs وینچ خودرویی (جلو ماشینی) 1500LBS کارا (KARA)"]}, {"source_sentence": " وسپا ", "sentences": ["پولوشرت زرد وسپا", "دوچرخه بند سقفی لیفان X70 ایکس 70 آلومینیومی طرح منابو", "دوچرخه ویوا Oxygen سایز 26 دوچرخه 26 ويوا OXYGEN دوچرخه کوهستان ویوا مدل OXYGEN سایز 26"]}, {"source_sentence": "دوچرخه المپیا سایز 27 5", "sentences": ["دوچرخه شهری المپیا کد 16220 سایز 16 دوچرخه شهری المپیا کد 16220 سایز 16 دوچرخه المپیا کد 16220 سایز 16 - OLYMPIA", "لامپ اس ام دی خودرو مدل 8B بسته 2 عددی", "قیمت کمپرس سنج موتور"]}, {"source_sentence": "دچرخه ی", "sentences": ["هیدروفیشیال ۷ کاره نیوفیس پلاس متور سنگین ۲۰۲۲", "جامدادی کیوت", "جعبه ی کادو ی 
رنگی"]}, {"source_sentence": "هایومکس", "sentences": ["انگشتر حدید صینی کد2439", "ژل هایومکس ولومایزر 2 سی سی", "دزدگیر پاناتک مدل P-CA501 دزدگیر پاناتک P-CA501-2 دزدگیر پاناتک مدل P-CA501-2"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "results": [{"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy", "value": 0.8531738206358597, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.763870358467102, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", "value": 0.9032999224561303, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.7447167634963989, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.8649689236015621, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.9451857194374323, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.9354580013152192, "name": "Cosine Ap"}, {"type": "dot_accuracy", "value": 0.8179627073336401, "name": "Dot Accuracy"}, {"type": "dot_accuracy_threshold", "value": 17.24372100830078, "name": "Dot Accuracy Threshold"}, {"type": "dot_f1", "value": 0.8831898479427548, "name": "Dot F1"}, {"type": "dot_f1_threshold", "value": 16.905807495117188, "name": "Dot F1 Threshold"}, {"type": "dot_precision", "value": 0.8255042324171805, "name": "Dot Precision"}, {"type": "dot_recall", "value": 0.9495432143286453, "name": "Dot Recall"}, {"type": "dot_ap", "value": 0.9192801272426158, "name": "Dot Ap"}, {"type": "manhattan_accuracy", "value": 0.8484629374000306, "name": "Manhattan Accuracy"}, {"type": "manhattan_accuracy_threshold", "value": 56.168235778808594, "name": "Manhattan Accuracy Threshold"}, {"type": "manhattan_f1", "value": 0.9006901291486498, "name": "Manhattan F1"}, {"type": "manhattan_f1_threshold", "value": 57.448089599609375, "name": "Manhattan F1 Threshold"}, 
{"type": "manhattan_precision", "value": 0.8601706503309084, "name": "Manhattan Precision"}, {"type": "manhattan_recall", "value": 0.9452157711263373, "name": "Manhattan Recall"}, {"type": "manhattan_ap", "value": 0.9331690796886208, "name": "Manhattan Ap"}, {"type": "euclidean_accuracy", "value": 0.8485944039089375, "name": "Euclidean Accuracy"}, {"type": "euclidean_accuracy_threshold", "value": 3.5569825172424316, "name": "Euclidean Accuracy Threshold"}, {"type": "euclidean_f1", "value": 0.9009756516265629, "name": "Euclidean F1"}, {"type": "euclidean_f1_threshold", "value": 3.694398880004883, "name": "Euclidean F1 Threshold"}, {"type": "euclidean_precision", "value": 0.8597717468465025, "name": "Euclidean Precision"}, {"type": "euclidean_recall", "value": 0.9463276836158192, "name": "Euclidean Recall"}, {"type": "euclidean_ap", "value": 0.9332275611001725, "name": "Euclidean Ap"}, {"type": "max_accuracy", "value": 0.8531738206358597, "name": "Max Accuracy"}, {"type": "max_accuracy_threshold", "value": 56.168235778808594, "name": "Max Accuracy Threshold"}, {"type": "max_f1", "value": 0.9032999224561303, "name": "Max F1"}, {"type": "max_f1_threshold", "value": 57.448089599609375, "name": "Max F1 Threshold"}, {"type": "max_precision", "value": 0.8649689236015621, "name": "Max Precision"}, {"type": "max_recall", "value": 0.9495432143286453, "name": "Max Recall"}, {"type": "max_ap", "value": 0.9354580013152192, "name": "Max Ap"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,402 |
RashidNLP/NER-Deberta
|
RashidNLP
|
token-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"deberta-v3",
"en",
"dataset:DFKI-SLT/few-nerd",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-19T18:28:34Z |
2023-06-26T07:36:07+00:00
| 49 | 6 |
---
datasets:
- DFKI-SLT/few-nerd
language:
- en
library_name: transformers
license: mit
metrics:
- accuracy
- f1
pipeline_tag: token-classification
tags:
- deberta-v3
---
## Deberta for Named Entity Recognition
I used a pretrained DeBERTa-v3-base model and fine-tuned it on Few-NERD, a NER dataset that contains over 180k examples and over 4.6 million tokens.
The token labels are Person, Organisation, Location, Building, Event, Product, Art & Misc.
## How to use the model
```python
from transformers import pipeline

def print_ner(sentences):
    """Clean and print NER results."""
    for sentence in sentences:
        last_entity_type = sentence[0]['entity']
        last_index = sentence[0]['index']
        word = sentence[0]['word']
        for i, token in enumerate(sentence):
            if i > 0:
                if (token['entity'] == last_entity_type) and (token['index'] == last_index + 1):
                    # same entity continues: append the sub-word piece
                    word = word + token['word']
                else:
                    # entity changed: flush the accumulated word
                    word = word.replace('▁', ' ')
                    print(f"{word[1:]} {last_entity_type}")
                    word = token['word']
                last_entity_type = token['entity']
                last_index = token['index']
            if i == len(sentence) - 1:
                # flush the final word
                word = word.replace('▁', ' ')
                print(f"{word[1:]} {last_entity_type}")

pipe = pipeline(model='RashidNLP/NER-Deberta')
sentence = pipe(["Elon Musk will be at SpaceX's Starbase facility in Boca Chica for the orbital launch of starship next month"])
print_ner(sentence)
```
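The same merging idea can be exercised without downloading the model by feeding hand-built tokens in the shape the pipeline returns (`entity`, `index`, `word`, with `▁` marking word starts in the SentencePiece convention). The helper and toy data below are an illustrative sketch, not the model's actual output:

```python
def group_tokens(tokens):
    """Merge consecutive sub-word tokens that share an entity label.

    `tokens` mimics the pipeline output: dicts with 'entity', 'index', 'word'.
    """
    groups = []
    current_word, current_entity, last_index = None, None, None
    for tok in tokens:
        if (current_word is not None
                and tok['entity'] == current_entity
                and tok['index'] == last_index + 1):
            # same entity continues on the next token: extend the word
            current_word += tok['word']
        else:
            if current_word is not None:
                groups.append((current_word.replace('▁', ' ').strip(), current_entity))
            current_word, current_entity = tok['word'], tok['entity']
        last_index = tok['index']
    if current_word is not None:
        groups.append((current_word.replace('▁', ' ').strip(), current_entity))
    return groups

toy = [
    {'entity': 'PER', 'index': 1, 'word': '▁Elon'},
    {'entity': 'PER', 'index': 2, 'word': '▁Musk'},
    {'entity': 'ORG', 'index': 5, 'word': '▁Space'},
    {'entity': 'ORG', 'index': 6, 'word': 'X'},
]
print(group_tokens(toy))  # [('Elon Musk', 'PER'), ('SpaceX', 'ORG')]
```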
| null |
Non_BioNLP
|
|
{"datasets": ["DFKI-SLT/few-nerd"], "language": ["en"], "library_name": "transformers", "license": "mit", "metrics": ["accuracy", "f1"], "pipeline_tag": "token-classification", "tags": ["deberta-v3"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 44,403 |
teostereciu/ltp-wdt-identifier
|
teostereciu
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-06-14T21:49:54Z |
2023-06-16T10:45:04+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# /var/folders/x1/dl1z_tcs7zb6pppfbf65d5sh0000gn/T/tmp5l0jmoom/teostereciu/ltp-wdt-identifier
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
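Step 1 relies on turning a handful of labeled sentences into many similar/dissimilar sentence pairs for contrastive fine-tuning. A toy sketch of that pair-generation idea (an illustration of the concept, not the actual SetFit implementation):

```python
from itertools import combinations

def make_contrastive_pairs(examples):
    """examples: list of (text, label). Returns (text_a, text_b, similar) triples,
    where similar is 1 for same-label pairs and 0 otherwise."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1 if label_a == label_b else 0))
    return pairs

few_shot = [("great movie", "pos"), ("loved it", "pos"), ("awful plot", "neg")]
print(make_contrastive_pairs(few_shot))
```

Even three labeled examples already yield three training pairs, which is why the approach works in the few-shot regime.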
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/x1/dl1z_tcs7zb6pppfbf65d5sh0000gn/T/tmp5l0jmoom/teostereciu/ltp-wdt-identifier")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,404 |
chunwoolee0/imdb_distilbert_base_uncased_finetuned
|
chunwoolee0
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-29T09:12:16Z |
2023-07-29T10:30:20+00:00
| 9 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: imdb_distilbert_base_uncased_finetuned
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.93272
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb_distilbert_base_uncased_finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
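The linear scheduler above decays the learning rate from its initial value to zero over the total number of optimizer steps; a minimal sketch (assuming no warmup):

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# 3126 optimizer steps total (1563 per epoch * 2 epochs)
print(linear_lr(0, 3126))     # start of training: 2e-05
print(linear_lr(1563, 3126))  # end of epoch 1: 1e-05
```

With 1563 steps per epoch and 2 epochs, the rate is exactly halved by the end of epoch 1.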
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.226 | 1.0 | 1563 | 0.2428 | 0.9119 |
| 0.1538 | 2.0 | 3126 | 0.2206 | 0.9327 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb_distilbert_base_uncased_finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.226 | 1.0 | 1563 | 0.2428 | 0.9119 |
| 0.1538 | 2.0 | 3126 | 0.2206 | 0.9327 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "imdb_distilbert_base_uncased_finetuned", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93272, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,405 |
ucsahin/mT5-base-turkish-qa
|
ucsahin
|
text2text-generation
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"Question Answering",
"generated_from_trainer",
"tr",
"dataset:ucsahin/TR-Extractive-QA-82K",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-20T23:21:18Z |
2024-07-06T09:25:33+00:00
| 47 | 2 |
---
base_model: google/mt5-base
datasets:
- ucsahin/TR-Extractive-QA-82K
language:
- tr
license: apache-2.0
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- Question Answering
- generated_from_trainer
widget:
- text: 'Soru: Nazım Hikmet ne zaman doğmuştur?
Metin: Nâzım Hikmet, Mehmed Nâzım adıyla 15 Ocak 1902 tarihinde Selanik''te doğdu.
O sırada Hariciye Nezareti memuru olarak Selanik''te çalışan Hikmet Bey, Nâzım''ın
çocukluğunda memuriyetten ayrıldı ve ailesiyle birlikte, Halep''te bulunan babasının
yanına gitti. Burada bulundukları sırada Hikmet-Celile çiftinin biri Ali İbrahim,
diğeri Samiye adında iki çocuğu oldu, fakat Ali İbrahim dizanteriye yakalanıp
öldü.'
model-index:
- name: mT5-base-turkish-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5-base-turkish-qa
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the [ucsahin/TR-Extractive-QA-82K](https://huggingface.co/datasets/ucsahin/TR-Extractive-QA-82K) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5109
- Rouge1: 79.3283
- Rouge2: 68.0845
- Rougel: 79.3474
- Rougelsum: 79.2937
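ROUGE-1, reported above, is a unigram-overlap F-score between prediction and reference. A simplified sketch — no stemming, casing, or aggregation, unlike the official scorer:

```python
from collections import Counter

def rouge1_f(prediction, reference):
    """Unigram-overlap F1 between a prediction and a reference string."""
    pred, ref = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("15 Ocak 1902", "15 Ocak 1902 tarihinde"))
```

ROUGE-2 counts bigram overlaps instead, and ROUGE-L is based on the longest common subsequence.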
## Model description
The mT5-base model is trained on a manually curated Turkish dataset consisting of 65K training samples with ("question", "answer", "context") triplets.
## Intended uses & limitations
The intended use of the model is extractive question answering.
In order to use the inference widget, enter your input in the format:
```
Soru: question_text
Metin: context_text
```
The model generates a response in the format:
```
Cevap: answer_text
```
Use with Transformers:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset
# Load the dataset
qa_tr_datasets = load_dataset("ucsahin/TR-Extractive-QA-82K")
# Load model and tokenizer
model_checkpoint = "ucsahin/mT5-base-turkish-qa"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
inference_dataset = qa_tr_datasets["test"].select(range(10))
for sample in inference_dataset:
    input_question = "Soru: " + sample["question"]
    input_context = "Metin: " + sample["context"]
    tokenized_inputs = tokenizer(input_question, input_context, max_length=512, truncation=True, return_tensors="pt")
    outputs = model.generate(input_ids=tokenized_inputs["input_ids"], max_new_tokens=32)
    output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    print(f"Reference answer: {sample['answer']}, Model Answer: {output_text}")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
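Adam with the betas and epsilon above keeps bias-corrected running estimates of the gradient's first and second moments; a single-scalar sketch of one update step:

```python
def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
print(p)
```

The step size adapts per parameter: large recent gradients inflate `v_hat` and shrink the effective update.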
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0454 | 0.13 | 500 | 0.6771 | 73.1040 | 59.8915 | 73.1819 | 73.0558 |
| 0.8012 | 0.26 | 1000 | 0.6012 | 76.3357 | 64.1967 | 76.3796 | 76.2688 |
| 0.7703 | 0.39 | 1500 | 0.5844 | 76.8932 | 65.2509 | 76.9932 | 76.9418 |
| 0.6783 | 0.51 | 2000 | 0.5587 | 76.7284 | 64.8453 | 76.7416 | 76.6720 |
| 0.6546 | 0.64 | 2500 | 0.5362 | 78.2261 | 66.5893 | 78.2515 | 78.2142 |
| 0.6289 | 0.77 | 3000 | 0.5133 | 78.6917 | 67.1534 | 78.6852 | 78.6319 |
| 0.6292 | 0.9 | 3500 | 0.5109 | 79.3283 | 68.0845 | 79.3474 | 79.2937 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5-base-turkish-qa
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the [ucsahin/TR-Extractive-QA-82K](https://huggingface.co/datasets/ucsahin/TR-Extractive-QA-82K) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5109
- Rouge1: 79.3283
- Rouge2: 68.0845
- Rougel: 79.3474
- Rougelsum: 79.2937
## Model description
The mT5-base model is trained on a manually curated Turkish dataset consisting of 65K training samples with ("question", "answer", "context") triplets.
## Intended uses & limitations
The intended use of the model is extractive question answering.
In order to use the inference widget, enter your input in the format:
```
Soru: question_text
Metin: context_text
```
The model generates a response in the format:
```
Cevap: answer_text
```
Use with Transformers:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset
# Load the dataset
qa_tr_datasets = load_dataset("ucsahin/TR-Extractive-QA-82K")
# Load model and tokenizer
model_checkpoint = "ucsahin/mT5-base-turkish-qa"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
inference_dataset = qa_tr_datasets["test"].select(range(10))
for sample in inference_dataset:
    input_question = "Soru: " + sample["question"]
    input_context = "Metin: " + sample["context"]
    tokenized_inputs = tokenizer(input_question, input_context, max_length=512, truncation=True, return_tensors="pt")
    outputs = model.generate(input_ids=tokenized_inputs["input_ids"], max_new_tokens=32)
    output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    print(f"Reference answer: {sample['answer']}, Model Answer: {output_text}")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0454 | 0.13 | 500 | 0.6771 | 73.1040 | 59.8915 | 73.1819 | 73.0558 |
| 0.8012 | 0.26 | 1000 | 0.6012 | 76.3357 | 64.1967 | 76.3796 | 76.2688 |
| 0.7703 | 0.39 | 1500 | 0.5844 | 76.8932 | 65.2509 | 76.9932 | 76.9418 |
| 0.6783 | 0.51 | 2000 | 0.5587 | 76.7284 | 64.8453 | 76.7416 | 76.6720 |
| 0.6546 | 0.64 | 2500 | 0.5362 | 78.2261 | 66.5893 | 78.2515 | 78.2142 |
| 0.6289 | 0.77 | 3000 | 0.5133 | 78.6917 | 67.1534 | 78.6852 | 78.6319 |
| 0.6292 | 0.9 | 3500 | 0.5109 | 79.3283 | 68.0845 | 79.3474 | 79.2937 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
{"base_model": "google/mt5-base", "datasets": ["ucsahin/TR-Extractive-QA-82K"], "language": ["tr"], "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "text2text-generation", "tags": ["Question Answering", "generated_from_trainer"], "widget": [{"text": "Soru: Nazım Hikmet ne zaman doğmuştur?\nMetin: Nâzım Hikmet, Mehmed Nâzım adıyla 15 Ocak 1902 tarihinde Selanik'te doğdu. O sırada Hariciye Nezareti memuru olarak Selanik'te çalışan Hikmet Bey, Nâzım'ın çocukluğunda memuriyetten ayrıldı ve ailesiyle birlikte, Halep'te bulunan babasının yanına gitti. Burada bulundukları sırada Hikmet-Celile çiftinin biri Ali İbrahim, diğeri Samiye adında iki çocuğu oldu, fakat Ali İbrahim dizanteriye yakalanıp öldü."}], "model-index": [{"name": "mT5-base-turkish-qa", "results": []}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 44,406 |
Helsinki-NLP/opus-mt-sv-lv
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"lv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:05:35+00:00
| 50 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-sv-lv
* source languages: sv
* target languages: lv
* OPUS readme: [sv-lv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-lv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.lv | 20.2 | 0.433 |
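chr-F, reported next to BLEU above, is a character n-gram F-score. A simplified single-order sketch (the standard metric averages several n-gram orders and weights recall more heavily):

```python
from collections import Counter

def char_ngrams(text, n):
    """Counter of overlapping character n-grams (whitespace kept)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf_simple(hypothesis, reference, n=3):
    """Character n-gram F1 between hypothesis and reference."""
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(chrf_simple("kaķis sēž", "kaķis sēd"))
```

Working at the character level makes chr-F more forgiving of inflection differences than BLEU, which matters for morphologically rich targets like Latvian.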
| null |
Non_BioNLP
|
### opus-mt-sv-lv
* source languages: sv
* target languages: lv
* OPUS readme: [sv-lv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-lv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.lv | 20.2 | 0.433 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 44,407 |
paust/pko-flan-t5-large
|
paust
|
text2text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-28T01:51:15Z |
2023-08-17T10:55:54+00:00
| 129 | 5 |
---
language: ko
library_name: transformers
license: mit
pipeline_tag: text2text-generation
---
# FLAN T5
[Source Code](https://github.com/paust-team/pko-t5/tree/main/pkot5/flan)
FLAN T5 is a model built on top of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) via instruction finetuning on a variety of tasks.
Instruction finetuning is still ongoing, and intermediate results are continuously published as model updates.
## Trained tasks
| Task name | Task type |
|----------------------------|----------------|
| NSMC | Classification |
| Klue Ynat | Classification |
| KorNLI | Classification |
| KorSTS | Classification |
| QuestionPair | Classification |
| Klue STS | Classification |
| AIHub news Summary | Summarization |
| AIHub document Summary | Summarization |
| AIHub book Summary | Summarization |
| AIHub conversation Summary | Summarization |
| AIHub ko-to-en | Translation |
| AIHub ko-to-en Expert | Translation |
| AIHub ko-to-en Tech | Translation |
| AIHub ko-to-en social | Translation |
| AIHub ko-to-jp | Translation |
| AIHub ko-to-cn Tech | Translation |
| AIHub Translation Corpus | Translation |
| korquad | QA |
| Klue MRC | QA |
| AIHub mindslab's MRC | QA |
## Model
- [Hugging Face link](https://huggingface.co/paust/pko-flan-t5-large)
## Usage example
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-flan-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-flan-t5-large', device_map='cuda')
prompt = """서울특별시(서울特別市, 영어: Seoul Metropolitan Government)는 대한민국 수도이자 최대 도시이다. 선사시대부터 사람이 거주하였으나 본 역사는 백제 첫 수도 위례성을 시초로 한다. 삼국시대에는 전략적 요충지로서 고구려, 백제, 신라가 번갈아 차지하였으며, 고려 시대에는 왕실의 별궁이 세워진 남경(南京)으로 이름하였다.
한국의 수도는 어디입니까?"""
input_ids = tokenizer(prompt, add_special_tokens=True, return_tensors='pt').input_ids
output_ids = model.generate(input_ids=input_ids.cuda(), max_new_tokens=32, num_beams=12)
text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(text) # 서울특별시
```
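`num_beams=12` in the snippet above enables beam search, which keeps the k highest-scoring partial sequences at each decoding step. A toy sketch over a hypothetical next-token probability table (not the transformers implementation):

```python
import math

def beam_search(start, next_scores, steps, k):
    """Keep the k best sequences by summed log-probability at each step."""
    beams = [([start], 0.0)]
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for tok, p in next_scores.get(seq[-1], {}).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:k]
    return beams[0][0]

# hypothetical per-token continuation probabilities
table = {"<s>": {"서울": 0.6, "부산": 0.4},
         "서울": {"특별시": 0.9, "</s>": 0.1},
         "부산": {"광역시": 0.8, "</s>": 0.2}}
print(beam_search("<s>", table, steps=2, k=2))
```

Unlike greedy decoding, beam search can recover a sequence whose first token was not the single best choice, at the cost of k forward passes per step.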
## License
pko-t5, created by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
| null |
Non_BioNLP
|
# FLAN T5
[Source Code](https://github.com/paust-team/pko-t5/tree/main/pkot5/flan)
FLAN T5 is a model built on top of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) via instruction finetuning on a variety of tasks.
Instruction finetuning is still ongoing, and intermediate results are continuously published as model updates.
## Trained tasks
| Task name | Task type |
|----------------------------|----------------|
| NSMC | Classification |
| Klue Ynat | Classification |
| KorNLI | Classification |
| KorSTS | Classification |
| QuestionPair | Classification |
| Klue STS | Classification |
| AIHub news Summary | Summarization |
| AIHub document Summary | Summarization |
| AIHub book Summary | Summarization |
| AIHub conversation Summary | Summarization |
| AIHub ko-to-en | Translation |
| AIHub ko-to-en Expert | Translation |
| AIHub ko-to-en Tech | Translation |
| AIHub ko-to-en social | Translation |
| AIHub ko-to-jp | Translation |
| AIHub ko-to-cn Tech | Translation |
| AIHub Translation Corpus | Translation |
| korquad | QA |
| Klue MRC | QA |
| AIHub mindslab's MRC | QA |
## Model
- [Hugging Face link](https://huggingface.co/paust/pko-flan-t5-large)
## Usage example
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-flan-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-flan-t5-large', device_map='cuda')
prompt = """서울특별시(서울特別市, 영어: Seoul Metropolitan Government)는 대한민국 수도이자 최대 도시이다. 선사시대부터 사람이 거주하였으나 본 역사는 백제 첫 수도 위례성을 시초로 한다. 삼국시대에는 전략적 요충지로서 고구려, 백제, 신라가 번갈아 차지하였으며, 고려 시대에는 왕실의 별궁이 세워진 남경(南京)으로 이름하였다.
한국의 수도는 어디입니까?"""
input_ids = tokenizer(prompt, add_special_tokens=True, return_tensors='pt').input_ids
output_ids = model.generate(input_ids=input_ids.cuda(), max_new_tokens=32, num_beams=12)
text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(text) # 서울특별시
```
## License
pko-t5, created by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
|
{"language": "ko", "library_name": "transformers", "license": "mit", "pipeline_tag": "text2text-generation"}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 44,408 |
Intel/whisper-small-int8-dynamic-inc
|
Intel
|
automatic-speech-recognition
|
[
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"int8",
"ONNX",
"PostTrainingDynamic",
"Intel® Neural Compressor",
"neural-compressor",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-07-07T09:33:53Z |
2023-07-07T10:03:03+00:00
| 10 | 0 |
---
datasets:
- librispeech_asr
library_name: transformers
license: apache-2.0
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- automatic-speech-recognition
- int8
- ONNX
- PostTrainingDynamic
- Intel® Neural Compressor
- neural-compressor
---
## Model Details: INT8 Whisper small
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below:
```shell
optimum-cli export onnx --model openai/whisper-small whisper-small-with-past/ --task automatic-speech-recognition-with-past --opset 13
```
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | July 7, 2023 |
| Version | 1 |
| Type | Speech Recognition |
| Paper or Other Resources | - |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-small-int8-dynamic/discussions)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for automatic speech recognition inference |
| Primary intended users | Anyone doing automatic speech recognition inference |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Download the model by cloning the repository:
```shell
git clone https://huggingface.co/Intel/whisper-small-int8-dynamic
```
Evaluate the model with the code below:
```python
import os
from evaluate import load
from datasets import load_dataset
from transformers import WhisperProcessor
model_name = 'openai/whisper-small'
model_path = 'whisper-small-int8-dynamic'
processor = WhisperProcessor.from_pretrained(model_name)
wer = load("wer")
librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import PretrainedConfig
model_config = PretrainedConfig.from_pretrained(model_name)
predictions = []
references = []
sessions = ORTModelForSpeechSeq2Seq.load_model(
    os.path.join(model_path, 'encoder_model.onnx'),
    os.path.join(model_path, 'decoder_model.onnx'),
    os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])
for idx, batch in enumerate(librispeech_test_clean):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    reference = processor.tokenizer._normalize(batch['text'])
    references.append(reference)
    predicted_ids = model.generate(input_features)[0]
    transcription = processor.decode(predicted_ids)
    prediction = processor.tokenizer._normalize(transcription)
    predictions.append(prediction)
wer_result = wer.compute(references=references, predictions=predictions)
print(f"Result wer: {wer_result * 100}")
accuracy = 1 - wer_result
print("Accuracy: %.5f" % accuracy)
```
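The `wer` metric used above is the word-level Levenshtein distance divided by the reference length; a minimal stdlib sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(r)][len(h)] / len(r)

print(word_error_rate("the cat sat", "the cat sat on"))
```

Because it counts insertions as well as substitutions and deletions, WER can exceed 1.0 for very long hypotheses.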
## Metrics (Model Performance):
| Model | Model Size (GB) | wer |
|---|:---:|:---:|
| FP32 |2.4|3.45|
| INT8 |0.4|3.42|
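Dynamic int8 quantization maps each fp32 tensor onto 256 integer levels with a scale computed at runtime from the observed range. A symmetric per-tensor sketch in plain Python — illustrative only, not neural-compressor's exact scheme:

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: returns (int8 values, scale)."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against all-zero tensors
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.0031, 1.0]
q, scale = quantize_int8(weights)
print(q, [round(w, 3) for w in dequantize(q, scale)])
```

Storing one scale per tensor plus int8 values is what shrinks the model roughly 4x (2.4 GB → 0.4 GB above) while keeping WER nearly unchanged.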
| null |
Non_BioNLP
|
## Model Details: INT8 Whisper small
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below:
```shell
optimum-cli export onnx --model openai/whisper-small whisper-small-with-past/ --task automatic-speech-recognition-with-past --opset 13
```
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | July 7, 2023 |
| Version | 1 |
| Type | Speech Recognition |
| Paper or Other Resources | - |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-small-int8-dynamic/discussions)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for automatic speech recognition inference |
| Primary intended users | Anyone doing automatic speech recognition inference |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Download the model by cloning the repository:
```shell
git clone https://huggingface.co/Intel/whisper-small-int8-dynamic
```
Evaluate the model with the code below:
```python
import os
from evaluate import load
from datasets import load_dataset
from transformers import WhisperProcessor
model_name = 'openai/whisper-small'
model_path = 'whisper-small-int8-dynamic'
processor = WhisperProcessor.from_pretrained(model_name)
wer = load("wer")
librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import PretrainedConfig
model_config = PretrainedConfig.from_pretrained(model_name)
predictions = []
references = []
sessions = ORTModelForSpeechSeq2Seq.load_model(
    os.path.join(model_path, 'encoder_model.onnx'),
    os.path.join(model_path, 'decoder_model.onnx'),
    os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])
for idx, batch in enumerate(librispeech_test_clean):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    reference = processor.tokenizer._normalize(batch['text'])
    references.append(reference)
    predicted_ids = model.generate(input_features)[0]
    transcription = processor.decode(predicted_ids)
    prediction = processor.tokenizer._normalize(transcription)
    predictions.append(prediction)
wer_result = wer.compute(references=references, predictions=predictions)
print(f"Result wer: {wer_result * 100}")
accuracy = 1 - wer_result
print("Accuracy: %.5f" % accuracy)
```
## Metrics (Model Performance):
| Model | Model Size (GB) | wer |
|---|:---:|:---:|
| FP32 |2.4|3.45|
| INT8 |0.4|3.42|
|
{"datasets": ["librispeech_asr"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition", "tags": ["automatic-speech-recognition", "int8", "ONNX", "PostTrainingDynamic", "Intel® Neural Compressor", "neural-compressor"]}
|
task
|
[
"TRANSLATION"
] | 44,409 |
seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-50
|
seongil-dn
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32",
"base_model:finetune:seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-13T15:05:07Z |
2024-12-13T15:06:27+00:00
| 7 | 0 |
---
base_model: seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedGISTEmbedLoss
widget:
- source_sentence: 신체와 행동상의 특성을 통해 신원을 확인하는 기술로서 가장 민감한 개인정보에 해당하는 것은 뭐야?
sentences:
- 그러나 「개인정보 보호법」은 모든 생체인식 정보를 민감정보로서 포함하고 있지 않다. 예를 들어 지문은 개인의 고유성, 동일성을 나타내고 정보주체를
타인으로부터 식별가능하게 하는 개인정보이지만(헌법재판소, 2005), 현행 개인정보 보호법에서는 민감정보로 규율되고 있지 않다. 또 민족 또는
인종적 기원을 드러낼 수 있는 유전정보는 민감정보로서 보호받고 있지만, 국제 규범에서 통상 민감정보로서 보호하고 있는 민족 또는 인종적 기원에
대한 보호를 우리 법률은 명시하고 있지 않다. 어플리케이션이나 기기 등으로 수집되는 개인정보 또한 모두가 건강정보는 아닐 수 있다. 다만 진료정보인
경우이거나 직접적으로나 간접적으로 건강 상태나 위험에 대해 판단할 수 있는 원본 감지정보인 경우, 혹은 건강상태나 건강위험에 대한 결론을 도출하는
경우는 민감정보에 포함되는 건강관련 정보로서 보호될 수 있을 것이다.
- '2. 거시경제분석모델을 통한 수요전망
가. 실물경제 및 금융산업 성장 전망
(1) 실물경제 성장 전망
□ 한국 실질 국내총생산(GDP)은 성장률이 지속적으로 하락하는 추세에 있음. ○ 한국의 연평균 성장률은 1980년대에는 8.9%, 1990년부터
외환위기 이전인 1997년까지는 8.2%로 빠른 성장세를 유지하였으나, 1998년 외환위기로 성장률이 △5.5%까지 하락함. ○ 1999년에는
기저효과 등으로 실질 국내총생산(GDP) 성장률이 11.3%까지 반등하였다가 점차 하락세를 보여 2003년에는 2.9%까지 하락함
○ 이후 2007년까지 연평균 4.9%의 성장률을 보였으나, 2008년과 2009년에는 글로벌 금융위기의 영향으로 다시 큰 폭으로 하락
○ 2010년에는 전년대비 6.5% 성장하였으나, 2011년 유로지역 재정위기로 인해 다시 하락하여 2012년에는 2.3%를 기록하고 2013년과
2014년에는 반등하여 각각 2.9% 및 3.3%를 기록
○ 2015년에는 수출 둔화 지속과 내수 회복 지연 등으로 2.6%를 기록하였으며, 2016년에는 소폭 상승한 2.7%를 기록할 전망
□ 이러한 경제성장률 하락 추세는 앞으로도 지속될 것으로 예상됨. ○ 한국 잠재성장률이 2016년에는 약 2.9%이지만 2040년에는 1.7%까지
점진적으로 하락할 것으로 전망
○ 물론 성장잠재력 수준은 여러 유·무형의 요인들에 의해 결정이 되기 때문에 직접적인 관측이 어렵지만 여러 통계기법을 활용하여 잠재성장률을
추정하더라도, 향후 잠재성장률의 추세적인 하락은 불가피한 현상으로 판단됨. □ 경제성장률은 장기적으로 잠재성장률 추세와 비슷한 움직임을 보이겠지만
단기적으로는 경기적인 요인으로 인해 잠재성장률 수준을 하회하거나 상회할 수 있음. ○ 이에 따라 금융연구원의 경제전망 모형을 이용하여 향후
5년간인 2017∼2021년까지의 경제성장률을 추정함. □ 2017년 우리경제는 내수와 수출부진으로 2.5% 성장에 그칠 것으로 예상되며,
이후 2018년과 2019년에는 소폭 개선되어 각각 2.7%, 2.8%를 기록할 것으로 전망됨. (2) 금융산업 성장 전망
□ 2016년 1~3분기 중 한국에서 창출된 부가가치는 1,109.7조원이며 이중 금융업이 창출한 부가가치는 68.6조원으로 전체 부가가치중
6.18%를 차지함.'
- 지문이나 홍채와 같은 바이오 정보는 신체와 행동상의 특성을 통해 신원을 확인하는 기술로서, 기존에 저장해 둔 개인의 바이오 정보와 제시된 바이오
정보를 대조하는 방식이며, 또한 각 개인의 신체에 각인되어 특별한 신체적 변화가 없는 한 평생토록 바꿀 수 없기 때문에 개인정보 중에서도 가장
민감한 개인정보에 해당한다고 볼 수 있다. 그럼에도 불구하고, 개인정보 생명주기에 따라 수집과정에서 명확한 인식과 설명(개인정보 열람 정정
삭제 청구권) 없는 수집의 위험성, 저장관리 과정에서 해킹에 의한 유출 및 삭제, 이용 및 제공과정에서 다른 개인정보 데이터베이스와의 결합에
따른 오 남용(특성, 습관, 행동, 감정추론) 사례가 발생할 가능성이 존재하며, 원데이터의 복원이나 당사자 역추적 등도 가능하므로 개인정보
유출 가능성을 배제할 수는 없다. 본 진정사건과 관련하여 공익근무요원들에 대한 복무(근태)관리는 담당자가 지속적인 점검을 통하여 이를 감독할
수가 있으며, 부득이하게 출퇴근용 카드발급시스템을 사용한다 하더라도 충분히 예방할 수 있으므로 지문등록시스템 도입은 침해의 최소성에 맞지 않다고
판단된다.
- source_sentence: 장애인활동 지원에 관한 법률이 개정되어 무엇이 늘어났어?
sentences:
- '장애인 활동 지원에 관한 법률 시행 규칙 일부 개정령안
1. 개정이유
보건복지부 사회보장위원회 결정(2013.12월 의결)에 따라 2016년부터 사회보장사업의 선정기준을 전국가구평균소득에서 기준 중위소득으로 표준화가
추진되고 있어 기존 ‘전국가구평균소득’을 ‘기준중위소득’으로 변경하고, 「장애인복지법」상 서비스 지원 종합조사 도입으로 ‘기본급여’와 ‘추가급여’가
‘활동지원급여’로 통합됨에 따라 본인부담율을 조정하는 한편, 운영상 나타난 일부 미비사항을 보완하고자 함 2. 주요내용
가. 본인부담금의 산정방법을 기준중위소득으로 변경하고, ‘기본급여’와 ‘추가급여’가 ‘활동지원급여’인 단일서비스로 통합됨에 따라 본인부담율
조정(안 별표6)
나. 활동지원 응급안전서비스 신청서 개선(안 별지 제5의2서식)
3. 참고사항
가. 관계법령 : 생략
나. 예산조치 : 별도조치 필요 없음
다. 합의 : 해당기관 없음
라. 기타 : 1) 신ㆍ구조문대비표'
- '장애인활동 지원에 관한 법률 시행령 일부개정령안
1. 의결주문
장애인활동 지원에 관한 법률 시행령 일부개정령안을 별지와 같이 의결한다.
2. 제안이유 및 주요내용
활동지원급여를 신청할 수 있는 사람을 종전의 중증장애인에서 모든 장애인으로 확대하고 활동지원급여 신청의 조사를 「장애인복지법」에 따른 서비스
지원 종합조사로 대체하는 등의 내용으로 「장애인활동 지원에 관한 법률」이 개정됨에 따라, 이에 맞추어 활동지원급여 수급자격 심의기준 및 수급자격
갱신에 관한 사항 등을 정비하는 한편, 활동지원급여 수급자격 결정의 유효기간을 종전의 2년에서 3년으로 늘림으로써 수급자의 권익을 보다 안정적으로
보호하고, 과태료의 가중된 부과처분에 적용되는 차수를 명확히 하는 등 현행 제도의 운영상 나타난 일부 미비점을 개선ㆍ보완하려는 것임. 3.
주요토의과제
없음 4. 참고사항
가. 관계법령 : 생략
나. 예산조치 : 별도조치 필요 없음
다. 합의 : 해당기관 없음
라. 기타 : 1) 신ㆍ구조문대비표, 별첨
2) 입법예고(2018. 9. 3. ~ 10. 15.) 결과, 특기할 사항 없음'
- '과천과학관에서 먼저 배우는 MBL과 수학 융합탐구<br> 일정 및 내용<br>○ 중등반 MBL-수학 융합 탐구과정 <table><tbody><tr><td>시간</td><td>차시</td><td>내용</td><td>실험도구</td><td>비고</td></tr><tr><td
rowspan=''2''>09:30~ 10:50 </td><td>1</td><td rowspan=''2''>정비례 그래프 기울기 구하기 [수학만화+수학문제풀이]
MBL 온도센서를 이용한 시합(조별) 엑셀에서 추세선 분석-운동시합(시범)-정비례 그래프 보일법칙(시범)-반비례 그래프 연관 관계 시범 </td><td
rowspan=''2''>노트북,초음파센서,온도센서,기체압력센서,판자</td><td rowspan=''2''>교육동3층 제4실험실</td></tr><tr><td>2</td></tr><tr><td
rowspan=''2''>11:00~ 12:20 </td><td>3</td><td rowspan=''2''>함수의 변환 연습하기<br>(예:시간·위치
그래프→시간·속도 그래프) MBL 초음파센서를 이용한 등속도운동 실험 및 분석 </td><td rowspan=''2''>노트북</td><td
rowspan=''2''>교육동3층 제4실험실</td></tr><tr><td>4</td></tr><tr><td>12:20~13:10</td><td>
</td><td>점심 식사</td><td> </td><td>식당<br>(자유) </td></tr><tr><td rowspan=''2''>13:10~
14:30 </td><td>5</td><td rowspan=''2''>멘토 소개 및 자유탐구 활동 안내 기초과학관 탐방, 전시물 과학원리 탐구
MBL실험 시범 - 관련 전시물 : 신경전달 반응속도/거중기/뇌그림 </td><td rowspan=''2''>멘토, 심전도센서, 초음파 센서,힘센서</td><td
rowspan=''2''>과천과학관 기초과학관 </td></tr><tr><td>6</td></tr><tr><td rowspan=''2''>14:40~
16:00 </td><td>7</td><td rowspan=''2''>MBL을 활용하여 기초과학관의 전시물 속 과학원리 탐구실험하기 실험결과
발표 및 토론/참가확인서 발급 </td><td rowspan=''2''>시험지, 시상물품, 참가확인서</td><td rowspan=''2''>교육동3층
제4실험실</td></tr><tr><td>8</td></tr></tbody></table>'
- source_sentence: 인도네시아에서 아체특별자치주가 기록한 빈곤 퍼센트는 얼마나 돼?
sentences:
- 연말연시를 맞아 인도네시아 부유층이 초호화 파티를 벌이고 있는 것과는 달리 대재앙으로 살아남은 반다 아체주의 이재민들이 이번엔 굶주림으로 생사
기로에 놓였다. 고지대의 길거리 등에서 텐트도 없이 생활하고 있는 이재민들은 1일 강진과 지진해일의 대재앙이 일어난지 7일째를 맞았지만 국제사회가
지원하는 구호품을 거의 공급 받지 못하고 있다. 도로변 인도에서 밤을 새운 시슬라(38.주부)는 "반다 아체 공군기지에 외국 수송기들이 잇따라
도착하는데 구호품을 나눠준다는 소식은 없다"면서 "도대체 정부는 무엇을 하는지 모르겠다"고 분통을 터뜨렸다. 시슬라는 "악취를 막기 위한 마스크는
일부 나눠주는 것을 봤다"면서 "그러나 먹기 위한 비상식품이나 부상자들을 치료하기 위한 의약품, 텐트 등은 아예 볼 수가없다"고 말했다. 또
만디리(28.점원)는 "구호품이 속속 도착하고 있다는 말을 들었지만 어디를 가야 식량을 얻을 수 있는 지 도대체 알 수가 없다"면서 "통신
두절로 우리는 정보를 얻을 수 없다"고 하소연했다. 반다 아체 공군기지에 구호품을 싣고 도착한 호주 공군의 리처드(24) 상병은 "어제 밤
호주를 출발해 생수와 라면, 과자류를 갖고 왔다"면서 "배급은 인도네시아군 소관 사항"이라고 말했다. 인도네시아 들도 "반다 아체 인근 도시인
메단의 폴로냐공항이나 자카르타 할림 페르다나쿠수마 공군기지에는 국제사회가 지원해준 각종 구호품이 산더미처럼 쌓여 있다"고 말했다. 이에 대해
임시 주정부 청사가 마련된 반다 아체 주지사 관저의 한 관계자는 "인력도 부족하고 정신이 없다"면서 "특히 헬기나 트럭 등 운송장비가 부족해
대책이없다"고 털어놨다. 한편 심리적 공황에 빠진 반다 아체주 주민과는 달리 자카르타의 고급 호텔 등지에서는 부유층들이 연말연시를 맞아 초호화
파티를 벌여 대조를 이루고 있다고 인도네시아 현지 들이 전했다.
- 우리나라는 우리의 발전경험을 개도국과 공유하는 경제발전경험공유사업(KSP)이라는 대표적인 정책자문 사업을 실시 중이다. 2004년에 베트남과
우즈베키스탄에 대한 자문을 시작으로 10년 이상 수행해 오면서 대상국가와 주제들을 다각화하였다. 특히 경제발전과 거시금융경제 등 전체적인 발전담론
위주의 자문에서 구체적인 성장전략으로 자문 영역이 전환되는 추세에 있다. 그동안의 수행 경험을 바탕으로 하여 급변하는 국제사회 개발협력 정세와
일관된 우리나라 정책자문 사업의 방향성이 요구된다. 정책자문 컨설팅의 역할과 방향성 정립이 더욱 중요해진 이유이다. 경제발전의 조건이 선진국의
그것과 매우 다른 개도국의 지속가능한 발전을 촉진하기 위하여 개 도국은 선진국과는 차별화된 성장전략이 필요하다. 혁신역량을 바탕으로 한 산업과
무역의 역동성이 개도국의 지속가능하고 상생적인 발전을 유인하며, 따라서 개도국 혁신역량의 구축에 초점을 맞추는 협력모델이 중요하다. 주목할
점은 한 국가의 발전에 영향을 미치는 혁신이란 기술뿐만 아니라 제도나 조직 측면에서의 혁신 등 매우 다양한 측면을 포함한다는 것이다.
- 인도네시아 서북단 아체특별자치주(州) 주정부 건물 앞에 최근 축하 화환 10개가 줄지어 늘어섰다. 주민들이 자발적으로 보낸 화환들인데, 축하
메시지가 이채롭다. ‘수마트라에서 가장 가난한 지역이 된 걸 축하한다’ ‘빈곤 우승자가 된 주지사에게 감사하다’ ‘가난 1등 아체 축하’ 등이다.
발신자는 ‘아체 주민들’, ‘전 아첨꾼’이라고 돼 있다. 22일 드틱닷컴에 따르면 아체의 주도인 반다아체 주정부 청사 앞에 17일 다양한 화환이
전시됐다. 축하 형식을 갖췄지만 기실 모두 수마트라섬에 있는 10개 주 중에서 아체특별자치주가 가장 빈곤한 지역에 선정된 걸 비꼬는 내용이다.
경찰까지 출동했다. 실제 인도네시아 통계청(BPS)은 지난해 9월 기준 아체의 빈곤율이 15.43%로 수마트라섬에서 가장 높다고 최근 발표했다.
수마트라섬은 자바섬에 이어 인도네시아 2대 주요 섬이다. 인도네시아 전체 빈곤율은 10.19%이다. 지난해 3월 조사에서 아체는 수마트라섬에서
븡쿨루주에 이어 빈곤율 2위였다. 아체 주정부 관계자는 “신종 코로나바이러스 감염증(코로나19) 사태로 인한 결과”라며 “인도네시아 전체 빈곤율이
전년 동기 대비 0.97%포인트 증가한 걸 감안하면 같은 기간 0.42%포인트 상승한 아체는 양호한 편”이라고 해명했다. 아울러 아체 주정부는
“빈곤율을 줄이기 위한 정책 노력을 계속하고 있다”고 밝혔다. 그러면서 “아체는 2000년 (인도네시아 중앙정부와) 내전, 2004년 동남아
쓰나미로 인해 급속도로 가난해진 역사적 배경이 있기 때문에 두 배로 열심히 일해야 한다”고 덧붙였다. 정작 주민들은 핑계에 불과하다고 일축했다.
풍자 화환을 보낸 한 주민은 “코로나19로 인해 시위를 할 수 없으니 그 열망을 화환에 담아 보낸 것”이라며 “전염병보다 (주지사의) 행정
능력과 비효율적인 예산 집행이 문제”라고 꼬집었다. 2005년 특별자치주가 되면서 중앙정부로부터 받고 있는 특별자치기금 운용에 문제가 있다는
지적도 잇따랐다. 자치권을 인정받는 아체는 이슬람 관습법(샤리아)이 실질 지배하는 독특한 지역이다.
- source_sentence: 세종시에서 진행하는 메타버스 강연은 선착순으로 어느 정도나 되는 사람들을 모집해?
sentences:
- '문화예술전문인력 양성 및 지원 패러다임 전환 방향 모색 연구<br>4. 문화예술전문인력 양성 사업 실태 분석 및 시사점<br>1) 문화예술전문인력
양성 사업의 현황 및 특징<br>□ 문화예술전문인력 양성 사업 실태를 분석하는 틀로써 5개의 요소로 대분류하고, 각각의 중분류와 소분류 요소를
도출. 현재 시행 중인 문화예술전문인력 양성 사업(176개 사업)을 아래와 같은 분석 틀에 따라 지원 주체, 지원 목적, 지원 대상, 지원
방식, 지원 장르별로 분석함. <br>- 분석 틀은 사업 성격에 따라 상호배타성이 약한 경우도 있어 중복되는 경우가 상당수 있음. <br>
<table><tbody><tr><td>대분류</td><td>중분류</td><td>소분류</td></tr><tr><td rowspan=''3''>지원
주체</td><td>·중앙정부 및 소속/산하 기관</td><td>·문화체육관광부, 문화재청 <br>·문화체육관광부, 문화재청의 소속/산하 기관<br>-국립문화예술기관,
특수법인 형식의 문화예술기관, 민법상 법인 형식의 정부재정지원 문화예술기관 </td></tr><tr><td>·광역 및 기초 지자체 문화재단</td><td>·광역단위
문화재단 기초단위 문화재단</td></tr><tr><td>·민간 문화재단</td><td>·기업출연 문화재단</td></tr><tr><td>지원
목적</td><td>·창작자의 창작역량 강화<br>·현장 종사자의 직무역량 강화<br>·경력개발 지원 </td><td> </td></tr><tr><td
rowspan=''2''>지원 대상</td><td>·경력 단계(경력기간, 직급, 나이, 자격증, 선행과정 이수 등 기준 적용)</td><td>·예비인력
<br>·신진 <br>·중견 <br>·시니어 </td></tr><tr><td>·직능 구분</td><td>·창작·실연: 창작, 실연 <br>·기획·경영:
기획·학예, 교육, 경영, 기획·경영 전반, 연구·비평, 특수 <br>·기술: 무대기술, 보존·복원, 기타 장르별 기술직 <br>·관리·행정:
관리, 행정(공무원) </td></tr><tr><td rowspan=''3''>지원 방식</td><td>·창작지원연계</td><td>·국내외
레지던시 <br>·예술창작지원연계 </td></tr><tr><td>·교육·훈련</td><td>·집체교육·연수 <br>·워크숍·세미나 <br>·멘토링·코칭·컨설팅
<br>·온라인 기반 </td></tr><tr><td>·경력개발</td><td> ·해외연수·리서치 트립 <br>·인력배치지원 <br>·인턴십
</td></tr><tr><td>지원 장르</td><td>·시각예술<br>·공연예술<br>·전통예술<br>·문학<br>·융복합 장르(다원예술
등) </td><td> </td></tr></tbody></table> 문화예술전문인력 지원 실태 분석 틀'
- 세종시와 중소벤처기업부가 31일부터 일반시민이 자율주행버스에 탑승할 수 있는 BRT 대중교통 유상 서비스 실증을 본격 시작한다. 시는 자율주행
규제자유특구 사업을 통해 2020년 5월부터 주거단지 및 도심공원, 일반도로에서 자율주행차 실증을 진행하는 등 안전성 확보와 인프라를 구축하는데
주력해왔다. 특히 ㈜오토노머스에이투지는 안전점검 등을 거쳐 지난 3월부터 약 3달간 산학연클러스터지원센터-세종시청-세종시외버스터미널 등 6.3㎞
구간에서 자율주행버스 시범 운행을 해 왔다. 지난 5월부터는 시민체험단 27명을 대상으로 레벨4 수준에서 시속 50㎞까지 고속주행 기술을 점검하는
유상서비스를 사전 점검했다. 31일~7월30일 진행되는 이번 일반시민 대상 유상 서비스로 일반시민들은 500원 이하의 저렴한 비용으로 자율주행버스를
이용할 수 있게 된다. 자율주행버스는 안전요원 2명이 동승하며, 코로나19 확산 차단을 위한 방역 등 안전을 최우선 가치로 두고 매주 월~금요일
주 5일 운행한다. 정차정류장은 국책연구단지→소담동→세종시청→시외버스터미널 순이며, 1일 운행 횟수는 오전11시, 오후 2시, 3시, 4시
등 모두 4회다. 류제일 경제정책과장은 “세종시에 자율주행 대중교통 셔틀을 도입할 경우 교통체증 및 주차난 해소, 대기오염 저감 등이 기대된다”라며
“앞으로 자율주행 기술동향을 고려해 자율주행 대중교통 버스 도입을 검토해 나갈 계획”이라고 말했다.
- 세종시가 4월 과학의 달을 맞아 (재)세종테크노파크와 오는 22일 ‘스마트한 세종의 미래’를 주제로 실시간 강연을 연다. 이번 강연은 메타버스를
활용해 온라인 생중계로 자율주행차 전문가 강연이다. 메타버스는 가공, 추상을 의미하는 ‘메타(meta)’와 현실 세계를 의미하는 ‘유니버스(universe)’의
합성어로 3차원 가상세계를 의미한다. 특히 메타버스 가상세계에서 자율주행전문가가 참가자와 직접 자신의 아바타로 강연에 참여해, 첨단 과학기술을
체험하는 형식이다. 메타버스 강연은 45명을 선착순 모집하고, 신청은 16일까지 세종테크노파크 홈페이지와 세종시청 홈페이지에서 가능하다. 강연은
22일 오후 3시부터 1시간 동안이고, 메타버스 신청자 외에도 유튜브에서 ‘세종테크노파크’를 검색한 후 채널에 접속해 시청할 수 있다. 메타버스
강연참여자는 세종테크노파크로부터 접속 환경 적응을 위한 매뉴얼을 전달받고, 사전모임을 통해 아바타 개설 등 사전연습을 진행한다. 또 강연 당일에는
가상공간 내 강연 및 공연은 관람하고 강연자와 참석자간 소통을 통해 질의응답을 할 예정이다. 시 관계자는 “짧은 시간 동안 과학을 체험할 수
있는 특별한 기회를 마련하기 위해 메타버스라는 가상세계를 선택했다”며 “스마트한 세종의 미래에 시민 여러분들의 많은 관심과 참여 바란다”고
말했다. 한편 이번 행사는 과학기술정보통신부의 4월 과학의 달 ‘봄날의 과학산책’ 과 연계한 지역별 프로그램으로, 전국의 지역과학문화 거점센터들이
릴레이 형식으로 과학콘텐츠를 선보이고 있다. 시는 2021년 지역과학문화 거점센터로 선정돼 (재)세종테크노파크가 운영 중에 있다.
- source_sentence: 관광 교통 서비스 체계 구축 정책의 추진은 몇 단계로 나눠서 할 수 있을까?
sentences:
- 창의ᆞ혁신상품은 TV홈쇼핑으로 구매하세요! □ 미래부가 내세우는 공영TV홈쇼핑의 또 다른 차별화 포인트는 중소기업, 농축수산가공업체 등을 위한
종합 글로벌 유통 채널 구축의 구심점으로 공영TV홈쇼핑을 활용한다는 것이다. o 공영TV홈쇼핑은 인터넷, 모바일, 오프라인 매장을 연결하는
종합유통 채널 구축을 위한 시발점이 될 것이다. 심사과정에서 TV홈쇼핑에서 발생한 광고효과를 다른 유통채널을 통해 판매로 유도하는 종합 유통
채널 구축 전략을 평가하고, 중소기업ㆍ창의혁신기업ㆍ농어민 지원을 위해 기존에 운영되고 있는 유통채널과의 전략적 제휴 등도 추진해 나가도록 할
계획이다. o 또한, 공영TV홈쇼핑은 창의ㆍ혁신 상품, 중소기업 제품 등의 글로벌시장 진출을 지원하기 위한 기반을 구축할 것이다. 농식품부,
중기청등 관련 부처에서 추진 중인 해외진출 지원 사업 등과 연계하고, 이미 해외에서 TV홈쇼핑 채널을 운영 중인 기존 TV홈쇼핑 업체와의 상생
협력 등을 통해 해외 판로 개척 모델을 만들어 나가도록 유도할 계획이다. □ 미래부는 12월 12일 더케이호텔서울(서초구 양재동)에서 승인신청
요령 등에 대한 사업자 대상 설명회를 개최하여 공영TV홈쇼핑채널 신청을 희망하는 사업자들에게 자세한 안내를 할 예정이다. o 이후, 12월
29일부터 31일까지 3일간 사업자 신청 접수를 받고 시청자 의견청취, 심사위원회 운영 등의 심사 절차를 진행하여 2015년 1월에는 신설
공영TV홈쇼핑 사업자 선정을 마무리할 계획이다.
- 관광 교통 서비스 체계 정책 추진 주체로는 중앙 및 지방정부, 공공기관, 민간기관 등이 고려될 수 있다. 중앙정부 및 지방정부, 공공기관 중
연구기관은 정책을 추진하는 주체로서, 지방정부와 사업기관은 정책을 실행하는 주체로서, 민간 기관은 직접 사업을 추진하는 주체로서 참여할 수
있다. 관광 교통은 기존 교통시설 및 수단을 관광객이 이용하는 개념이기 때문에 정책 영역이 국토교통부, 문화체육관광부, 넓게는 해양수산부 등
여러 부처에 걸쳐 있다. 원활한 정책사업 추진을 위해서는 부처 간 협력이 필수적이며, 부처 간 협력 체계로는 협력적 개별사업추진, 공동사업추진,
사업추진 조직구성 등 세 가지 대안을 고려해볼 수 있다. 관광 교통 서비스 체계 구축 정책은 3단계로 구분하여 추진할 수 있다. 1단계는 2016년
2017년으로 설정하고자 하며, 이 시기는 관광 교통 정책 사업을 추진하기 위한 기반을 마련하는 단계이다. 2단계는 2018년부터 2020년까지
3년간으로 본격적인 정책 사업이 추진되는 시기이며, 3단계는 2021년 이후 정책사업의 효과가 창출되는 기간으로, 확장된 형태의 신규 사업을
발굴 및 추진할 수 있어야 한다.
- 관광교통 서비스 체계는 관광 활동을 위한 관광객의 이동 편의성과 효용을 최대화 하는 시스템을 뜻한다. 서비스 체계를 적용하는 영역은 관광 교통
정보, 관광교통수단, 관광교통 편의 서비스로 구분하여 볼 수 있다. 관광교통 정보는 관광 목적지에 도달하기 위해 필요한 관광교통 수단 및 관광교통
편의 서비스 등에 대한 종합적 정보를 뜻한다. 주요 관광자원과 관광 자원까지 이동하는 데 필요한 루트, 루트를 이동하기 위해 필요한 관광교통
수단과 비용, 관광교통 편의 서비스 등에 대한 정보를 모두 포함한다. 관광교통 수단은 출발지로부터 관광목적지를 연결하는 일반 및 특수교통수단을
뜻한다. 또한 교통 수단의 시간적, 공간적 연계 배치와 기반 시설로서 공항, 터미널, 역 또한 교통수단의 범위에 포함한다. 관광교통 편의 시스템은
교통수단의 이용을 보다 편리하게 하는 제도 및 서비스를 뜻한다. 관광교통 편의 서비스 영역에는 예약 할인, 그 밖의 제반 편의 서비스를 모두
포괄한다. 또한 교통수단의 이용은 물론 관광지 입장까지 아우르는 통합 패스 티켓, 바우처 등을 포함한다.
---
# SentenceTransformer based on seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32](https://huggingface.co/seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32](https://huggingface.co/seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-32) <!-- at revision 57cd045ca31ceb5228a887b562051a1655ccc30f -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
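Because the final module is `Normalize()`, every output embedding has unit L2 norm, so the card's cosine similarity reduces to a plain dot product. A toy NumPy sketch (4-d vectors standing in for the real 1024-d embeddings) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 4))
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # what Normalize() does

dot = emb @ emb.T                                       # plain dot products
norms = np.linalg.norm(emb, axis=1)
cos = dot / np.outer(norms, norms)                      # explicit cosine

print(np.allclose(dot, cos))  # True: on unit vectors the two are identical
```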
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs4096-full-50")
# Run inference
sentences = [
'관광 교통 서비스 체계 구축 정책의 추진은 몇 단계로 나눠서 할 수 있을까?',
'관광 교통 서비스 체계 정책 추진 주체로는 중앙 및 지방정부, 공공기관, 민간기관 등이 고려될 수 있다. 중앙정부 및 지방정부, 공공기관 중 연구기관은 정책을 추진하는 주체로서, 지방정부와 사업기관은 정책을 실행하는 주체로서, 민간 기관은 직접 사업을 추진하는 주체로서 참여할 수 있다. 관광 교통은 기존 교통시설 및 수단을 관광객이 이용하는 개념이기 때문에 정책 영역이 국토교통부, 문화체육관광부, 넓게는 해양수산부 등 여러 부처에 걸쳐 있다. 원활한 정책사업 추진을 위해서는 부처 간 협력이 필수적이며, 부처 간 협력 체계로는 협력적 개별사업추진, 공동사업추진, 사업추진 조직구성 등 세 가지 대안을 고려해볼 수 있다. 관광 교통 서비스 체계 구축 정책은 3단계로 구분하여 추진할 수 있다. 1단계는 2016년 2017년으로 설정하고자 하며, 이 시기는 관광 교통 정책 사업을 추진하기 위한 기반을 마련하는 단계이다. 2단계는 2018년부터 2020년까지 3년간으로 본격적인 정책 사업이 추진되는 시기이며, 3단계는 2021년 이후 정책사업의 효과가 창출되는 기간으로, 확장된 형태의 신규 사업을 발굴 및 추진할 수 있어야 한다.',
'관광교통 서비스 체계는 관광 활동을 위한 관광객의 이동 편의성과 효용을 최대화 하는 시스템을 뜻한다. 서비스 체계를 적용하는 영역은 관광 교통 정보, 관광교통수단, 관광교통 편의 서비스로 구분하여 볼 수 있다. 관광교통 정보는 관광 목적지에 도달하기 위해 필요한 관광교통 수단 및 관광교통 편의 서비스 등에 대한 종합적 정보를 뜻한다. 주요 관광자원과 관광 자원까지 이동하는 데 필요한 루트, 루트를 이동하기 위해 필요한 관광교통 수단과 비용, 관광교통 편의 서비스 등에 대한 정보를 모두 포함한다. 관광교통 수단은 출발지로부터 관광목적지를 연결하는 일반 및 특수교통수단을 뜻한다. 또한 교통 수단의 시간적, 공간적 연계 배치와 기반 시설로서 공항, 터미널, 역 또한 교통수단의 범위에 포함한다. 관광교통 편의 시스템은 교통수단의 이용을 보다 편리하게 하는 제도 및 서비스를 뜻한다. 관광교통 편의 서비스 영역에는 예약 할인, 그 밖의 제반 편의 서비스를 모두 포괄한다. 또한 교통수단의 이용은 물론 관광지 입장까지 아우르는 통합 패스 티켓, 바우처 등을 포함한다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
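Beyond pairwise similarity, the same embeddings support top-k retrieval by ranking corpus vectors against a query vector. The sketch below substitutes random unit vectors for real `model.encode(...)` output so it runs without downloading the model; with real embeddings the ranking logic is identical.

```python
import numpy as np

# Stand-in for model.encode(...): random unit vectors (the model's own
# embeddings are already L2-normalized by its Normalize() module).
rng = np.random.default_rng(42)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

corpus_emb = unit(rng.normal(size=(5, 1024)))
# Query close to corpus document 2, lightly perturbed.
query_emb = unit(corpus_emb[2] + 0.01 * rng.normal(size=1024))

scores = corpus_emb @ query_emb        # cosine similarity of unit vectors
top_k = np.argsort(-scores)[:3]        # indices of the 3 best matches
print(top_k[0])                        # document 2 ranks first
```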
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 4096
- `learning_rate`: 3e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4096
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
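The `linear` scheduler with `warmup_ratio: 0.05` means the learning rate climbs from 0 to the peak `3e-05` over the first 5% of training steps, then decays linearly back to 0. A minimal sketch of that shape (the exact warmup-step rounding inside `transformers` may differ slightly):

```python
def linear_schedule_lr(step, total_steps, peak_lr=3e-5, warmup_ratio=0.05):
    """Learning rate at `step` under linear warmup + linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)          # warmup ramp
    remaining = total_steps - step
    return peak_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# Shape check over a hypothetical 100-step run.
lrs = [linear_schedule_lr(s, 100) for s in range(101)]
print(lrs[0], max(lrs), lrs[100])
```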
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0370 | 1 | 0.34 |
| 0.0741 | 2 | 0.3401 |
| 0.1111 | 3 | 0.3432 |
| 0.1481 | 4 | 0.3324 |
| 0.1852 | 5 | 0.3348 |
| 0.2222 | 6 | 0.3267 |
| 0.2593 | 7 | 0.3268 |
| 0.2963 | 8 | 0.3311 |
| 0.3333 | 9 | 0.3206 |
| 0.3704 | 10 | 0.3145 |
| 0.4074 | 11 | 0.3092 |
| 0.4444 | 12 | 0.3028 |
| 0.4815 | 13 | 0.3126 |
| 0.5185 | 14 | 0.2881 |
| 0.5556 | 15 | 0.3019 |
| 0.5926 | 16 | 0.2978 |
| 0.6296 | 17 | 0.293 |
| 0.6667 | 18 | 0.2836 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
자율주행버스는 안전요원 2명이 동승하며, 코로나19 확산 차단을 위한 방역 등 안전을 최우선 가치로 두고 매주 월~금요일 주 5일 운행한다. 정차정류장은 국책연구단지→소담동→세종시청→시외버스터미널 순이며, 1일 운행 횟수는 오전11시, 오후 2시, 3시, 4시 등 모두 4회다. 류제일 경제정책과장은 “세종시에 자율주행 대중교통 셔틀을 도입할 경우 교통체증 및 주차난 해소, 대기오염 저감 등이 기대된다”라며 “앞으로 자율주행 기술동향을 고려해 자율주행 대중교통 버스 도입을 검토해 나갈 계획”이라고 말했다.", "세종시가 4월 과학의 달을 맞아 (재)세종테크노파크와 오는 22일 ‘스마트한 세종의 미래’를 주제로 실시간 강연을 연다. 이번 강연은 메타버스를 활용해 온라인 생중계로 자율주행차 전문가 강연이다. 메타버스는 가공, 추상을 의미하는 ‘메타(meta)’와 현실 세계를 의미하는 ‘유니버스(universe)’의 합성어로 3차원 가상세계를 의미한다. 특히 메타버스 가상세계에서 자율주행전문가가 참가자와 직접 자신의 아바타로 강연에 참여해, 첨단 과학기술을 체험하는 형식이다. 메타버스 강연은 45명을 선착순 모집하고, 신청은 16일까지 세종테크노파크 홈페이지와 세종시청 홈페이지에서 가능하다. 강연은 22일 오후 3시부터 1시간 동안이고, 메타버스 신청자 외에도 유튜브에서 ‘세종테크노파크’를 검색한 후 채널에 접속해 시청할 수 있다. 메타버스 강연참여자는 세종테크노파크로부터 접속 환경 적응을 위한 매뉴얼을 전달받고, 사전모임을 통해 아바타 개설 등 사전연습을 진행한다. 또 강연 당일에는 가상공간 내 강연 및 공연은 관람하고 강연자와 참석자간 소통을 통해 질의응답을 할 예정이다. 시 관계자는 “짧은 시간 동안 과학을 체험할 수 있는 특별한 기회를 마련하기 위해 메타버스라는 가상세계를 선택했다”며 “스마트한 세종의 미래에 시민 여러분들의 많은 관심과 참여 바란다”고 말했다. 한편 이번 행사는 과학기술정보통신부의 4월 과학의 달 ‘봄날의 과학산책’ 과 연계한 지역별 프로그램으로, 전국의 지역과학문화 거점센터들이 릴레이 형식으로 과학콘텐츠를 선보이고 있다. 시는 2021년 지역과학문화 거점센터로 선정돼 (재)세종테크노파크가 운영 중에 있다."]}, {"source_sentence": "관광 교통 서비스 체계 구축 정책의 추진은 몇 단계로 나눠서 할 수 있을까?", "sentences": ["창의ᆞ혁신상품은 TV홈쇼핑으로 구매하세요! □ 미래부가 내세우는 공영TV홈쇼핑의 또 다른 차별화 포인트는 중소기업, 농축수산가공업체 등을 위한 종합 글로벌 유통 채널 구축의 구심점으로 공영TV홈쇼핑을 활용한다는 것이다. o 공영TV홈쇼핑은 인터넷, 모바일, 오프라인 매장을 연결하는 종합유통 채널 구축을 위한 시발점이 될 것이다. 심사과정에서 TV홈쇼핑에서 발생한 광고효과를 다른 유통채널을 통해 판매로 유도하는 종합 유통 채널 구축 전략을 평가하고, 중소기업ㆍ창의혁신기업ㆍ농어민 지원을 위해 기존에 운영되고 있는 유통채널과의 전략적 제휴 등도 추진해 나가도록 할 계획이다. o 또한, 공영TV홈쇼핑은 창의ㆍ혁신 상품, 중소기업 제품 등의 글로벌시장 진출을 지원하기 위한 기반을 구축할 것이다. 농식품부, 중기청등 관련 부처에서 추진 중인 해외진출 지원 사업 등과 연계하고, 이미 해외에서 TV홈쇼핑 채널을 운영 중인 기존 TV홈쇼핑 업체와의 상생 협력 등을 통해 해외 판로 개척 모델을 만들어 나가도록 유도할 계획이다. □ 미래부는 12월 12일 더케이호텔서울(서초구 양재동)에서 승인신청 요령 등에 대한 사업자 대상 설명회를 개최하여 공영TV홈쇼핑채널 신청을 희망하는 사업자들에게 자세한 안내를 할 예정이다. o 이후, 12월 29일부터 31일까지 3일간 사업자 신청 접수를 받고 시청자 의견청취, 심사위원회 운영 등의 심사 절차를 진행하여 2015년 1월에는 신설 공영TV홈쇼핑 사업자 선정을 마무리할 계획이다.", "관광 교통 서비스 체계 정책 추진 주체로는 중앙 및 지방정부, 공공기관, 민간기관 등이 고려될 수 있다. 
중앙정부 및 지방정부, 공공기관 중 연구기관은 정책을 추진하는 주체로서, 지방정부와 사업기관은 정책을 실행하는 주체로서, 민간 기관은 직접 사업을 추진하는 주체로서 참여할 수 있다. 관광 교통은 기존 교통시설 및 수단을 관광객이 이용하는 개념이기 때문에 정책 영역이 국토교통부, 문화체육관광부, 넓게는 해양수산부 등 여러 부처에 걸쳐 있다. 원활한 정책사업 추진을 위해서는 부처 간 협력이 필수적이며, 부처 간 협력 체계로는 협력적 개별사업추진, 공동사업추진, 사업추진 조직구성 등 세 가지 대안을 고려해볼 수 있다. 관광 교통 서비스 체계 구축 정책은 3단계로 구분하여 추진할 수 있다. 1단계는 2016년 2017년으로 설정하고자 하며, 이 시기는 관광 교통 정책 사업을 추진하기 위한 기반을 마련하는 단계이다. 2단계는 2018년부터 2020년까지 3년간으로 본격적인 정책 사업이 추진되는 시기이며, 3단계는 2021년 이후 정책사업의 효과가 창출되는 기간으로, 확장된 형태의 신규 사업을 발굴 및 추진할 수 있어야 한다.", "관광교통 서비스 체계는 관광 활동을 위한 관광객의 이동 편의성과 효용을 최대화 하는 시스템을 뜻한다. 서비스 체계를 적용하는 영역은 관광 교통 정보, 관광교통수단, 관광교통 편의 서비스로 구분하여 볼 수 있다. 관광교통 정보는 관광 목적지에 도달하기 위해 필요한 관광교통 수단 및 관광교통 편의 서비스 등에 대한 종합적 정보를 뜻한다. 주요 관광자원과 관광 자원까지 이동하는 데 필요한 루트, 루트를 이동하기 위해 필요한 관광교통 수단과 비용, 관광교통 편의 서비스 등에 대한 정보를 모두 포함한다. 관광교통 수단은 출발지로부터 관광목적지를 연결하는 일반 및 특수교통수단을 뜻한다. 또한 교통 수단의 시간적, 공간적 연계 배치와 기반 시설로서 공항, 터미널, 역 또한 교통수단의 범위에 포함한다. 관광교통 편의 시스템은 교통수단의 이용을 보다 편리하게 하는 제도 및 서비스를 뜻한다. 관광교통 편의 서비스 영역에는 예약 할인, 그 밖의 제반 편의 서비스를 모두 포괄한다. 또한 교통수단의 이용은 물론 관광지 입장까지 아우르는 통합 패스 티켓, 바우처 등을 포함한다."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,410 |
karsar/paraphrase-multilingual-MiniLM-L12-hu-v3
|
karsar
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1207229",
"loss:MultipleNegativesRankingLoss",
"hu",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-22T09:05:14Z |
2025-01-22T09:08:40+00:00
| 140 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
language:
- hu
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1207229
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: a fara név azt jelenti
sentences:
- Utcafesztiváltól, egyhetes fesztiválon át havas fesztiválig! Ezek a fesztiválok
lehetőséget kínálnak arra, hogy egy helyszínen megkóstolják a Bend sörfőzdék kínálatát
– valójában több helyszínen egész évben! Kóstoljon meg egy jó főzetet, hallgasson
néhány jó dallamot, és találkozzon a sörfőzőkkel! Itt a sör ideje!
- 'Fara /fara/ [2 szótag.] mint lánynév középangol és arab eredetű, a Fara név jelentése
kedves, kellemes. A Fara a Farrah (közép angol, arab) változata: az angol fair
szóból származik. Hasonlítsa össze a Fura vezetéknevet.'
- 'A Fara név angolul azt jelenti: utazó. A Fara név angol névből származik. A Fara
nevet leggyakrabban lánynévként vagy női névként használják.'
- source_sentence: aki a kis Willie-t énekelte
sentences:
- Valaki énekel
- William Edward Little Willie John (1937. november 15. – 1968. május 26.) amerikai
rock 'n' roll és R&B énekes, aki az 1950-es években és az 1960-as évek elején
lépett fel. Leginkább a lemezlistákon elért sikereiről ismert, olyan dalokkal,
mint az All Around the World (1955), a Need Your Love So Bad (1956) és a Fever
(1956).
- A dal a Little Willy lenne, Sweet előadásában. Mert a kis Willy, Willy nem megy
haza. De nem lökheti körbe Willyt. Willy nem megy, próbáld meg mindenkinek elmondani,
de nem. Kicsi Willy, Willy nem megy haza.
- source_sentence: Amikor 1901-ben megházasodott, feleségével (Olga Knipper, a Moszkvai
Művészeti Színház munkatársa) közvetlenül a szertartásról mentek nászútra egy
szanatóriumba.
sentences:
- Amikor 1901-ben feleségül vette a feleségét, a szertartásról egyenesen a nászútra
mentek.
- 'Ez egyenlő a hullám sebességével, osztva a frekvenciával. A hullámhosszt méter
egységekben (m) fejezzük ki. λ = hullámhossz, a hullámhegyek közötti távolság
(m). v = hullámsebesség, a hullámok mozgásának sebessége egy irányban (m/s). f
= frekvencia, a hullámhegyek, amelyek egy bizonyos idő alatt átmennek egy ponton
(ciklus/s vagy Hz). A hullámhossz képletre vonatkozó kérdések: 1) A hang sebessége
körülbelül 340 m/s. Keresse meg egy olyan hanghullám hullámhosszát, amelynek frekvenciája
20,0 ciklus/másodperc (az emberi hallás alsó határa). Válasz: A hullámsebesség
v = 340 m/s, és a frekvencia f = 20,0 ciklus/s. = hullámhossz, a hullámhegyek
közötti távolság (m). v = hullámsebesség, az a sebesség, amellyel a hullámok egy
irányban mozognak (m/s). f = frekvencia, a hullámhegyek, amelyek egy bizonyos
idő alatt átmennek egy ponton (ciklus/s vagy Hz). A hullámhossz képletre vonatkozó
kérdések: 1) A hangsebesség körülbelül 340 m/s.'
- A felesége soha nem járt szanatóriumba.
- source_sentence: aki Elizabeth Blackwell volt
sentences:
- A módosítás definíciója valaminek a megváltoztatása, kiegészítése vagy átfogalmazása,
leggyakrabban javítási szándékkal. Példa a módosításra az Egyesült Államok alkotmányának
módosításai. 1 jobb változás; javulás. hibák, hibák stb. javítása.
- Elizabeth Blackwell (1707[1] – 1758) skót botanikai illusztrátor és író volt,
aki leginkább az 1737 és 1739 között megjelent A Curious Herbal tányérjainak művészeként
és metszőjeként volt ismert.
- Elizabeth Blackwell volt az első nő Amerikában, aki orvosi diplomát kapott. Úttörő
szerepet vállalt a nők orvostudományi oktatásában, és saját orvosi főiskolát nyitott
a nők számára. Ő volt az első nő, akit felvették a brit orvosi nyilvántartásba,
lehetővé téve számára, hogy az Egyesült Királyságban és az Egyesült Államokban
is praktizáljon.
- source_sentence: a sellő szindróma genetikai okai
sentences:
- 'Rfcamat válasza. Bizalom szavazat: 459. Ha sellő-szindrómásod van, akkor vele
születtél volna, és inkább hasadt volna a lábad, vagy mindkettőt amputálták volna.
A sellőszindróma oka a test alsó részének (lábainak) oxigén- és tápanyaghiánya
a keringési rendszer problémája miatt.További információ az alábbi linken.a sellő
szindrómát nem kaphatja meg. Ez egy veleszületett állapot, ami azt jelenti, hogy
vele kell születned ahhoz, hogy meglegyen. A betegségben szenvedő személy nem
sellő, csak arról van szó, hogy a lábai összeforrtak. Számos belső szerv hiányzik
vagy deformálódott.'
- Vezessen be lágy, nyájas ételeket, például pudingot, almaszószt vagy joghurtot.
A krémes anyagokat könnyű lenyelni, különösebb fájdalom nélkül. Lassan adjon be
több ételt, amint a torokfájás javulni kezd. A sült gyümölcsök és zöldségek, például
sült alma, sült körte és sült sárgarépa jó választás köretekhez. A burgonyapüré,
az őszi tök, a sima tészta és a rizs is ideális lágy ételek. Ezen kívül zöldséget,
tésztát és/vagy tésztát tartalmazó levesek vagy a puha húsdarabok egészséges választás
a mandulagyulladásban szenvedő betegek számára. Válasszon olyan szilárd ételeket,
amelyek nem irritálják a torkát, mint például a sült csirke, marhasült, teljes
kiőrlésű kenyerek és egész gyümölcsök. A kemény kekszek, a pizzahéjak, a ropogós
kekszek és a ropogtatnivalók túl kemények és ropogósak ahhoz, hogy torokfájása
élvezhesse. Őrizze meg ezeket az ételeket, amíg teljesen felépül.
- 1 A sellő-szindróma annak a következménye is lehet, hogy az anya sugárzásnak és
más környezeti hatásoknak van kitéve, amelyek a magzat normális fejlődésében részt
vevő gének mutációit okozták. 2 Spontán mutációk vagy a magzatban természetesen
előforduló mutációk is okozhatták a születési rendellenességet. Kutatásokra van
szükség ahhoz, hogy kiderítsük a sellőszindróma genetikai, biológiai vagy környezeti
okait. A sellő szindróma kezelése. Ha a két láb csak a bőrön keresztül olvadt
össze, és a három fő csont teljesen és megfelelően kialakult, műtétet alkalmaznak
a két láb szétválasztására.
model-index:
- name: paraphrase-multilingual-MiniLM-L12-hu-v3
results:
- task:
type: triplet
name: Triplet
dataset:
name: all triplet dev
type: all-triplet-dev
metrics:
- type: cosine_accuracy
value: 0.785140562248996
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all triplet test
type: all-triplet-test
metrics:
- type: cosine_accuracy
value: 0.795494077694028
name: Cosine Accuracy
---
# paraphrase-multilingual-MiniLM-L12-hu-v3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** hu
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
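The `Pooling` layer above has `pooling_mode_mean_tokens: True`, i.e. the sentence embedding is the average of the token embeddings. A minimal sketch of masked mean pooling with toy numbers (illustrative only — the attention-mask handling is the usual convention, not code taken from this model):

```python
def masked_mean_pooling(token_embeddings, attention_mask):
    # Average token vectors, counting only positions where the mask is 1,
    # so padding tokens do not distort the sentence embedding.
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            for j in range(dim):
                sums[j] += vec[j]
            count += 1
    return [s / count for s in sums]

# Three "token" vectors of dimension 2; the last is padding and is masked out.
tokens = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(masked_mean_pooling(tokens, mask))  # → [2.0, 3.0]
```

In the real model the token embeddings are 384-dimensional and come from the underlying `BertModel`; the pooled vector is what `model.encode` returns.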
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("karsar/paraphrase-multilingual-MiniLM-L12-hu-v3")
# Run inference
sentences = [
'a sellő szindróma genetikai okai',
'Rfcamat válasza. Bizalom szavazat: 459. Ha sellő-szindrómásod van, akkor vele születtél volna, és inkább hasadt volna a lábad, vagy mindkettőt amputálták volna. A sellőszindróma oka a test alsó részének (lábainak) oxigén- és tápanyaghiánya a keringési rendszer problémája miatt.További információ az alábbi linken.a sellő szindrómát nem kaphatja meg. Ez egy veleszületett állapot, ami azt jelenti, hogy vele kell születned ahhoz, hogy meglegyen. A betegségben szenvedő személy nem sellő, csak arról van szó, hogy a lábai összeforrtak. Számos belső szerv hiányzik vagy deformálódott.',
'1 A sellő-szindróma annak a következménye is lehet, hogy az anya sugárzásnak és más környezeti hatásoknak van kitéve, amelyek a magzat normális fejlődésében részt vevő gének mutációit okozták. 2 Spontán mutációk vagy a magzatban természetesen előforduló mutációk is okozhatták a születési rendellenességet. Kutatásokra van szükség ahhoz, hogy kiderítsük a sellőszindróma genetikai, biológiai vagy környezeti okait. A sellő szindróma kezelése. Ha a két láb csak a bőrön keresztül olvadt össze, és a három fő csont teljesen és megfelelően kialakult, műtétet alkalmaznak a két láb szétválasztására.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
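Since the card lists cosine similarity as the similarity function, `model.similarity` above reduces to the cosine of the angle between embedding vectors. A minimal pure-Python sketch of that computation (no sentence-transformers needed; the 4-dimensional vectors are made-up toy embeddings standing in for real 384-dimensional model outputs):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: doc_close is nearly parallel to the query, doc_far is not.
query = [0.1, 0.3, -0.2, 0.8]
doc_close = [0.1, 0.25, -0.15, 0.9]   # similarity near 1
doc_far = [-0.8, 0.1, 0.7, -0.1]      # much lower (here negative) similarity

print(round(cosine_similarity(query, doc_close), 3))
print(round(cosine_similarity(query, doc_far), 3))
```

Ranking candidate sentences by this score against a query embedding is exactly what semantic search with this model amounts to.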
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `all-triplet-dev` and `all-triplet-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | all-triplet-dev | all-triplet-test |
|:--------------------|:----------------|:-----------------|
| **cosine_accuracy** | **0.7851** | **0.7955** |
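For context, `cosine_accuracy` is the fraction of (anchor, positive, negative) triplets where the anchor is closer — by cosine similarity — to the positive than to the negative. A minimal sketch of that metric with toy vectors (illustrative only, not the actual `TripletEvaluator` code):

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def triplet_cosine_accuracy(triplets):
    # A triplet counts as correct when the anchor is more similar to the
    # positive than to the negative.
    correct = sum(
        1 for anchor, pos, neg in triplets
        if cos_sim(anchor, pos) > cos_sim(anchor, neg)
    )
    return correct / len(triplets)

# Toy 3-dimensional embeddings standing in for real 384-dimensional outputs.
triplets = [
    ([1.0, 0.0, 0.1], [0.9, 0.1, 0.1], [0.0, 1.0, 0.0]),  # correct
    ([0.0, 1.0, 0.0], [0.1, 0.9, 0.0], [1.0, 0.0, 0.0]),  # correct
    ([1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.9, 0.0]),  # incorrect
]
print(triplet_cosine_accuracy(triplets))  # → 2/3
```

So the reported 0.7851 / 0.7955 means roughly 79–80% of held-out Hungarian triplets are ranked correctly.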
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,207,229 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.64 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 58.58 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 57.82 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------|:----------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Megfordult, és előhúzta a kardját.</code> | <code>A kard megrajzolták.</code> | <code>A férfi ott hagyta a kardját, ahol volt.</code> |
| <code>Egy férfi, aki egy betonfalnak támaszkodik, karjait felül támasztja, az erkélyre néz.</code> | <code>Egy férfi a falnak támaszkodik.</code> | <code>Egy férfi egy fafalnak támaszkodik.</code> |
| <code>A nő a szabadban van.</code> | <code>Nő egy ruhában sétál át a hídon.</code> | <code>Egy nő a levegőben lévő lábával harcművészeti mozdulatot hajt végre egy edzőteremben, miközben öt csapattársa vagy versenyzője néz rá.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
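Conceptually, `MultipleNegativesRankingLoss` turns each batch into a classification problem: for a given anchor, the scaled cosine similarities to all positives in the batch act as logits, and cross-entropy pushes the matching positive's logit above the rest (with an explicit `negative` column, as in this dataset, those negatives join the candidate set too — omitted below for brevity). A rough pure-Python sketch under those simplifications, with `scale=20.0` mirroring the config above and made-up toy vectors:

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def mnr_loss(anchors, positives, scale=20.0):
    # For each anchor i, the "logits" are scaled similarities to every
    # positive in the batch; the target class is the matching index i.
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [scale * cos_sim(a, p) for p in positives]
        log_sum_exp = math.log(sum(math.exp(l) for l in logits))
        total += log_sum_exp - logits[i]  # cross-entropy term for target i
    return total / len(anchors)

anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]   # each aligned with its own anchor
print(round(mnr_loss(anchors, positives), 4))
```

When every anchor already points at its own positive the loss is near zero; shuffling the pairs drives it up, which is what the optimizer exploits during training.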
### Evaluation Dataset
#### json
* Dataset: json
* Size: 1,207,229 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 17.92 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 59.36 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 57.86 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Az emberek nézik, amint egy zenész gitározik.</code> | <code>egy gitáros játszik az embereknek</code> | <code>Az emberek egy autóroncsot néznek.</code> |
| <code>hány csepp van egy ml-ben</code> | <code>Egy szabványos szemcseppentő 0,05 ml-t adagol cseppenként, ami azt jelenti, hogy 1 milliliter gyógyszerben 20 csepp van. Számoljuk ki: egy 5 ml-es üvegben 100, a 10 ml-es üvegben 200 adag van. (A legtöbb szemcsepp receptet 5 vagy 10 ml-es üvegekben adják ki.) A párolgás nem jelent nagy problémát, ha a kupakot minden alkalmazás után vissza kell cserélni. 30 napos hónapra számítva a napi egyszeri cseppek és a napi kétszeri cseppek egy 5 ml-es üvegben könnyen kitartanak egy hónapig. Egy 10 ml-es palack általában nagyobb adagok befogadására alkalmas. Íme, egy utolsó tipp.</code> | <code>Körülbelül 15-20 csepp van egy ml-ben. A folyadék viszkozitása megváltoztatja ezt a választ. Gondolhatja, hogy egy 5 ml-es üvegben 80-100 csepp van.</code> |
| <code>a szövetségi tartalékot milyen jogszabály hozta létre</code> | <code>Az „1913. évi Federal Reserve Act” MEGHATÁROZÁSA. Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba. Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba.</code> | <code>Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | all-triplet-dev_cosine_accuracy | all-triplet-test_cosine_accuracy |
|:------:|:------:|:-------------:|:---------------:|:-------------------------------:|:--------------------------------:|
| 0 | 0 | - | - | 0.6579 | - |
| 0.0007 | 100 | 1.0 | - | - | - |
| 0.0014 | 200 | 0.9771 | - | - | - |
| 0.0020 | 300 | 1.053 | - | - | - |
| 0.0027 | 400 | 0.887 | - | - | - |
| 0.0034 | 500 | 0.9726 | - | - | - |
| 0.0041 | 600 | 0.9072 | - | - | - |
| 0.0047 | 700 | 1.0523 | - | - | - |
| 0.0054 | 800 | 0.9033 | - | - | - |
| 0.0061 | 900 | 0.9774 | - | - | - |
| 0.0068 | 1000 | 0.8418 | - | - | - |
| 0.0074 | 1100 | 0.9079 | - | - | - |
| 0.0081 | 1200 | 0.7952 | - | - | - |
| 0.0088 | 1300 | 0.9232 | - | - | - |
| 0.0095 | 1400 | 0.8148 | - | - | - |
| 0.0101 | 1500 | 0.9004 | - | - | - |
| 0.0108 | 1600 | 0.8553 | - | - | - |
| 0.0115 | 1700 | 0.8049 | - | - | - |
| 0.0122 | 1800 | 0.7216 | - | - | - |
| 0.0128 | 1900 | 0.7598 | - | - | - |
| 0.0135 | 2000 | 0.802 | - | - | - |
| 0.0142 | 2100 | 0.879 | - | - | - |
| 0.0149 | 2200 | 0.8042 | - | - | - |
| 0.0156 | 2300 | 0.7186 | - | - | - |
| 0.0162 | 2400 | 0.7569 | - | - | - |
| 0.0169 | 2500 | 0.7585 | - | - | - |
| 0.0176 | 2600 | 0.7419 | - | - | - |
| 0.0183 | 2700 | 0.6902 | - | - | - |
| 0.0189 | 2800 | 0.7811 | - | - | - |
| 0.0196 | 2900 | 0.6972 | - | - | - |
| 0.0203 | 3000 | 0.6638 | - | - | - |
| 0.0210 | 3100 | 0.6797 | - | - | - |
| 0.0216 | 3200 | 0.6809 | - | - | - |
| 0.0223 | 3300 | 0.7417 | - | - | - |
| 0.0230 | 3400 | 0.7048 | - | - | - |
| 0.0237 | 3500 | 0.6981 | - | - | - |
| 0.0243 | 3600 | 0.6724 | - | - | - |
| 0.0250 | 3700 | 0.635 | - | - | - |
| 0.0257 | 3800 | 0.6869 | - | - | - |
| 0.0264 | 3900 | 0.6868 | - | - | - |
| 0.0270 | 4000 | 0.658 | - | - | - |
| 0.0277 | 4100 | 0.6692 | - | - | - |
| 0.0284 | 4200 | 0.6254 | - | - | - |
| 0.0291 | 4300 | 0.7114 | - | - | - |
| 0.0297 | 4400 | 0.6143 | - | - | - |
| 0.0304 | 4500 | 0.6775 | - | - | - |
| 0.0311 | 4600 | 0.6419 | - | - | - |
| 0.0318 | 4700 | 0.6887 | - | - | - |
| 0.0325 | 4800 | 0.6529 | - | - | - |
| 0.0331 | 4900 | 0.6365 | - | - | - |
| 0.0338 | 5000 | 0.6158 | 0.6443 | 0.7006 | - |
| 0.0345 | 5100 | 0.6508 | - | - | - |
| 0.0352 | 5200 | 0.6424 | - | - | - |
| 0.0358 | 5300 | 0.6766 | - | - | - |
| 0.0365 | 5400 | 0.6487 | - | - | - |
| 0.0372 | 5500 | 0.6886 | - | - | - |
| 0.0379 | 5600 | 0.6211 | - | - | - |
| 0.0385 | 5700 | 0.6523 | - | - | - |
| 0.0392 | 5800 | 0.6377 | - | - | - |
| 0.0399 | 5900 | 0.6524 | - | - | - |
| 0.0406 | 6000 | 0.6028 | - | - | - |
| 0.0412 | 6100 | 0.6466 | - | - | - |
| 0.0419 | 6200 | 0.6373 | - | - | - |
| 0.0426 | 6300 | 0.6434 | - | - | - |
| 0.0433 | 6400 | 0.6131 | - | - | - |
| 0.0439 | 6500 | 0.6133 | - | - | - |
| 0.0446 | 6600 | 0.6323 | - | - | - |
| 0.0453 | 6700 | 0.6384 | - | - | - |
| 0.0460 | 6800 | 0.6757 | - | - | - |
| 0.0467 | 6900 | 0.6366 | - | - | - |
| 0.0473 | 7000 | 0.6154 | - | - | - |
| 0.0480 | 7100 | 0.6554 | - | - | - |
| 0.0487 | 7200 | 0.6584 | - | - | - |
| 0.0494 | 7300 | 0.6527 | - | - | - |
| 0.0500 | 7400 | 0.5794 | - | - | - |
| 0.0507 | 7500 | 0.629 | - | - | - |
| 0.0514 | 7600 | 0.6272 | - | - | - |
| 0.0521 | 7700 | 0.6614 | - | - | - |
| 0.0527 | 7800 | 0.6511 | - | - | - |
| 0.0534 | 7900 | 0.5902 | - | - | - |
| 0.0541 | 8000 | 0.6243 | - | - | - |
| 0.0548 | 8100 | 0.5976 | - | - | - |
| 0.0554 | 8200 | 0.6198 | - | - | - |
| 0.0561 | 8300 | 0.6478 | - | - | - |
| 0.0568 | 8400 | 0.6167 | - | - | - |
| 0.0575 | 8500 | 0.6635 | - | - | - |
| 0.0581 | 8600 | 0.6189 | - | - | - |
| 0.0588 | 8700 | 0.5938 | - | - | - |
| 0.0595 | 8800 | 0.6059 | - | - | - |
| 0.0602 | 8900 | 0.6043 | - | - | - |
| 0.0609 | 9000 | 0.5994 | - | - | - |
| 0.0615 | 9100 | 0.6122 | - | - | - |
| 0.0622 | 9200 | 0.6553 | - | - | - |
| 0.0629 | 9300 | 0.5798 | - | - | - |
| 0.0636 | 9400 | 0.6315 | - | - | - |
| 0.0642 | 9500 | 0.7163 | - | - | - |
| 0.0649 | 9600 | 0.618 | - | - | - |
| 0.0656 | 9700 | 0.6174 | - | - | - |
| 0.0663 | 9800 | 0.6291 | - | - | - |
| 0.0669 | 9900 | 0.6296 | - | - | - |
| 0.0676 | 10000 | 0.6421 | 0.6147 | 0.7206 | - |
| 0.0683 | 10100 | 0.6046 | - | - | - |
| 0.0690 | 10200 | 0.5878 | - | - | - |
| 0.0696 | 10300 | 0.6091 | - | - | - |
| 0.0703 | 10400 | 0.6736 | - | - | - |
| 0.0710 | 10500 | 0.6205 | - | - | - |
| 0.0717 | 10600 | 0.5922 | - | - | - |
| 0.0723 | 10700 | 0.5989 | - | - | - |
| 0.0730 | 10800 | 0.614 | - | - | - |
| 0.0737 | 10900 | 0.6304 | - | - | - |
| 0.0744 | 11000 | 0.6241 | - | - | - |
| 0.0751 | 11100 | 0.5657 | - | - | - |
| 0.0757 | 11200 | 0.6008 | - | - | - |
| 0.0764 | 11300 | 0.6249 | - | - | - |
| 0.0771 | 11400 | 0.5991 | - | - | - |
| 0.0778 | 11500 | 0.5798 | - | - | - |
| 0.0784 | 11600 | 0.6286 | - | - | - |
| 0.0791 | 11700 | 0.6672 | - | - | - |
| 0.0798 | 11800 | 0.5947 | - | - | - |
| 0.0805 | 11900 | 0.5958 | - | - | - |
| 0.0811 | 12000 | 0.6229 | - | - | - |
| 0.0818 | 12100 | 0.6162 | - | - | - |
| 0.0825 | 12200 | 0.573 | - | - | - |
| 0.0832 | 12300 | 0.5661 | - | - | - |
| 0.0838 | 12400 | 0.594 | - | - | - |
| 0.0845 | 12500 | 0.5654 | - | - | - |
| 0.0852 | 12600 | 0.5925 | - | - | - |
| 0.0859 | 12700 | 0.6019 | - | - | - |
| 0.0865 | 12800 | 0.6 | - | - | - |
| 0.0872 | 12900 | 0.5931 | - | - | - |
| 0.0879 | 13000 | 0.6517 | - | - | - |
| 0.0886 | 13100 | 0.573 | - | - | - |
| 0.0892 | 13200 | 0.6486 | - | - | - |
| 0.0899 | 13300 | 0.6032 | - | - | - |
| 0.0906 | 13400 | 0.5799 | - | - | - |
| 0.0913 | 13500 | 0.585 | - | - | - |
| 0.0920 | 13600 | 0.6025 | - | - | - |
| 0.0926 | 13700 | 0.5873 | - | - | - |
| 0.0933 | 13800 | 0.6339 | - | - | - |
| 0.0940 | 13900 | 0.5779 | - | - | - |
| 0.0947 | 14000 | 0.5974 | - | - | - |
| 0.0953 | 14100 | 0.5706 | - | - | - |
| 0.0960 | 14200 | 0.5906 | - | - | - |
| 0.0967 | 14300 | 0.562 | - | - | - |
| 0.0974 | 14400 | 0.6264 | - | - | - |
| 0.0980 | 14500 | 0.6248 | - | - | - |
| 0.0987 | 14600 | 0.6212 | - | - | - |
| 0.0994 | 14700 | 0.5845 | - | - | - |
| 0.1001 | 14800 | 0.6237 | - | - | - |
| 0.1007 | 14900 | 0.5905 | - | - | - |
| 0.1014 | 15000 | 0.6176 | 0.5981 | 0.7167 | - |
| 0.1021 | 15100 | 0.6059 | - | - | - |
| 0.1028 | 15200 | 0.5882 | - | - | - |
| 0.1034 | 15300 | 0.5692 | - | - | - |
| 0.1041 | 15400 | 0.6028 | - | - | - |
| 0.1048 | 15500 | 0.5876 | - | - | - |
| 0.1055 | 15600 | 0.6507 | - | - | - |
| 0.1062 | 15700 | 0.5612 | - | - | - |
| 0.1068 | 15800 | 0.5882 | - | - | - |
| 0.1075 | 15900 | 0.5646 | - | - | - |
| 0.1082 | 16000 | 0.6212 | - | - | - |
| 0.1089 | 16100 | 0.6108 | - | - | - |
| 0.1095 | 16200 | 0.619 | - | - | - |
| 0.1102 | 16300 | 0.5962 | - | - | - |
| 0.1109 | 16400 | 0.6056 | - | - | - |
| 0.1116 | 16500 | 0.6057 | - | - | - |
| 0.1122 | 16600 | 0.5535 | - | - | - |
| 0.1129 | 16700 | 0.6167 | - | - | - |
| 0.1136 | 16800 | 0.5695 | - | - | - |
| 0.1143 | 16900 | 0.599 | - | - | - |
| 0.1149 | 17000 | 0.6122 | - | - | - |
| 0.1156 | 17100 | 0.5779 | - | - | - |
| 0.1163 | 17200 | 0.5822 | - | - | - |
| 0.1170 | 17300 | 0.6244 | - | - | - |
| 0.1176 | 17400 | 0.6428 | - | - | - |
| 0.1183 | 17500 | 0.6326 | - | - | - |
| 0.1190 | 17600 | 0.6027 | - | - | - |
| 0.1197 | 17700 | 0.5705 | - | - | - |
| 0.1204 | 17800 | 0.5414 | - | - | - |
| 0.1210 | 17900 | 0.5966 | - | - | - |
| 0.1217 | 18000 | 0.65 | - | - | - |
| 0.1224 | 18100 | 0.6097 | - | - | - |
| 0.1231 | 18200 | 0.5988 | - | - | - |
| 0.1237 | 18300 | 0.5901 | - | - | - |
| 0.1244 | 18400 | 0.6146 | - | - | - |
| 0.1251 | 18500 | 0.6408 | - | - | - |
| 0.1258 | 18600 | 0.6034 | - | - | - |
| 0.1264 | 18700 | 0.5878 | - | - | - |
| 0.1271 | 18800 | 0.5934 | - | - | - |
| 0.1278 | 18900 | 0.6162 | - | - | - |
| 0.1285 | 19000 | 0.6255 | - | - | - |
| 0.1291 | 19100 | 0.6546 | - | - | - |
| 0.1298 | 19200 | 0.59 | - | - | - |
| 0.1305 | 19300 | 0.6331 | - | - | - |
| 0.1312 | 19400 | 0.6444 | - | - | - |
| 0.1318 | 19500 | 0.6105 | - | - | - |
| 0.1325 | 19600 | 0.6169 | - | - | - |
| 0.1332 | 19700 | 0.6123 | - | - | - |
| 0.1339 | 19800 | 0.6612 | - | - | - |
| 0.1345 | 19900 | 0.6309 | - | - | - |
| 0.1352 | 20000 | 0.6805 | 0.5901 | 0.7213 | - |
| 0.1359 | 20100 | 0.6073 | - | - | - |
| 0.1366 | 20200 | 0.5956 | - | - | - |
| 0.1373 | 20300 | 0.6229 | - | - | - |
| 0.1379 | 20400 | 0.5919 | - | - | - |
| 0.1386 | 20500 | 0.6112 | - | - | - |
| 0.1393 | 20600 | 0.5877 | - | - | - |
| 0.1400 | 20700 | 0.6279 | - | - | - |
| 0.1406 | 20800 | 0.595 | - | - | - |
| 0.1413 | 20900 | 0.6205 | - | - | - |
| 0.1420 | 21000 | 0.5862 | - | - | - |
| 0.1427 | 21100 | 0.5719 | - | - | - |
| 0.1433 | 21200 | 0.5943 | - | - | - |
| 0.1440 | 21300 | 0.6299 | - | - | - |
| 0.1447 | 21400 | 0.5718 | - | - | - |
| 0.1454 | 21500 | 0.567 | - | - | - |
| 0.1460 | 21600 | 0.5808 | - | - | - |
| 0.1467 | 21700 | 0.5727 | - | - | - |
| 0.1474 | 21800 | 0.5625 | - | - | - |
| 0.1481 | 21900 | 0.6031 | - | - | - |
| 0.1487 | 22000 | 0.6512 | - | - | - |
| 0.1494 | 22100 | 0.5794 | - | - | - |
| 0.1501 | 22200 | 0.6473 | - | - | - |
| 0.1508 | 22300 | 0.6517 | - | - | - |
| 0.1515 | 22400 | 0.5644 | - | - | - |
| 0.1521 | 22500 | 0.587 | - | - | - |
| 0.1528 | 22600 | 0.5915 | - | - | - |
| 0.1535 | 22700 | 0.6034 | - | - | - |
| 0.1542 | 22800 | 0.6403 | - | - | - |
| 0.1548 | 22900 | 0.5921 | - | - | - |
| 0.1555 | 23000 | 0.5784 | - | - | - |
| 0.1562 | 23100 | 0.5978 | - | - | - |
| 0.1569 | 23200 | 0.6665 | - | - | - |
| 0.1575 | 23300 | 0.626 | - | - | - |
| 0.1582 | 23400 | 0.6435 | - | - | - |
| 0.1589 | 23500 | 0.6035 | - | - | - |
| 0.1596 | 23600 | 0.6134 | - | - | - |
| 0.1602 | 23700 | 0.6205 | - | - | - |
| 0.1609 | 23800 | 0.6334 | - | - | - |
| 0.1616 | 23900 | 0.6577 | - | - | - |
| 0.1623 | 24000 | 0.6574 | - | - | - |
| 0.1629 | 24100 | 0.6195 | - | - | - |
| 0.1636 | 24200 | 0.5966 | - | - | - |
| 0.1643 | 24300 | 0.6062 | - | - | - |
| 0.1650 | 24400 | 0.6582 | - | - | - |
| 0.1657 | 24500 | 0.5918 | - | - | - |
| 0.1663 | 24600 | 0.6007 | - | - | - |
| 0.1670 | 24700 | 0.6773 | - | - | - |
| 0.1677 | 24800 | 0.5891 | - | - | - |
| 0.1684 | 24900 | 0.6442 | - | - | - |
| 0.1690 | 25000 | 0.623 | 0.5940 | 0.7284 | - |
| 0.1697 | 25100 | 0.6034 | - | - | - |
| 0.1704 | 25200 | 0.62 | - | - | - |
| 0.1711 | 25300 | 0.5884 | - | - | - |
| 0.1717 | 25400 | 0.5619 | - | - | - |
| 0.1724 | 25500 | 0.6289 | - | - | - |
| 0.1731 | 25600 | 0.5684 | - | - | - |
| 0.1738 | 25700 | 0.613 | - | - | - |
| 0.1744 | 25800 | 0.6573 | - | - | - |
| 0.1751 | 25900 | 0.5645 | - | - | - |
| 0.1758 | 26000 | 0.6113 | - | - | - |
| 0.1765 | 26100 | 0.6504 | - | - | - |
| 0.1771 | 26200 | 0.615 | - | - | - |
| 0.1778 | 26300 | 0.6404 | - | - | - |
| 0.1785 | 26400 | 0.6431 | - | - | - |
| 0.1792 | 26500 | 0.619 | - | - | - |
| 0.1799 | 26600 | 0.6201 | - | - | - |
| 0.1805 | 26700 | 0.5756 | - | - | - |
| 0.1812 | 26800 | 0.5796 | - | - | - |
| 0.1819 | 26900 | 0.6046 | - | - | - |
| 0.1826 | 27000 | 0.6042 | - | - | - |
| 0.1832 | 27100 | 0.6867 | - | - | - |
| 0.1839 | 27200 | 0.6236 | - | - | - |
| 0.1846 | 27300 | 0.5696 | - | - | - |
| 0.1853 | 27400 | 0.6366 | - | - | - |
| 0.1859 | 27500 | 0.6467 | - | - | - |
| 0.1866 | 27600 | 0.6449 | - | - | - |
| 0.1873 | 27700 | 0.6579 | - | - | - |
| 0.1880 | 27800 | 0.6005 | - | - | - |
| 0.1886 | 27900 | 0.5824 | - | - | - |
| 0.1893 | 28000 | 0.6376 | - | - | - |
| 0.1900 | 28100 | 0.6348 | - | - | - |
| 0.1907 | 28200 | 0.5968 | - | - | - |
| 0.1913 | 28300 | 0.6361 | - | - | - |
| 0.1920 | 28400 | 0.5847 | - | - | - |
| 0.1927 | 28500 | 0.6203 | - | - | - |
| 0.1934 | 28600 | 0.6186 | - | - | - |
| 0.1940 | 28700 | 0.6275 | - | - | - |
| 0.1947 | 28800 | 0.5804 | - | - | - |
| 0.1954 | 28900 | 0.5898 | - | - | - |
| 0.1961 | 29000 | 0.6201 | - | - | - |
| 0.1968 | 29100 | 0.591 | - | - | - |
| 0.1974 | 29200 | 0.6571 | - | - | - |
| 0.1981 | 29300 | 0.6228 | - | - | - |
| 0.1988 | 29400 | 0.6722 | - | - | - |
| 0.1995 | 29500 | 0.5665 | - | - | - |
| 0.2001 | 29600 | 0.6216 | - | - | - |
| 0.2008 | 29700 | 0.6258 | - | - | - |
| 0.2015 | 29800 | 0.5789 | - | - | - |
| 0.2022 | 29900 | 0.6193 | - | - | - |
| 0.2028 | 30000 | 0.6435 | 0.6061 | 0.7186 | - |
| 0.2035 | 30100 | 0.6314 | - | - | - |
| 0.2042 | 30200 | 0.5847 | - | - | - |
| 0.2049 | 30300 | 0.6053 | - | - | - |
| 0.2055 | 30400 | 0.602 | - | - | - |
| 0.2062 | 30500 | 0.613 | - | - | - |
| 0.2069 | 30600 | 0.5967 | - | - | - |
| 0.2076 | 30700 | 0.6305 | - | - | - |
| 0.2082 | 30800 | 0.6322 | - | - | - |
| 0.2089 | 30900 | 0.6252 | - | - | - |
| 0.2096 | 31000 | 0.6217 | - | - | - |
| 0.2103 | 31100 | 0.586 | - | - | - |
| 0.2110 | 31200 | 0.6274 | - | - | - |
| 0.2116 | 31300 | 0.5972 | - | - | - |
| 0.2123 | 31400 | 0.6104 | - | - | - |
| 0.2130 | 31500 | 0.5858 | - | - | - |
| 0.2137 | 31600 | 0.6365 | - | - | - |
| 0.2143 | 31700 | 0.596 | - | - | - |
| 0.2150 | 31800 | 0.632 | - | - | - |
| 0.2157 | 31900 | 0.6488 | - | - | - |
| 0.2164 | 32000 | 0.6164 | - | - | - |
| 0.2170 | 32100 | 0.6263 | - | - | - |
| 0.2177 | 32200 | 0.6388 | - | - | - |
| 0.2184 | 32300 | 0.6245 | - | - | - |
| 0.2191 | 32400 | 0.6364 | - | - | - |
| 0.2197 | 32500 | 0.6578 | - | - | - |
| 0.2204 | 32600 | 0.6033 | - | - | - |
| 0.2211 | 32700 | 0.6066 | - | - | - |
| 0.2218 | 32800 | 0.6938 | - | - | - |
| 0.2224 | 32900 | 0.6226 | - | - | - |
| 0.2231 | 33000 | 0.6472 | - | - | - |
| 0.2238 | 33100 | 0.6485 | - | - | - |
| 0.2245 | 33200 | 0.6636 | - | - | - |
| 0.2252 | 33300 | 0.633 | - | - | - |
| 0.2258 | 33400 | 0.5909 | - | - | - |
| 0.2265 | 33500 | 0.6209 | - | - | - |
| 0.2272 | 33600 | 0.6256 | - | - | - |
| 0.2279 | 33700 | 0.6476 | - | - | - |
| 0.2285 | 33800 | 0.6369 | - | - | - |
| 0.2292 | 33900 | 0.6135 | - | - | - |
| 0.2299 | 34000 | 0.6749 | - | - | - |
| 0.2306 | 34100 | 0.6354 | - | - | - |
| 0.2312 | 34200 | 0.625 | - | - | - |
| 0.2319 | 34300 | 0.616 | - | - | - |
| 0.2326 | 34400 | 0.6047 | - | - | - |
| 0.2333 | 34500 | 0.6431 | - | - | - |
| 0.2339 | 34600 | 0.6576 | - | - | - |
| 0.2346 | 34700 | 0.6344 | - | - | - |
| 0.2353 | 34800 | 0.6477 | - | - | - |
| 0.2360 | 34900 | 0.6094 | - | - | - |
| 0.2366 | 35000 | 0.6243 | 0.6088 | 0.7208 | - |
| 0.2373 | 35100 | 0.5981 | - | - | - |
| 0.2380 | 35200 | 0.559 | - | - | - |
| 0.2387 | 35300 | 0.6523 | - | - | - |
| 0.2393 | 35400 | 0.6018 | - | - | - |
| 0.2400 | 35500 | 0.6228 | - | - | - |
| 0.2407 | 35600 | 0.6321 | - | - | - |
| 0.2414 | 35700 | 0.6072 | - | - | - |
| 0.2421 | 35800 | 0.6467 | - | - | - |
| 0.2427 | 35900 | 0.6676 | - | - | - |
| 0.2434 | 36000 | 0.6486 | - | - | - |
| 0.2441 | 36100 | 0.6241 | - | - | - |
| 0.2448 | 36200 | 0.6534 | - | - | - |
| 0.2454 | 36300 | 0.5945 | - | - | - |
| 0.2461 | 36400 | 0.6432 | - | - | - |
| 0.2468 | 36500 | 0.6952 | - | - | - |
| 0.2475 | 36600 | 0.6741 | - | - | - |
| 0.2481 | 36700 | 0.6525 | - | - | - |
| 0.2488 | 36800 | 0.599 | - | - | - |
| 0.2495 | 36900 | 0.643 | - | - | - |
| 0.2502 | 37000 | 0.6254 | - | - | - |
| 0.2508 | 37100 | 0.6511 | - | - | - |
| 0.2515 | 37200 | 0.6694 | - | - | - |
| 0.2522 | 37300 | 0.6213 | - | - | - |
| 0.2529 | 37400 | 0.6465 | - | - | - |
| 0.2535 | 37500 | 0.6623 | - | - | - |
| 0.2542 | 37600 | 0.6205 | - | - | - |
| 0.2549 | 37700 | 0.6552 | - | - | - |
| 0.2556 | 37800 | 0.5855 | - | - | - |
| 0.2563 | 37900 | 0.5539 | - | - | - |
| 0.2569 | 38000 | 0.6411 | - | - | - |
| 0.2576 | 38100 | 0.6509 | - | - | - |
| 0.2583 | 38200 | 0.6843 | - | - | - |
| 0.2590 | 38300 | 0.6742 | - | - | - |
| 0.2596 | 38400 | 0.6214 | - | - | - |
| 0.2603 | 38500 | 0.6486 | - | - | - |
| 0.2610 | 38600 | 0.6209 | - | - | - |
| 0.2617 | 38700 | 0.624 | - | - | - |
| 0.2623 | 38800 | 0.6221 | - | - | - |
| 0.2630 | 38900 | 0.6574 | - | - | - |
| 0.2637 | 39000 | 0.6147 | - | - | - |
| 0.2644 | 39100 | 0.6187 | - | - | - |
| 0.2650 | 39200 | 0.6194 | - | - | - |
| 0.2657 | 39300 | 0.589 | - | - | - |
| 0.2664 | 39400 | 0.6393 | - | - | - |
| 0.2671 | 39500 | 0.6584 | - | - | - |
| 0.2677 | 39600 | 0.6272 | - | - | - |
| 0.2684 | 39700 | 0.63 | - | - | - |
| 0.2691 | 39800 | 0.6646 | - | - | - |
| 0.2698 | 39900 | 0.5913 | - | - | - |
| 0.2705 | 40000 | 0.6878 | 0.6177 | 0.7156 | - |
| 0.2711 | 40100 | 0.6421 | - | - | - |
| 0.2718 | 40200 | 0.6111 | - | - | - |
| 0.2725 | 40300 | 0.6301 | - | - | - |
| 0.2732 | 40400 | 0.6192 | - | - | - |
| 0.2738 | 40500 | 0.6505 | - | - | - |
| 0.2745 | 40600 | 0.6067 | - | - | - |
| 0.2752 | 40700 | 0.6543 | - | - | - |
| 0.2759 | 40800 | 0.6214 | - | - | - |
| 0.2765 | 40900 | 0.6094 | - | - | - |
| 0.2772 | 41000 | 0.5979 | - | - | - |
| 0.2779 | 41100 | 0.6261 | - | - | - |
| 0.2786 | 41200 | 0.6484 | - | - | - |
| 0.2792 | 41300 | 0.6576 | - | - | - |
| 0.2799 | 41400 | 0.5837 | - | - | - |
| 0.2806 | 41500 | 0.6467 | - | - | - |
| 0.2813 | 41600 | 0.6436 | - | - | - |
| 0.2819 | 41700 | 0.6287 | - | - | - |
| 0.2826 | 41800 | 0.7045 | - | - | - |
| 0.2833 | 41900 | 0.6501 | - | - | - |
| 0.2840 | 42000 | 0.6895 | - | - | - |
| 0.2846 | 42100 | 0.6133 | - | - | - |
| 0.2853 | 42200 | 0.6624 | - | - | - |
| 0.2860 | 42300 | 0.6151 | - | - | - |
| 0.2867 | 42400 | 0.6498 | - | - | - |
| 0.2874 | 42500 | 0.6361 | - | - | - |
| 0.2880 | 42600 | 0.6671 | - | - | - |
| 0.2887 | 42700 | 0.6821 | - | - | - |
| 0.2894 | 42800 | 0.6116 | - | - | - |
| 0.2901 | 42900 | 0.6758 | - | - | - |
| 0.2907 | 43000 | 0.6289 | - | - | - |
| 0.2914 | 43100 | 0.5684 | - | - | - |
| 0.2921 | 43200 | 0.6287 | - | - | - |
| 0.2928 | 43300 | 0.6498 | - | - | - |
| 0.2934 | 43400 | 0.6669 | - | - | - |
| 0.2941 | 43500 | 0.6127 | - | - | - |
| 0.2948 | 43600 | 0.6474 | - | - | - |
| 0.2955 | 43700 | 0.6459 | - | - | - |
| 0.2961 | 43800 | 0.6588 | - | - | - |
| 0.2968 | 43900 | 0.6231 | - | - | - |
| 0.2975 | 44000 | 0.6723 | - | - | - |
| 0.2982 | 44100 | 0.5787 | - | - | - |
| 0.2988 | 44200 | 0.6469 | - | - | - |
| 0.2995 | 44300 | 0.6152 | - | - | - |
| 0.3002 | 44400 | 0.6105 | - | - | - |
| 0.3009 | 44500 | 0.6529 | - | - | - |
| 0.3016 | 44600 | 0.6514 | - | - | - |
| 0.3022 | 44700 | 0.603 | - | - | - |
| 0.3029 | 44800 | 0.6516 | - | - | - |
| 0.3036 | 44900 | 0.5861 | - | - | - |
| 0.3043 | 45000 | 0.6236 | 0.6444 | 0.7174 | - |
| 0.3049 | 45100 | 0.6714 | - | - | - |
| 0.3056 | 45200 | 0.6537 | - | - | - |
| 0.3063 | 45300 | 0.6436 | - | - | - |
| 0.3070 | 45400 | 0.6407 | - | - | - |
| 0.3076 | 45500 | 0.6597 | - | - | - |
| 0.3083 | 45600 | 0.6381 | - | - | - |
| 0.3090 | 45700 | 0.6688 | - | - | - |
| 0.3097 | 45800 | 0.6227 | - | - | - |
| 0.3103 | 45900 | 0.6119 | - | - | - |
| 0.3110 | 46000 | 0.6915 | - | - | - |
| 0.3117 | 46100 | 0.6381 | - | - | - |
| 0.3124 | 46200 | 0.6101 | - | - | - |
| 0.3130 | 46300 | 0.6061 | - | - | - |
| 0.3137 | 46400 | 0.6433 | - | - | - |
| 0.3144 | 46500 | 0.6245 | - | - | - |
| 0.3151 | 46600 | 0.6202 | - | - | - |
| 0.3158 | 46700 | 0.6556 | - | - | - |
| 0.3164 | 46800 | 0.6835 | - | - | - |
| 0.3171 | 46900 | 0.6869 | - | - | - |
| 0.3178 | 47000 | 0.5996 | - | - | - |
| 0.3185 | 47100 | 0.6391 | - | - | - |
| 0.3191 | 47200 | 0.6439 | - | - | - |
| 0.3198 | 47300 | 0.6664 | - | - | - |
| 0.3205 | 47400 | 0.6554 | - | - | - |
| 0.3212 | 47500 | 0.6527 | - | - | - |
| 0.3218 | 47600 | 0.6211 | - | - | - |
| 0.3225 | 47700 | 0.6645 | - | - | - |
| 0.3232 | 47800 | 0.66 | - | - | - |
| 0.3239 | 47900 | 0.5725 | - | - | - |
| 0.3245 | 48000 | 0.629 | - | - | - |
| 0.3252 | 48100 | 0.6016 | - | - | - |
| 0.3259 | 48200 | 0.6293 | - | - | - |
| 0.3266 | 48300 | 0.6543 | - | - | - |
| 0.3272 | 48400 | 0.6791 | - | - | - |
| 0.3279 | 48500 | 0.6016 | - | - | - |
| 0.3286 | 48600 | 0.678 | - | - | - |
| 0.3293 | 48700 | 0.6323 | - | - | - |
| 0.3300 | 48800 | 0.658 | - | - | - |
| 0.3306 | 48900 | 0.6325 | - | - | - |
| 0.3313 | 49000 | 0.6482 | - | - | - |
| 0.3320 | 49100 | 0.6245 | - | - | - |
| 0.3327 | 49200 | 0.6676 | - | - | - |
| 0.3333 | 49300 | 0.5797 | - | - | - |
| 0.3340 | 49400 | 0.6468 | - | - | - |
| 0.3347 | 49500 | 0.6416 | - | - | - |
| 0.3354 | 49600 | 0.6916 | - | - | - |
| 0.3360 | 49700 | 0.6063 | - | - | - |
| 0.3367 | 49800 | 0.6038 | - | - | - |
| 0.3374 | 49900 | 0.6232 | - | - | - |
| 0.3381 | 50000 | 0.6846 | 0.6324 | 0.7174 | - |
| 0.3387 | 50100 | 0.6282 | - | - | - |
| 0.3394 | 50200 | 0.6417 | - | - | - |
| 0.3401 | 50300 | 0.6414 | - | - | - |
| 0.3408 | 50400 | 0.6045 | - | - | - |
| 0.3414 | 50500 | 0.6352 | - | - | - |
| 0.3421 | 50600 | 0.6191 | - | - | - |
| 0.3428 | 50700 | 0.6575 | - | - | - |
| 0.3435 | 50800 | 0.6673 | - | - | - |
| 0.3441 | 50900 | 0.6318 | - | - | - |
| 0.3448 | 51000 | 0.6833 | - | - | - |
| 0.3455 | 51100 | 0.6585 | - | - | - |
| 0.3462 | 51200 | 0.6404 | - | - | - |
| 0.3469 | 51300 | 0.6103 | - | - | - |
| 0.3475 | 51400 | 0.6326 | - | - | - |
| 0.3482 | 51500 | 0.6061 | - | - | - |
| 0.3489 | 51600 | 0.6289 | - | - | - |
| 0.3496 | 51700 | 0.6171 | - | - | - |
| 0.3502 | 51800 | 0.6585 | - | - | - |
| 0.3509 | 51900 | 0.6368 | - | - | - |
| 0.3516 | 52000 | 0.6184 | - | - | - |
| 0.3523 | 52100 | 0.6797 | - | - | - |
| 0.3529 | 52200 | 0.6365 | - | - | - |
| 0.3536 | 52300 | 0.6044 | - | - | - |
| 0.3543 | 52400 | 0.6143 | - | - | - |
| 0.3550 | 52500 | 0.6061 | - | - | - |
| 0.3556 | 52600 | 0.599 | - | - | - |
| 0.3563 | 52700 | 0.5971 | - | - | - |
| 0.3570 | 52800 | 0.6478 | - | - | - |
| 0.3577 | 52900 | 0.6541 | - | - | - |
| 0.3583 | 53000 | 0.6451 | - | - | - |
| 0.3590 | 53100 | 0.6416 | - | - | - |
| 0.3597 | 53200 | 0.6254 | - | - | - |
| 0.3604 | 53300 | 0.6096 | - | - | - |
| 0.3611 | 53400 | 0.6307 | - | - | - |
| 0.3617 | 53500 | 0.606 | - | - | - |
| 0.3624 | 53600 | 0.6387 | - | - | - |
| 0.3631 | 53700 | 0.5961 | - | - | - |
| 0.3638 | 53800 | 0.6237 | - | - | - |
| 0.3644 | 53900 | 0.6239 | - | - | - |
| 0.3651 | 54000 | 0.6565 | - | - | - |
| 0.3658 | 54100 | 0.6405 | - | - | - |
| 0.3665 | 54200 | 0.6519 | - | - | - |
| 0.3671 | 54300 | 0.6073 | - | - | - |
| 0.3678 | 54400 | 0.5996 | - | - | - |
| 0.3685 | 54500 | 0.6359 | - | - | - |
| 0.3692 | 54600 | 0.6518 | - | - | - |
| 0.3698 | 54700 | 0.6553 | - | - | - |
| 0.3705 | 54800 | 0.644 | - | - | - |
| 0.3712 | 54900 | 0.6162 | - | - | - |
| 0.3719 | 55000 | 0.6249 | 0.6255 | 0.7278 | - |
| 0.3725 | 55100 | 0.6388 | - | - | - |
| 0.3732 | 55200 | 0.639 | - | - | - |
| 0.3739 | 55300 | 0.617 | - | - | - |
| 0.3746 | 55400 | 0.5962 | - | - | - |
| 0.3753 | 55500 | 0.6682 | - | - | - |
| 0.3759 | 55600 | 0.6443 | - | - | - |
| 0.3766 | 55700 | 0.6814 | - | - | - |
| 0.3773 | 55800 | 0.622 | - | - | - |
| 0.3780 | 55900 | 0.5706 | - | - | - |
| 0.3786 | 56000 | 0.634 | - | - | - |
| 0.3793 | 56100 | 0.716 | - | - | - |
| 0.3800 | 56200 | 0.6451 | - | - | - |
| 0.3807 | 56300 | 0.65 | - | - | - |
| 0.3813 | 56400 | 0.6057 | - | - | - |
| 0.3820 | 56500 | 0.698 | - | - | - |
| 0.3827 | 56600 | 0.623 | - | - | - |
| 0.3834 | 56700 | 0.6455 | - | - | - |
| 0.3840 | 56800 | 0.6551 | - | - | - |
| 0.3847 | 56900 | 0.6256 | - | - | - |
| 0.3854 | 57000 | 0.6746 | - | - | - |
| 0.3861 | 57100 | 0.6176 | - | - | - |
| 0.3867 | 57200 | 0.6617 | - | - | - |
| 0.3874 | 57300 | 0.6398 | - | - | - |
| 0.3881 | 57400 | 0.6081 | - | - | - |
| 0.3888 | 57500 | 0.6398 | - | - | - |
| 0.3894 | 57600 | 0.6344 | - | - | - |
| 0.3901 | 57700 | 0.6568 | - | - | - |
| 0.3908 | 57800 | 0.6455 | - | - | - |
| 0.3915 | 57900 | 0.6425 | - | - | - |
| 0.3922 | 58000 | 0.6042 | - | - | - |
| 0.3928 | 58100 | 0.6076 | - | - | - |
| 0.3935 | 58200 | 0.6339 | - | - | - |
| 0.3942 | 58300 | 0.6217 | - | - | - |
| 0.3949 | 58400 | 0.6651 | - | - | - |
| 0.3955 | 58500 | 0.6035 | - | - | - |
| 0.3962 | 58600 | 0.6103 | - | - | - |
| 0.3969 | 58700 | 0.6335 | - | - | - |
| 0.3976 | 58800 | 0.606 | - | - | - |
| 0.3982 | 58900 | 0.5992 | - | - | - |
| 0.3989 | 59000 | 0.5963 | - | - | - |
| 0.3996 | 59100 | 0.6815 | - | - | - |
| 0.4003 | 59200 | 0.6247 | - | - | - |
| 0.4009 | 59300 | 0.6558 | - | - | - |
| 0.4016 | 59400 | 0.64 | - | - | - |
| 0.4023 | 59500 | 0.6545 | - | - | - |
| 0.4030 | 59600 | 0.648 | - | - | - |
| 0.4036 | 59700 | 0.6931 | - | - | - |
| 0.4043 | 59800 | 0.6162 | - | - | - |
| 0.4050 | 59900 | 0.5646 | - | - | - |
| 0.4057 | 60000 | 0.6161 | 0.6338 | 0.7306 | - |
| 0.4064 | 60100 | 0.6343 | - | - | - |
| 0.4070 | 60200 | 0.6251 | - | - | - |
| 0.4077 | 60300 | 0.6308 | - | - | - |
| 0.4084 | 60400 | 0.645 | - | - | - |
| 0.4091 | 60500 | 0.6569 | - | - | - |
| 0.4097 | 60600 | 0.683 | - | - | - |
| 0.4104 | 60700 | 0.6618 | - | - | - |
| 0.4111 | 60800 | 0.6432 | - | - | - |
| 0.4118 | 60900 | 0.6021 | - | - | - |
| 0.4124 | 61000 | 0.6408 | - | - | - |
| 0.4131 | 61100 | 0.6512 | - | - | - |
| 0.4138 | 61200 | 0.657 | - | - | - |
| 0.4145 | 61300 | 0.6615 | - | - | - |
| 0.4151 | 61400 | 0.6271 | - | - | - |
| 0.4158 | 61500 | 0.6145 | - | - | - |
| 0.4165 | 61600 | 0.656 | - | - | - |
| 0.4172 | 61700 | 0.6566 | - | - | - |
| 0.4178 | 61800 | 0.6403 | - | - | - |
| 0.4185 | 61900 | 0.6262 | - | - | - |
| 0.4192 | 62000 | 0.6281 | - | - | - |
| 0.4199 | 62100 | 0.6687 | - | - | - |
| 0.4206 | 62200 | 0.6099 | - | - | - |
| 0.4212 | 62300 | 0.618 | - | - | - |
| 0.4219 | 62400 | 0.6656 | - | - | - |
| 0.4226 | 62500 | 0.6308 | - | - | - |
| 0.4233 | 62600 | 0.6708 | - | - | - |
| 0.4239 | 62700 | 0.6741 | - | - | - |
| 0.4246 | 62800 | 0.6129 | - | - | - |
| 0.4253 | 62900 | 0.6701 | - | - | - |
| 0.4260 | 63000 | 0.6287 | - | - | - |
| 0.4266 | 63100 | 0.6253 | - | - | - |
| 0.4273 | 63200 | 0.6209 | - | - | - |
| 0.4280 | 63300 | 0.6151 | - | - | - |
| 0.4287 | 63400 | 0.6661 | - | - | - |
| 0.4293 | 63500 | 0.593 | - | - | - |
| 0.4300 | 63600 | 0.6351 | - | - | - |
| 0.4307 | 63700 | 0.571 | - | - | - |
| 0.4314 | 63800 | 0.6677 | - | - | - |
| 0.4320 | 63900 | 0.6424 | - | - | - |
| 0.4327 | 64000 | 0.6167 | - | - | - |
| 0.4334 | 64100 | 0.6306 | - | - | - |
| 0.4341 | 64200 | 0.6459 | - | - | - |
| 0.4348 | 64300 | 0.6319 | - | - | - |
| 0.4354 | 64400 | 0.6046 | - | - | - |
| 0.4361 | 64500 | 0.5864 | - | - | - |
| 0.4368 | 64600 | 0.5976 | - | - | - |
| 0.4375 | 64700 | 0.6703 | - | - | - |
| 0.4381 | 64800 | 0.6285 | - | - | - |
| 0.4388 | 64900 | 0.6157 | - | - | - |
| 0.4395 | 65000 | 0.6242 | 0.6218 | 0.7230 | - |
| 0.4402 | 65100 | 0.6822 | - | - | - |
| 0.4408 | 65200 | 0.6187 | - | - | - |
| 0.4415 | 65300 | 0.6269 | - | - | - |
| 0.4422 | 65400 | 0.662 | - | - | - |
| 0.4429 | 65500 | 0.6735 | - | - | - |
| 0.4435 | 65600 | 0.5918 | - | - | - |
| 0.4442 | 65700 | 0.6078 | - | - | - |
| 0.4449 | 65800 | 0.6403 | - | - | - |
| 0.4456 | 65900 | 0.6206 | - | - | - |
| 0.4462 | 66000 | 0.6588 | - | - | - |
| 0.4469 | 66100 | 0.6088 | - | - | - |
| 0.4476 | 66200 | 0.682 | - | - | - |
| 0.4483 | 66300 | 0.6464 | - | - | - |
| 0.4489 | 66400 | 0.5804 | - | - | - |
| 0.4496 | 66500 | 0.619 | - | - | - |
| 0.4503 | 66600 | 0.5553 | - | - | - |
| 0.4510 | 66700 | 0.6467 | - | - | - |
| 0.4517 | 66800 | 0.6051 | - | - | - |
| 0.4523 | 66900 | 0.6018 | - | - | - |
| 0.4530 | 67000 | 0.6542 | - | - | - |
| 0.4537 | 67100 | 0.6279 | - | - | - |
| 0.4544 | 67200 | 0.6058 | - | - | - |
| 0.4550 | 67300 | 0.6401 | - | - | - |
| 0.4557 | 67400 | 0.6472 | - | - | - |
| 0.4564 | 67500 | 0.6139 | - | - | - |
| 0.4571 | 67600 | 0.6609 | - | - | - |
| 0.4577 | 67700 | 0.6618 | - | - | - |
| 0.4584 | 67800 | 0.6947 | - | - | - |
| 0.4591 | 67900 | 0.6402 | - | - | - |
| 0.4598 | 68000 | 0.626 | - | - | - |
| 0.4604 | 68100 | 0.5746 | - | - | - |
| 0.4611 | 68200 | 0.6357 | - | - | - |
| 0.4618 | 68300 | 0.5956 | - | - | - |
| 0.4625 | 68400 | 0.6628 | - | - | - |
| 0.4631 | 68500 | 0.6289 | - | - | - |
| 0.4638 | 68600 | 0.5994 | - | - | - |
| 0.4645 | 68700 | 0.6198 | - | - | - |
| 0.4652 | 68800 | 0.6084 | - | - | - |
| 0.4659 | 68900 | 0.5719 | - | - | - |
| 0.4665 | 69000 | 0.6377 | - | - | - |
| 0.4672 | 69100 | 0.6459 | - | - | - |
| 0.4679 | 69200 | 0.5992 | - | - | - |
| 0.4686 | 69300 | 0.6472 | - | - | - |
| 0.4692 | 69400 | 0.6353 | - | - | - |
| 0.4699 | 69500 | 0.6298 | - | - | - |
| 0.4706 | 69600 | 0.6451 | - | - | - |
| 0.4713 | 69700 | 0.612 | - | - | - |
| 0.4719 | 69800 | 0.6064 | - | - | - |
| 0.4726 | 69900 | 0.5837 | - | - | - |
| 0.4733 | 70000 | 0.6238 | 0.6179 | 0.7189 | - |
| 0.4740 | 70100 | 0.6257 | - | - | - |
| 0.4746 | 70200 | 0.6304 | - | - | - |
| 0.4753 | 70300 | 0.6209 | - | - | - |
| 0.4760 | 70400 | 0.621 | - | - | - |
| 0.4767 | 70500 | 0.6084 | - | - | - |
| 0.4773 | 70600 | 0.6252 | - | - | - |
| 0.4780 | 70700 | 0.5949 | - | - | - |
| 0.4787 | 70800 | 0.6235 | - | - | - |
| 0.4794 | 70900 | 0.6242 | - | - | - |
| 0.4801 | 71000 | 0.6453 | - | - | - |
| 0.4807 | 71100 | 0.6447 | - | - | - |
| 0.4814 | 71200 | 0.6388 | - | - | - |
| 0.4821 | 71300 | 0.6132 | - | - | - |
| 0.4828 | 71400 | 0.616 | - | - | - |
| 0.4834 | 71500 | 0.5966 | - | - | - |
| 0.4841 | 71600 | 0.6732 | - | - | - |
| 0.4848 | 71700 | 0.6082 | - | - | - |
| 0.4855 | 71800 | 0.611 | - | - | - |
| 0.4861 | 71900 | 0.6304 | - | - | - |
| 0.4868 | 72000 | 0.6341 | - | - | - |
| 0.4875 | 72100 | 0.6134 | - | - | - |
| 0.4882 | 72200 | 0.5944 | - | - | - |
| 0.4888 | 72300 | 0.6303 | - | - | - |
| 0.4895 | 72400 | 0.594 | - | - | - |
| 0.4902 | 72500 | 0.6315 | - | - | - |
| 0.4909 | 72600 | 0.5712 | - | - | - |
| 0.4915 | 72700 | 0.5829 | - | - | - |
| 0.4922 | 72800 | 0.6161 | - | - | - |
| 0.4929 | 72900 | 0.5878 | - | - | - |
| 0.4936 | 73000 | 0.6294 | - | - | - |
| 0.4942 | 73100 | 0.6111 | - | - | - |
| 0.4949 | 73200 | 0.5692 | - | - | - |
| 0.4956 | 73300 | 0.5736 | - | - | - |
| 0.4963 | 73400 | 0.6255 | - | - | - |
| 0.4970 | 73500 | 0.6148 | - | - | - |
| 0.4976 | 73600 | 0.5573 | - | - | - |
| 0.4983 | 73700 | 0.5809 | - | - | - |
| 0.4990 | 73800 | 0.6168 | - | - | - |
| 0.4997 | 73900 | 0.6424 | - | - | - |
| 0.5003 | 74000 | 0.6409 | - | - | - |
| 0.5010 | 74100 | 0.5661 | - | - | - |
| 0.5017 | 74200 | 0.6337 | - | - | - |
| 0.5024 | 74300 | 0.551 | - | - | - |
| 0.5030 | 74400 | 0.6262 | - | - | - |
| 0.5037 | 74500 | 0.6337 | - | - | - |
| 0.5044 | 74600 | 0.633 | - | - | - |
| 0.5051 | 74700 | 0.5337 | - | - | - |
| 0.5057 | 74800 | 0.5854 | - | - | - |
| 0.5064 | 74900 | 0.6169 | - | - | - |
| 0.5071 | 75000 | 0.6359 | 0.6160 | 0.7241 | - |
| 0.5078 | 75100 | 0.6374 | - | - | - |
| 0.5084 | 75200 | 0.6061 | - | - | - |
| 0.5091 | 75300 | 0.6369 | - | - | - |
| 0.5098 | 75400 | 0.6648 | - | - | - |
| 0.5105 | 75500 | 0.5873 | - | - | - |
| 0.5112 | 75600 | 0.5949 | - | - | - |
| 0.5118 | 75700 | 0.6224 | - | - | - |
| 0.5125 | 75800 | 0.6376 | - | - | - |
| 0.5132 | 75900 | 0.5902 | - | - | - |
| 0.5139 | 76000 | 0.6408 | - | - | - |
| 0.5145 | 76100 | 0.6021 | - | - | - |
| 0.5152 | 76200 | 0.5985 | - | - | - |
| 0.5159 | 76300 | 0.6502 | - | - | - |
| 0.5166 | 76400 | 0.5686 | - | - | - |
| 0.5172 | 76500 | 0.6252 | - | - | - |
| 0.5179 | 76600 | 0.6192 | - | - | - |
| 0.5186 | 76700 | 0.6058 | - | - | - |
| 0.5193 | 76800 | 0.6305 | - | - | - |
| 0.5199 | 76900 | 0.6343 | - | - | - |
| 0.5206 | 77000 | 0.5561 | - | - | - |
| 0.5213 | 77100 | 0.6145 | - | - | - |
| 0.5220 | 77200 | 0.6081 | - | - | - |
| 0.5226 | 77300 | 0.6396 | - | - | - |
| 0.5233 | 77400 | 0.5994 | - | - | - |
| 0.5240 | 77500 | 0.6493 | - | - | - |
| 0.5247 | 77600 | 0.6207 | - | - | - |
| 0.5254 | 77700 | 0.6138 | - | - | - |
| 0.5260 | 77800 | 0.713 | - | - | - |
| 0.5267 | 77900 | 0.5914 | - | - | - |
| 0.5274 | 78000 | 0.6569 | - | - | - |
| 0.5281 | 78100 | 0.6586 | - | - | - |
| 0.5287 | 78200 | 0.6452 | - | - | - |
| 0.5294 | 78300 | 0.5984 | - | - | - |
| 0.5301 | 78400 | 0.6117 | - | - | - |
| 0.5308 | 78500 | 0.6054 | - | - | - |
| 0.5314 | 78600 | 0.6085 | - | - | - |
| 0.5321 | 78700 | 0.6346 | - | - | - |
| 0.5328 | 78800 | 0.5873 | - | - | - |
| 0.5335 | 78900 | 0.6506 | - | - | - |
| 0.5341 | 79000 | 0.65 | - | - | - |
| 0.5348 | 79100 | 0.6223 | - | - | - |
| 0.5355 | 79200 | 0.6262 | - | - | - |
| 0.5362 | 79300 | 0.5406 | - | - | - |
| 0.5368 | 79400 | 0.5873 | - | - | - |
| 0.5375 | 79500 | 0.613 | - | - | - |
| 0.5382 | 79600 | 0.571 | - | - | - |
| 0.5389 | 79700 | 0.5856 | - | - | - |
| 0.5396 | 79800 | 0.5672 | - | - | - |
| 0.5402 | 79900 | 0.6027 | - | - | - |
| 0.5409 | 80000 | 0.6018 | 0.6046 | 0.7282 | - |
| 0.5416 | 80100 | 0.5906 | - | - | - |
| 0.5423 | 80200 | 0.5824 | - | - | - |
| 0.5429 | 80300 | 0.5971 | - | - | - |
| 0.5436 | 80400 | 0.6683 | - | - | - |
| 0.5443 | 80500 | 0.6331 | - | - | - |
| 0.5450 | 80600 | 0.6008 | - | - | - |
| 0.5456 | 80700 | 0.6628 | - | - | - |
| 0.5463 | 80800 | 0.5973 | - | - | - |
| 0.5470 | 80900 | 0.6765 | - | - | - |
| 0.5477 | 81000 | 0.6603 | - | - | - |
| 0.5483 | 81100 | 0.5987 | - | - | - |
| 0.5490 | 81200 | 0.5915 | - | - | - |
| 0.5497 | 81300 | 0.596 | - | - | - |
| 0.5504 | 81400 | 0.6053 | - | - | - |
| 0.5510 | 81500 | 0.6292 | - | - | - |
| 0.5517 | 81600 | 0.5678 | - | - | - |
| 0.5524 | 81700 | 0.6322 | - | - | - |
| 0.5531 | 81800 | 0.6004 | - | - | - |
| 0.5537 | 81900 | 0.6016 | - | - | - |
| 0.5544 | 82000 | 0.5989 | - | - | - |
| 0.5551 | 82100 | 0.6167 | - | - | - |
| 0.5558 | 82200 | 0.6094 | - | - | - |
| 0.5565 | 82300 | 0.6168 | - | - | - |
| 0.5571 | 82400 | 0.6085 | - | - | - |
| 0.5578 | 82500 | 0.6279 | - | - | - |
| 0.5585 | 82600 | 0.6032 | - | - | - |
| 0.5592 | 82700 | 0.5894 | - | - | - |
| 0.5598 | 82800 | 0.5738 | - | - | - |
| 0.5605 | 82900 | 0.675 | - | - | - |
| 0.5612 | 83000 | 0.5675 | - | - | - |
| 0.5619 | 83100 | 0.607 | - | - | - |
| 0.5625 | 83200 | 0.6119 | - | - | - |
| 0.5632 | 83300 | 0.6012 | - | - | - |
| 0.5639 | 83400 | 0.6348 | - | - | - |
| 0.5646 | 83500 | 0.5713 | - | - | - |
| 0.5652 | 83600 | 0.6091 | - | - | - |
| 0.5659 | 83700 | 0.5939 | - | - | - |
| 0.5666 | 83800 | 0.597 | - | - | - |
| 0.5673 | 83900 | 0.5814 | - | - | - |
| 0.5679 | 84000 | 0.656 | - | - | - |
| 0.5686 | 84100 | 0.5942 | - | - | - |
| 0.5693 | 84200 | 0.6431 | - | - | - |
| 0.5700 | 84300 | 0.5965 | - | - | - |
| 0.5707 | 84400 | 0.5977 | - | - | - |
| 0.5713 | 84500 | 0.6291 | - | - | - |
| 0.5720 | 84600 | 0.6457 | - | - | - |
| 0.5727 | 84700 | 0.637 | - | - | - |
| 0.5734 | 84800 | 0.5861 | - | - | - |
| 0.5740 | 84900 | 0.6334 | - | - | - |
| 0.5747 | 85000 | 0.6436 | 0.6067 | 0.7284 | - |
| 0.5754 | 85100 | 0.5756 | - | - | - |
| 0.5761 | 85200 | 0.6278 | - | - | - |
| 0.5767 | 85300 | 0.6198 | - | - | - |
| 0.5774 | 85400 | 0.5665 | - | - | - |
| 0.5781 | 85500 | 0.5766 | - | - | - |
| 0.5788 | 85600 | 0.6098 | - | - | - |
| 0.5794 | 85700 | 0.6054 | - | - | - |
| 0.5801 | 85800 | 0.6664 | - | - | - |
| 0.5808 | 85900 | 0.6086 | - | - | - |
| 0.5815 | 86000 | 0.6282 | - | - | - |
| 0.5821 | 86100 | 0.6393 | - | - | - |
| 0.5828 | 86200 | 0.5927 | - | - | - |
| 0.5835 | 86300 | 0.5718 | - | - | - |
| 0.5842 | 86400 | 0.6525 | - | - | - |
| 0.5849 | 86500 | 0.6253 | - | - | - |
| 0.5855 | 86600 | 0.6013 | - | - | - |
| 0.5862 | 86700 | 0.5895 | - | - | - |
| 0.5869 | 86800 | 0.6554 | - | - | - |
| 0.5876 | 86900 | 0.5854 | - | - | - |
| 0.5882 | 87000 | 0.5957 | - | - | - |
| 0.5889 | 87100 | 0.5893 | - | - | - |
| 0.5896 | 87200 | 0.5999 | - | - | - |
| 0.5903 | 87300 | 0.6045 | - | - | - |
| 0.5909 | 87400 | 0.5802 | - | - | - |
| 0.5916 | 87500 | 0.6172 | - | - | - |
| 0.5923 | 87600 | 0.5916 | - | - | - |
| 0.5930 | 87700 | 0.6331 | - | - | - |
| 0.5936 | 87800 | 0.6369 | - | - | - |
| 0.5943 | 87900 | 0.57 | - | - | - |
| 0.5950 | 88000 | 0.6162 | - | - | - |
| 0.5957 | 88100 | 0.5874 | - | - | - |
| 0.5963 | 88200 | 0.5545 | - | - | - |
| 0.5970 | 88300 | 0.6194 | - | - | - |
| 0.5977 | 88400 | 0.5856 | - | - | - |
| 0.5984 | 88500 | 0.6175 | - | - | - |
| 0.5990 | 88600 | 0.6045 | - | - | - |
| 0.5997 | 88700 | 0.6025 | - | - | - |
| 0.6004 | 88800 | 0.5826 | - | - | - |
| 0.6011 | 88900 | 0.6601 | - | - | - |
| 0.6018 | 89000 | 0.5775 | - | - | - |
| 0.6024 | 89100 | 0.6147 | - | - | - |
| 0.6031 | 89200 | 0.6425 | - | - | - |
| 0.6038 | 89300 | 0.6249 | - | - | - |
| 0.6045 | 89400 | 0.6077 | - | - | - |
| 0.6051 | 89500 | 0.6052 | - | - | - |
| 0.6058 | 89600 | 0.5881 | - | - | - |
| 0.6065 | 89700 | 0.6441 | - | - | - |
| 0.6072 | 89800 | 0.5686 | - | - | - |
| 0.6078 | 89900 | 0.6208 | - | - | - |
| 0.6085 | 90000 | 0.6262 | 0.5962 | 0.7290 | - |
| 0.6092 | 90100 | 0.5858 | - | - | - |
| 0.6099 | 90200 | 0.5632 | - | - | - |
| 0.6105 | 90300 | 0.6381 | - | - | - |
| 0.6112 | 90400 | 0.5926 | - | - | - |
| 0.6119 | 90500 | 0.6037 | - | - | - |
| 0.6126 | 90600 | 0.5921 | - | - | - |
| 0.6132 | 90700 | 0.6042 | - | - | - |
| 0.6139 | 90800 | 0.5751 | - | - | - |
| 0.6146 | 90900 | 0.6915 | - | - | - |
| 0.6153 | 91000 | 0.6356 | - | - | - |
| 0.6160 | 91100 | 0.5527 | - | - | - |
| 0.6166 | 91200 | 0.6945 | - | - | - |
| 0.6173 | 91300 | 0.5816 | - | - | - |
| 0.6180 | 91400 | 0.5905 | - | - | - |
| 0.6187 | 91500 | 0.5727 | - | - | - |
| 0.6193 | 91600 | 0.6347 | - | - | - |
| 0.6200 | 91700 | 0.6359 | - | - | - |
| 0.6207 | 91800 | 0.6003 | - | - | - |
| 0.6214 | 91900 | 0.578 | - | - | - |
| 0.6220 | 92000 | 0.5535 | - | - | - |
| 0.6227 | 92100 | 0.5671 | - | - | - |
| 0.6234 | 92200 | 0.5629 | - | - | - |
| 0.6241 | 92300 | 0.571 | - | - | - |
| 0.6247 | 92400 | 0.5791 | - | - | - |
| 0.6254 | 92500 | 0.6182 | - | - | - |
| 0.6261 | 92600 | 0.6103 | - | - | - |
| 0.6268 | 92700 | 0.5707 | - | - | - |
| 0.6274 | 92800 | 0.5786 | - | - | - |
| 0.6281 | 92900 | 0.554 | - | - | - |
| 0.6288 | 93000 | 0.5775 | - | - | - |
| 0.6295 | 93100 | 0.6026 | - | - | - |
| 0.6302 | 93200 | 0.5743 | - | - | - |
| 0.6308 | 93300 | 0.6418 | - | - | - |
| 0.6315 | 93400 | 0.5867 | - | - | - |
| 0.6322 | 93500 | 0.594 | - | - | - |
| 0.6329 | 93600 | 0.5203 | - | - | - |
| 0.6335 | 93700 | 0.5931 | - | - | - |
| 0.6342 | 93800 | 0.5703 | - | - | - |
| 0.6349 | 93900 | 0.5665 | - | - | - |
| 0.6356 | 94000 | 0.6185 | - | - | - |
| 0.6362 | 94100 | 0.6033 | - | - | - |
| 0.6369 | 94200 | 0.6003 | - | - | - |
| 0.6376 | 94300 | 0.61 | - | - | - |
| 0.6383 | 94400 | 0.6101 | - | - | - |
| 0.6389 | 94500 | 0.6051 | - | - | - |
| 0.6396 | 94600 | 0.5788 | - | - | - |
| 0.6403 | 94700 | 0.6017 | - | - | - |
| 0.6410 | 94800 | 0.6018 | - | - | - |
| 0.6416 | 94900 | 0.5726 | - | - | - |
| 0.6423 | 95000 | 0.594 | 0.5891 | 0.7249 | - |
| 0.6430 | 95100 | 0.5978 | - | - | - |
| 0.6437 | 95200 | 0.6216 | - | - | - |
| 0.6443 | 95300 | 0.6323 | - | - | - |
| 0.6450 | 95400 | 0.5357 | - | - | - |
| 0.6457 | 95500 | 0.5839 | - | - | - |
| 0.6464 | 95600 | 0.6459 | - | - | - |
| 0.6471 | 95700 | 0.5624 | - | - | - |
| 0.6477 | 95800 | 0.533 | - | - | - |
| 0.6484 | 95900 | 0.6307 | - | - | - |
| 0.6491 | 96000 | 0.616 | - | - | - |
| 0.6498 | 96100 | 0.6065 | - | - | - |
| 0.6504 | 96200 | 0.585 | - | - | - |
| 0.6511 | 96300 | 0.6208 | - | - | - |
| 0.6518 | 96400 | 0.6138 | - | - | - |
| 0.6525 | 96500 | 0.6185 | - | - | - |
| 0.6531 | 96600 | 0.6244 | - | - | - |
| 0.6538 | 96700 | 0.6085 | - | - | - |
| 0.6545 | 96800 | 0.6526 | - | - | - |
| 0.6552 | 96900 | 0.5471 | - | - | - |
| 0.6558 | 97000 | 0.6102 | - | - | - |
| 0.6565 | 97100 | 0.5853 | - | - | - |
| 0.6572 | 97200 | 0.6138 | - | - | - |
| 0.6579 | 97300 | 0.6025 | - | - | - |
| 0.6585 | 97400 | 0.6209 | - | - | - |
| 0.6592 | 97500 | 0.5849 | - | - | - |
| 0.6599 | 97600 | 0.5783 | - | - | - |
| 0.6606 | 97700 | 0.6042 | - | - | - |
| 0.6613 | 97800 | 0.5641 | - | - | - |
| 0.6619 | 97900 | 0.6084 | - | - | - |
| 0.6626 | 98000 | 0.5553 | - | - | - |
| 0.6633 | 98100 | 0.5948 | - | - | - |
| 0.6640 | 98200 | 0.5449 | - | - | - |
| 0.6646 | 98300 | 0.5889 | - | - | - |
| 0.6653 | 98400 | 0.6199 | - | - | - |
| 0.6660 | 98500 | 0.5621 | - | - | - |
| 0.6667 | 98600 | 0.5906 | - | - | - |
| 0.6673 | 98700 | 0.6085 | - | - | - |
| 0.6680 | 98800 | 0.5882 | - | - | - |
| 0.6687 | 98900 | 0.5827 | - | - | - |
| 0.6694 | 99000 | 0.5894 | - | - | - |
| 0.6700 | 99100 | 0.5856 | - | - | - |
| 0.6707 | 99200 | 0.5882 | - | - | - |
| 0.6714 | 99300 | 0.6242 | - | - | - |
| 0.6721 | 99400 | 0.5972 | - | - | - |
| 0.6727 | 99500 | 0.6286 | - | - | - |
| 0.6734 | 99600 | 0.6136 | - | - | - |
| 0.6741 | 99700 | 0.5609 | - | - | - |
| 0.6748 | 99800 | 0.5942 | - | - | - |
| 0.6755 | 99900 | 0.5529 | - | - | - |
| 0.6761 | 100000 | 0.6497 | 0.5823 | 0.7371 | - |
| 0.6768 | 100100 | 0.6292 | - | - | - |
| 0.6775 | 100200 | 0.5993 | - | - | - |
| 0.6782 | 100300 | 0.5609 | - | - | - |
| 0.6788 | 100400 | 0.578 | - | - | - |
| 0.6795 | 100500 | 0.634 | - | - | - |
| 0.6802 | 100600 | 0.6538 | - | - | - |
| 0.6809 | 100700 | 0.6005 | - | - | - |
| 0.6815 | 100800 | 0.6065 | - | - | - |
| 0.6822 | 100900 | 0.5853 | - | - | - |
| 0.6829 | 101000 | 0.6024 | - | - | - |
| 0.6836 | 101100 | 0.587 | - | - | - |
| 0.6842 | 101200 | 0.6135 | - | - | - |
| 0.6849 | 101300 | 0.6277 | - | - | - |
| 0.6856 | 101400 | 0.6031 | - | - | - |
| 0.6863 | 101500 | 0.6097 | - | - | - |
| 0.6869 | 101600 | 0.5853 | - | - | - |
| 0.6876 | 101700 | 0.5557 | - | - | - |
| 0.6883 | 101800 | 0.6153 | - | - | - |
| 0.6890 | 101900 | 0.6571 | - | - | - |
| 0.6897 | 102000 | 0.5962 | - | - | - |
| 0.6903 | 102100 | 0.6161 | - | - | - |
| 0.6910 | 102200 | 0.5817 | - | - | - |
| 0.6917 | 102300 | 0.617 | - | - | - |
| 0.6924 | 102400 | 0.5364 | - | - | - |
| 0.6930 | 102500 | 0.58 | - | - | - |
| 0.6937 | 102600 | 0.6076 | - | - | - |
| 0.6944 | 102700 | 0.5525 | - | - | - |
| 0.6951 | 102800 | 0.6226 | - | - | - |
| 0.6957 | 102900 | 0.6156 | - | - | - |
| 0.6964 | 103000 | 0.5889 | - | - | - |
| 0.6971 | 103100 | 0.5624 | - | - | - |
| 0.6978 | 103200 | 0.6526 | - | - | - |
| 0.6984 | 103300 | 0.5648 | - | - | - |
| 0.6991 | 103400 | 0.5939 | - | - | - |
| 0.6998 | 103500 | 0.5857 | - | - | - |
| 0.7005 | 103600 | 0.6231 | - | - | - |
| 0.7011 | 103700 | 0.5959 | - | - | - |
| 0.7018 | 103800 | 0.641 | - | - | - |
| 0.7025 | 103900 | 0.6118 | - | - | - |
| 0.7032 | 104000 | 0.6578 | - | - | - |
| 0.7038 | 104100 | 0.5524 | - | - | - |
| 0.7045 | 104200 | 0.5967 | - | - | - |
| 0.7052 | 104300 | 0.586 | - | - | - |
| 0.7059 | 104400 | 0.5776 | - | - | - |
| 0.7066 | 104500 | 0.5944 | - | - | - |
| 0.7072 | 104600 | 0.5675 | - | - | - |
| 0.7079 | 104700 | 0.5548 | - | - | - |
| 0.7086 | 104800 | 0.6153 | - | - | - |
| 0.7093 | 104900 | 0.5992 | - | - | - |
| 0.7099 | 105000 | 0.5789 | 0.5853 | 0.7318 | - |
| 0.7106 | 105100 | 0.5879 | - | - | - |
| 0.7113 | 105200 | 0.5815 | - | - | - |
| 0.7120 | 105300 | 0.5388 | - | - | - |
| 0.7126 | 105400 | 0.6104 | - | - | - |
| 0.7133 | 105500 | 0.586 | - | - | - |
| 0.7140 | 105600 | 0.5547 | - | - | - |
| 0.7147 | 105700 | 0.5529 | - | - | - |
| 0.7153 | 105800 | 0.5917 | - | - | - |
| 0.7160 | 105900 | 0.5689 | - | - | - |
| 0.7167 | 106000 | 0.6083 | - | - | - |
| 0.7174 | 106100 | 0.626 | - | - | - |
| 0.7180 | 106200 | 0.6076 | - | - | - |
| 0.7187 | 106300 | 0.5706 | - | - | - |
| 0.7194 | 106400 | 0.5976 | - | - | - |
| 0.7201 | 106500 | 0.5964 | - | - | - |
| 0.7208 | 106600 | 0.5841 | - | - | - |
| 0.7214 | 106700 | 0.5973 | - | - | - |
| 0.7221 | 106800 | 0.5978 | - | - | - |
| 0.7228 | 106900 | 0.5965 | - | - | - |
| 0.7235 | 107000 | 0.5934 | - | - | - |
| 0.7241 | 107100 | 0.5361 | - | - | - |
| 0.7248 | 107200 | 0.6005 | - | - | - |
| 0.7255 | 107300 | 0.5367 | - | - | - |
| 0.7262 | 107400 | 0.5863 | - | - | - |
| 0.7268 | 107500 | 0.5799 | - | - | - |
| 0.7275 | 107600 | 0.6288 | - | - | - |
| 0.7282 | 107700 | 0.5655 | - | - | - |
| 0.7289 | 107800 | 0.6095 | - | - | - |
| 0.7295 | 107900 | 0.5643 | - | - | - |
| 0.7302 | 108000 | 0.5704 | - | - | - |
| 0.7309 | 108100 | 0.5481 | - | - | - |
| 0.7316 | 108200 | 0.588 | - | - | - |
| 0.7322 | 108300 | 0.6065 | - | - | - |
| 0.7329 | 108400 | 0.5752 | - | - | - |
| 0.7336 | 108500 | 0.6316 | - | - | - |
| 0.7343 | 108600 | 0.5849 | - | - | - |
| 0.7350 | 108700 | 0.5968 | - | - | - |
| 0.7356 | 108800 | 0.6056 | - | - | - |
| 0.7363 | 108900 | 0.5976 | - | - | - |
| 0.7370 | 109000 | 0.6275 | - | - | - |
| 0.7377 | 109100 | 0.5933 | - | - | - |
| 0.7383 | 109200 | 0.5939 | - | - | - |
| 0.7390 | 109300 | 0.6135 | - | - | - |
| 0.7397 | 109400 | 0.5431 | - | - | - |
| 0.7404 | 109500 | 0.6265 | - | - | - |
| 0.7410 | 109600 | 0.6279 | - | - | - |
| 0.7417 | 109700 | 0.5668 | - | - | - |
| 0.7424 | 109800 | 0.5964 | - | - | - |
| 0.7431 | 109900 | 0.56 | - | - | - |
| 0.7437 | 110000 | 0.6061 | 0.5877 | 0.7244 | - |
| 0.7444 | 110100 | 0.6355 | - | - | - |
| 0.7451 | 110200 | 0.5443 | - | - | - |
| 0.7458 | 110300 | 0.6115 | - | - | - |
| 0.7464 | 110400 | 0.5828 | - | - | - |
| 0.7471 | 110500 | 0.598 | - | - | - |
| 0.7478 | 110600 | 0.572 | - | - | - |
| 0.7485 | 110700 | 0.611 | - | - | - |
| 0.7491 | 110800 | 0.5725 | - | - | - |
| 0.7498 | 110900 | 0.5722 | - | - | - |
| 0.7505 | 111000 | 0.5491 | - | - | - |
| 0.7512 | 111100 | 0.5647 | - | - | - |
| 0.7519 | 111200 | 0.6111 | - | - | - |
| 0.7525 | 111300 | 0.5597 | - | - | - |
| 0.7532 | 111400 | 0.5547 | - | - | - |
| 0.7539 | 111500 | 0.5672 | - | - | - |
| 0.7546 | 111600 | 0.5972 | - | - | - |
| 0.7552 | 111700 | 0.6053 | - | - | - |
| 0.7559 | 111800 | 0.5259 | - | - | - |
| 0.7566 | 111900 | 0.541 | - | - | - |
| 0.7573 | 112000 | 0.5516 | - | - | - |
| 0.7579 | 112100 | 0.5579 | - | - | - |
| 0.7586 | 112200 | 0.5843 | - | - | - |
| 0.7593 | 112300 | 0.6113 | - | - | - |
| 0.7600 | 112400 | 0.597 | - | - | - |
| 0.7606 | 112500 | 0.5951 | - | - | - |
| 0.7613 | 112600 | 0.5642 | - | - | - |
| 0.7620 | 112700 | 0.5787 | - | - | - |
| 0.7627 | 112800 | 0.6042 | - | - | - |
| 0.7633 | 112900 | 0.5876 | - | - | - |
| 0.7640 | 113000 | 0.6343 | - | - | - |
| 0.7647 | 113100 | 0.5725 | - | - | - |
| 0.7654 | 113200 | 0.5674 | - | - | - |
| 0.7661 | 113300 | 0.5957 | - | - | - |
| 0.7667 | 113400 | 0.6699 | - | - | - |
| 0.7674 | 113500 | 0.5619 | - | - | - |
| 0.7681 | 113600 | 0.5769 | - | - | - |
| 0.7688 | 113700 | 0.6329 | - | - | - |
| 0.7694 | 113800 | 0.6609 | - | - | - |
| 0.7701 | 113900 | 0.5893 | - | - | - |
| 0.7708 | 114000 | 0.5679 | - | - | - |
| 0.7715 | 114100 | 0.6012 | - | - | - |
| 0.7721 | 114200 | 0.5386 | - | - | - |
| 0.7728 | 114300 | 0.6282 | - | - | - |
| 0.7735 | 114400 | 0.5384 | - | - | - |
| 0.7742 | 114500 | 0.6082 | - | - | - |
| 0.7748 | 114600 | 0.5728 | - | - | - |
| 0.7755 | 114700 | 0.6041 | - | - | - |
| 0.7762 | 114800 | 0.5628 | - | - | - |
| 0.7769 | 114900 | 0.5847 | - | - | - |
| 0.7775 | 115000 | 0.5735 | 0.5785 | 0.7370 | - |
| 0.7782 | 115100 | 0.586 | - | - | - |
| 0.7789 | 115200 | 0.5692 | - | - | - |
| 0.7796 | 115300 | 0.6119 | - | - | - |
| 0.7803 | 115400 | 0.6128 | - | - | - |
| 0.7809 | 115500 | 0.6094 | - | - | - |
| 0.7816 | 115600 | 0.5753 | - | - | - |
| 0.7823 | 115700 | 0.5547 | - | - | - |
| 0.7830 | 115800 | 0.6574 | - | - | - |
| 0.7836 | 115900 | 0.5588 | - | - | - |
| 0.7843 | 116000 | 0.5797 | - | - | - |
| 0.7850 | 116100 | 0.5945 | - | - | - |
| 0.7857 | 116200 | 0.6008 | - | - | - |
| 0.7863 | 116300 | 0.6642 | - | - | - |
| 0.7870 | 116400 | 0.6693 | - | - | - |
| 0.7877 | 116500 | 0.5889 | - | - | - |
| 0.7884 | 116600 | 0.5822 | - | - | - |
| 0.7890 | 116700 | 0.6038 | - | - | - |
| 0.7897 | 116800 | 0.5356 | - | - | - |
| 0.7904 | 116900 | 0.5539 | - | - | - |
| 0.7911 | 117000 | 0.585 | - | - | - |
| 0.7917 | 117100 | 0.5612 | - | - | - |
| 0.7924 | 117200 | 0.5776 | - | - | - |
| 0.7931 | 117300 | 0.5997 | - | - | - |
| 0.7938 | 117400 | 0.5788 | - | - | - |
| 0.7945 | 117500 | 0.5468 | - | - | - |
| 0.7951 | 117600 | 0.6095 | - | - | - |
| 0.7958 | 117700 | 0.5922 | - | - | - |
| 0.7965 | 117800 | 0.5787 | - | - | - |
| 0.7972 | 117900 | 0.514 | - | - | - |
| 0.7978 | 118000 | 0.5866 | - | - | - |
| 0.7985 | 118100 | 0.5878 | - | - | - |
| 0.7992 | 118200 | 0.6085 | - | - | - |
| 0.7999 | 118300 | 0.608 | - | - | - |
| 0.8005 | 118400 | 0.6073 | - | - | - |
| 0.8012 | 118500 | 0.6014 | - | - | - |
| 0.8019 | 118600 | 0.6112 | - | - | - |
| 0.8026 | 118700 | 0.6029 | - | - | - |
| 0.8032 | 118800 | 0.6066 | - | - | - |
| 0.8039 | 118900 | 0.5594 | - | - | - |
| 0.8046 | 119000 | 0.5844 | - | - | - |
| 0.8053 | 119100 | 0.5943 | - | - | - |
| 0.8059 | 119200 | 0.5646 | - | - | - |
| 0.8066 | 119300 | 0.6438 | - | - | - |
| 0.8073 | 119400 | 0.5454 | - | - | - |
| 0.8080 | 119500 | 0.5899 | - | - | - |
| 0.8086 | 119600 | 0.5652 | - | - | - |
| 0.8093 | 119700 | 0.578 | - | - | - |
| 0.8100 | 119800 | 0.613 | - | - | - |
| 0.8107 | 119900 | 0.5346 | - | - | - |
| 0.8114 | 120000 | 0.6038 | 0.5812 | 0.7398 | - |
| 0.8120 | 120100 | 0.5886 | - | - | - |
| 0.8127 | 120200 | 0.5301 | - | - | - |
| 0.8134 | 120300 | 0.6578 | - | - | - |
| 0.8141 | 120400 | 0.6005 | - | - | - |
| 0.8147 | 120500 | 0.549 | - | - | - |
| 0.8154 | 120600 | 0.6004 | - | - | - |
| 0.8161 | 120700 | 0.5843 | - | - | - |
| 0.8168 | 120800 | 0.6028 | - | - | - |
| 0.8174 | 120900 | 0.6072 | - | - | - |
| 0.8181 | 121000 | 0.5894 | - | - | - |
| 0.8188 | 121100 | 0.5876 | - | - | - |
| 0.8195 | 121200 | 0.6424 | - | - | - |
| 0.8201 | 121300 | 0.575 | - | - | - |
| 0.8208 | 121400 | 0.5865 | - | - | - |
| 0.8215 | 121500 | 0.5518 | - | - | - |
| 0.8222 | 121600 | 0.6161 | - | - | - |
| 0.8228 | 121700 | 0.5586 | - | - | - |
| 0.8235 | 121800 | 0.5647 | - | - | - |
| 0.8242 | 121900 | 0.5604 | - | - | - |
| 0.8249 | 122000 | 0.5442 | - | - | - |
| 0.8256 | 122100 | 0.5922 | - | - | - |
| 0.8262 | 122200 | 0.5978 | - | - | - |
| 0.8269 | 122300 | 0.5598 | - | - | - |
| 0.8276 | 122400 | 0.6207 | - | - | - |
| 0.8283 | 122500 | 0.6166 | - | - | - |
| 0.8289 | 122600 | 0.5559 | - | - | - |
| 0.8296 | 122700 | 0.5559 | - | - | - |
| 0.8303 | 122800 | 0.5789 | - | - | - |
| 0.8310 | 122900 | 0.5594 | - | - | - |
| 0.8316 | 123000 | 0.6149 | - | - | - |
| 0.8323 | 123100 | 0.5921 | - | - | - |
| 0.8330 | 123200 | 0.6191 | - | - | - |
| 0.8337 | 123300 | 0.5552 | - | - | - |
| 0.8343 | 123400 | 0.5511 | - | - | - |
| 0.8350 | 123500 | 0.5625 | - | - | - |
| 0.8357 | 123600 | 0.6132 | - | - | - |
| 0.8364 | 123700 | 0.611 | - | - | - |
| 0.8370 | 123800 | 0.5488 | - | - | - |
| 0.8377 | 123900 | 0.5942 | - | - | - |
| 0.8384 | 124000 | 0.653 | - | - | - |
| 0.8391 | 124100 | 0.595 | - | - | - |
| 0.8398 | 124200 | 0.5888 | - | - | - |
| 0.8404 | 124300 | 0.638 | - | - | - |
| 0.8411 | 124400 | 0.6043 | - | - | - |
| 0.8418 | 124500 | 0.6013 | - | - | - |
| 0.8425 | 124600 | 0.5708 | - | - | - |
| 0.8431 | 124700 | 0.5368 | - | - | - |
| 0.8438 | 124800 | 0.6107 | - | - | - |
| 0.8445 | 124900 | 0.542 | - | - | - |
| 0.8452 | 125000 | 0.5732 | 0.5803 | 0.7451 | - |
| 0.8458 | 125100 | 0.5881 | - | - | - |
| 0.8465 | 125200 | 0.5454 | - | - | - |
| 0.8472 | 125300 | 0.6306 | - | - | - |
| 0.8479 | 125400 | 0.543 | - | - | - |
| 0.8485 | 125500 | 0.571 | - | - | - |
| 0.8492 | 125600 | 0.5825 | - | - | - |
| 0.8499 | 125700 | 0.5916 | - | - | - |
| 0.8506 | 125800 | 0.5481 | - | - | - |
| 0.8512 | 125900 | 0.5795 | - | - | - |
| 0.8519 | 126000 | 0.5811 | - | - | - |
| 0.8526 | 126100 | 0.5849 | - | - | - |
| 0.8533 | 126200 | 0.5474 | - | - | - |
| 0.8539 | 126300 | 0.5779 | - | - | - |
| 0.8546 | 126400 | 0.5853 | - | - | - |
| 0.8553 | 126500 | 0.575 | - | - | - |
| 0.8560 | 126600 | 0.5548 | - | - | - |
| 0.8567 | 126700 | 0.5429 | - | - | - |
| 0.8573 | 126800 | 0.5918 | - | - | - |
| 0.8580 | 126900 | 0.61 | - | - | - |
| 0.8587 | 127000 | 0.5896 | - | - | - |
| 0.8594 | 127100 | 0.5677 | - | - | - |
| 0.8600 | 127200 | 0.5705 | - | - | - |
| 0.8607 | 127300 | 0.5504 | - | - | - |
| 0.8614 | 127400 | 0.5399 | - | - | - |
| 0.8621 | 127500 | 0.5381 | - | - | - |
| 0.8627 | 127600 | 0.5228 | - | - | - |
| 0.8634 | 127700 | 0.602 | - | - | - |
| 0.8641 | 127800 | 0.6279 | - | - | - |
| 0.8648 | 127900 | 0.5489 | - | - | - |
| 0.8654 | 128000 | 0.5514 | - | - | - |
| 0.8661 | 128100 | 0.6084 | - | - | - |
| 0.8668 | 128200 | 0.5623 | - | - | - |
| 0.8675 | 128300 | 0.5566 | - | - | - |
| 0.8681 | 128400 | 0.5585 | - | - | - |
| 0.8688 | 128500 | 0.572 | - | - | - |
| 0.8695 | 128600 | 0.5958 | - | - | - |
| 0.8702 | 128700 | 0.5855 | - | - | - |
| 0.8709 | 128800 | 0.5529 | - | - | - |
| 0.8715 | 128900 | 0.5542 | - | - | - |
| 0.8722 | 129000 | 0.5765 | - | - | - |
| 0.8729 | 129100 | 0.6091 | - | - | - |
| 0.8736 | 129200 | 0.5828 | - | - | - |
| 0.8742 | 129300 | 0.5803 | - | - | - |
| 0.8749 | 129400 | 0.5688 | - | - | - |
| 0.8756 | 129500 | 0.593 | - | - | - |
| 0.8763 | 129600 | 0.5479 | - | - | - |
| 0.8769 | 129700 | 0.5336 | - | - | - |
| 0.8776 | 129800 | 0.5636 | - | - | - |
| 0.8783 | 129900 | 0.6156 | - | - | - |
| 0.8790 | 130000 | 0.5526 | 0.5621 | 0.7421 | - |
| 0.8796 | 130100 | 0.5444 | - | - | - |
| 0.8803 | 130200 | 0.5919 | - | - | - |
| 0.8810 | 130300 | 0.5816 | - | - | - |
| 0.8817 | 130400 | 0.5514 | - | - | - |
| 0.8823 | 130500 | 0.5948 | - | - | - |
| 0.8830 | 130600 | 0.6063 | - | - | - |
| 0.8837 | 130700 | 0.5105 | - | - | - |
| 0.8844 | 130800 | 0.5637 | - | - | - |
| 0.8851 | 130900 | 0.5382 | - | - | - |
| 0.8857 | 131000 | 0.5775 | - | - | - |
| 0.8864 | 131100 | 0.5647 | - | - | - |
| 0.8871 | 131200 | 0.5846 | - | - | - |
| 0.8878 | 131300 | 0.6211 | - | - | - |
| 0.8884 | 131400 | 0.5572 | - | - | - |
| 0.8891 | 131500 | 0.548 | - | - | - |
| 0.8898 | 131600 | 0.599 | - | - | - |
| 0.8905 | 131700 | 0.5746 | - | - | - |
| 0.8911 | 131800 | 0.5644 | - | - | - |
| 0.8918 | 131900 | 0.5848 | - | - | - |
| 0.8925 | 132000 | 0.5476 | - | - | - |
| 0.8932 | 132100 | 0.6046 | - | - | - |
| 0.8938 | 132200 | 0.5839 | - | - | - |
| 0.8945 | 132300 | 0.5945 | - | - | - |
| 0.8952 | 132400 | 0.5793 | - | - | - |
| 0.8959 | 132500 | 0.5561 | - | - | - |
| 0.8965 | 132600 | 0.591 | - | - | - |
| 0.8972 | 132700 | 0.5937 | - | - | - |
| 0.8979 | 132800 | 0.5432 | - | - | - |
| 0.8986 | 132900 | 0.5309 | - | - | - |
| 0.8993 | 133000 | 0.5357 | - | - | - |
| 0.8999 | 133100 | 0.5701 | - | - | - |
| 0.9006 | 133200 | 0.5971 | - | - | - |
| 0.9013 | 133300 | 0.5637 | - | - | - |
| 0.9020 | 133400 | 0.5646 | - | - | - |
| 0.9026 | 133500 | 0.5807 | - | - | - |
| 0.9033 | 133600 | 0.5386 | - | - | - |
| 0.9040 | 133700 | 0.5734 | - | - | - |
| 0.9047 | 133800 | 0.5247 | - | - | - |
| 0.9053 | 133900 | 0.5573 | - | - | - |
| 0.9060 | 134000 | 0.6363 | - | - | - |
| 0.9067 | 134100 | 0.6039 | - | - | - |
| 0.9074 | 134200 | 0.5799 | - | - | - |
| 0.9080 | 134300 | 0.589 | - | - | - |
| 0.9087 | 134400 | 0.6278 | - | - | - |
| 0.9094 | 134500 | 0.6219 | - | - | - |
| 0.9101 | 134600 | 0.5737 | - | - | - |
| 0.9107 | 134700 | 0.5468 | - | - | - |
| 0.9114 | 134800 | 0.5729 | - | - | - |
| 0.9121 | 134900 | 0.5563 | - | - | - |
| 0.9128 | 135000 | 0.5877 | 0.5689 | 0.7374 | - |
| 0.9134 | 135100 | 0.5632 | - | - | - |
| 0.9141 | 135200 | 0.5643 | - | - | - |
| 0.9148 | 135300 | 0.569 | - | - | - |
| 0.9155 | 135400 | 0.5753 | - | - | - |
| 0.9162 | 135500 | 0.5946 | - | - | - |
| 0.9168 | 135600 | 0.6021 | - | - | - |
| 0.9175 | 135700 | 0.5284 | - | - | - |
| 0.9182 | 135800 | 0.5633 | - | - | - |
| 0.9189 | 135900 | 0.5953 | - | - | - |
| 0.9195 | 136000 | 0.5964 | - | - | - |
| 0.9202 | 136100 | 0.5766 | - | - | - |
| 0.9209 | 136200 | 0.5626 | - | - | - |
| 0.9216 | 136300 | 0.5356 | - | - | - |
| 0.9222 | 136400 | 0.5728 | - | - | - |
| 0.9229 | 136500 | 0.6072 | - | - | - |
| 0.9236 | 136600 | 0.5217 | - | - | - |
| 0.9243 | 136700 | 0.5333 | - | - | - |
| 0.9249 | 136800 | 0.5603 | - | - | - |
| 0.9256 | 136900 | 0.5838 | - | - | - |
| 0.9263 | 137000 | 0.605 | - | - | - |
| 0.9270 | 137100 | 0.5549 | - | - | - |
| 0.9276 | 137200 | 0.5821 | - | - | - |
| 0.9283 | 137300 | 0.6145 | - | - | - |
| 0.9290 | 137400 | 0.5537 | - | - | - |
| 0.9297 | 137500 | 0.5394 | - | - | - |
| 0.9304 | 137600 | 0.5269 | - | - | - |
| 0.9310 | 137700 | 0.5888 | - | - | - |
| 0.9317 | 137800 | 0.5546 | - | - | - |
| 0.9324 | 137900 | 0.5634 | - | - | - |
| 0.9331 | 138000 | 0.5666 | - | - | - |
| 0.9337 | 138100 | 0.5502 | - | - | - |
| 0.9344 | 138200 | 0.5901 | - | - | - |
| 0.9351 | 138300 | 0.6067 | - | - | - |
| 0.9358 | 138400 | 0.5646 | - | - | - |
| 0.9364 | 138500 | 0.5516 | - | - | - |
| 0.9371 | 138600 | 0.5607 | - | - | - |
| 0.9378 | 138700 | 0.5544 | - | - | - |
| 0.9385 | 138800 | 0.5488 | - | - | - |
| 0.9391 | 138900 | 0.5658 | - | - | - |
| 0.9398 | 139000 | 0.5843 | - | - | - |
| 0.9405 | 139100 | 0.5226 | - | - | - |
| 0.9412 | 139200 | 0.5316 | - | - | - |
| 0.9418 | 139300 | 0.5717 | - | - | - |
| 0.9425 | 139400 | 0.5237 | - | - | - |
| 0.9432 | 139500 | 0.5836 | - | - | - |
| 0.9439 | 139600 | 0.5545 | - | - | - |
| 0.9446 | 139700 | 0.6058 | - | - | - |
| 0.9452 | 139800 | 0.5276 | - | - | - |
| 0.9459 | 139900 | 0.5628 | - | - | - |
| 0.9466 | 140000 | 0.5496 | 0.5703 | 0.7408 | - |
| 0.9473 | 140100 | 0.6136 | - | - | - |
| 0.9479 | 140200 | 0.6013 | - | - | - |
| 0.9486 | 140300 | 0.5359 | - | - | - |
| 0.9493 | 140400 | 0.5664 | - | - | - |
| 0.9500 | 140500 | 0.592 | - | - | - |
| 0.9506 | 140600 | 0.5637 | - | - | - |
| 0.9513 | 140700 | 0.5751 | - | - | - |
| 0.9520 | 140800 | 0.5819 | - | - | - |
| 0.9527 | 140900 | 0.5459 | - | - | - |
| 0.9533 | 141000 | 0.591 | - | - | - |
| 0.9540 | 141100 | 0.5685 | - | - | - |
| 0.9547 | 141200 | 0.5809 | - | - | - |
| 0.9554 | 141300 | 0.5362 | - | - | - |
| 0.9560 | 141400 | 0.5502 | - | - | - |
| 0.9567 | 141500 | 0.5653 | - | - | - |
| 0.9574 | 141600 | 0.557 | - | - | - |
| 0.9581 | 141700 | 0.5787 | - | - | - |
| 0.9587 | 141800 | 0.6126 | - | - | - |
| 0.9594 | 141900 | 0.5843 | - | - | - |
| 0.9601 | 142000 | 0.5397 | - | - | - |
| 0.9608 | 142100 | 0.5965 | - | - | - |
| 0.9615 | 142200 | 0.5748 | - | - | - |
| 0.9621 | 142300 | 0.5413 | - | - | - |
| 0.9628 | 142400 | 0.5295 | - | - | - |
| 0.9635 | 142500 | 0.6381 | - | - | - |
| 0.9642 | 142600 | 0.6071 | - | - | - |
| 0.9648 | 142700 | 0.5318 | - | - | - |
| 0.9655 | 142800 | 0.5855 | - | - | - |
| 0.9662 | 142900 | 0.6057 | - | - | - |
| 0.9669 | 143000 | 0.5785 | - | - | - |
| 0.9675 | 143100 | 0.5503 | - | - | - |
| 0.9682 | 143200 | 0.6102 | - | - | - |
| 0.9689 | 143300 | 0.5569 | - | - | - |
| 0.9696 | 143400 | 0.6124 | - | - | - |
| 0.9702 | 143500 | 0.5796 | - | - | - |
| 0.9709 | 143600 | 0.5253 | - | - | - |
| 0.9716 | 143700 | 0.5768 | - | - | - |
| 0.9723 | 143800 | 0.5543 | - | - | - |
| 0.9729 | 143900 | 0.5237 | - | - | - |
| 0.9736 | 144000 | 0.5858 | - | - | - |
| 0.9743 | 144100 | 0.5876 | - | - | - |
| 0.9750 | 144200 | 0.5428 | - | - | - |
| 0.9757 | 144300 | 0.5742 | - | - | - |
| 0.9763 | 144400 | 0.5611 | - | - | - |
| 0.9770 | 144500 | 0.6098 | - | - | - |
| 0.9777 | 144600 | 0.5868 | - | - | - |
| 0.9784 | 144700 | 0.5605 | - | - | - |
| 0.9790 | 144800 | 0.5429 | - | - | - |
| 0.9797 | 144900 | 0.5629 | - | - | - |
| 0.9804 | 145000 | 0.5973 | 0.5597 | 0.7456 | - |
| 0.9811 | 145100 | 0.5709 | - | - | - |
| 0.9817 | 145200 | 0.5527 | - | - | - |
| 0.9824 | 145300 | 0.5568 | - | - | - |
| 0.9831 | 145400 | 0.579 | - | - | - |
| 0.9838 | 145500 | 0.5927 | - | - | - |
| 0.9844 | 145600 | 0.55 | - | - | - |
| 0.9851 | 145700 | 0.5637 | - | - | - |
| 0.9858 | 145800 | 0.57 | - | - | - |
| 0.9865 | 145900 | 0.5708 | - | - | - |
| 0.9871 | 146000 | 0.5338 | - | - | - |
| 0.9878 | 146100 | 0.5808 | - | - | - |
| 0.9885 | 146200 | 0.5727 | - | - | - |
| 0.9892 | 146300 | 0.521 | - | - | - |
| 0.9899 | 146400 | 0.6102 | - | - | - |
| 0.9905 | 146500 | 0.5758 | - | - | - |
| 0.9912 | 146600 | 0.6229 | - | - | - |
| 0.9919 | 146700 | 0.5775 | - | - | - |
| 0.9926 | 146800 | 0.5339 | - | - | - |
| 0.9932 | 146900 | 0.5915 | - | - | - |
| 0.9939 | 147000 | 0.5699 | - | - | - |
| 0.9946 | 147100 | 0.5218 | - | - | - |
| 0.9953 | 147200 | 0.6229 | - | - | - |
| 0.9959 | 147300 | 0.5422 | - | - | - |
| 0.9966 | 147400 | 0.5498 | - | - | - |
| 0.9973 | 147500 | 0.5423 | - | - | - |
| 0.9980 | 147600 | 0.581 | - | - | - |
| 0.9986 | 147700 | 0.5645 | - | - | - |
| 0.9993 | 147800 | 0.5689 | - | - | - |
| 1.0000 | 147900 | 0.6141 | - | - | - |
| 1.0007 | 148000 | 0.5931 | - | - | - |
| 1.0013 | 148100 | 0.5535 | - | - | - |
| 1.0020 | 148200 | 0.5627 | - | - | - |
| 1.0027 | 148300 | 0.5359 | - | - | - |
| 1.0034 | 148400 | 0.5292 | - | - | - |
| 1.0041 | 148500 | 0.5492 | - | - | - |
| 1.0047 | 148600 | 0.6333 | - | - | - |
| 1.0054 | 148700 | 0.5251 | - | - | - |
| 1.0061 | 148800 | 0.6007 | - | - | - |
| 1.0068 | 148900 | 0.519 | - | - | - |
| 1.0074 | 149000 | 0.5598 | - | - | - |
| 1.0081 | 149100 | 0.5092 | - | - | - |
| 1.0088 | 149200 | 0.5574 | - | - | - |
| 1.0095 | 149300 | 0.5367 | - | - | - |
| 1.0101 | 149400 | 0.5998 | - | - | - |
| 1.0108 | 149500 | 0.5309 | - | - | - |
| 1.0115 | 149600 | 0.5655 | - | - | - |
| 1.0122 | 149700 | 0.5077 | - | - | - |
| 1.0128 | 149800 | 0.5394 | - | - | - |
| 1.0135 | 149900 | 0.5588 | - | - | - |
| 1.0142 | 150000 | 0.5825 | 0.5571 | 0.7405 | - |
| 1.0149 | 150100 | 0.5625 | - | - | - |
| 1.0155 | 150200 | 0.5948 | - | - | - |
| 1.0162 | 150300 | 0.5803 | - | - | - |
| 1.0169 | 150400 | 0.5913 | - | - | - |
| 1.0176 | 150500 | 0.5738 | - | - | - |
| 1.0182 | 150600 | 0.5224 | - | - | - |
| 1.0189 | 150700 | 0.5533 | - | - | - |
| 1.0196 | 150800 | 0.6178 | - | - | - |
| 1.0203 | 150900 | 0.5339 | - | - | - |
| 1.0210 | 151000 | 0.5251 | - | - | - |
| 1.0216 | 151100 | 0.591 | - | - | - |
| 1.0223 | 151200 | 0.5894 | - | - | - |
| 1.0230 | 151300 | 0.5544 | - | - | - |
| 1.0237 | 151400 | 0.5625 | - | - | - |
| 1.0243 | 151500 | 0.529 | - | - | - |
| 1.0250 | 151600 | 0.5158 | - | - | - |
| 1.0257 | 151700 | 0.5695 | - | - | - |
| 1.0264 | 151800 | 0.5773 | - | - | - |
| 1.0270 | 151900 | 0.532 | - | - | - |
| 1.0277 | 152000 | 0.5236 | - | - | - |
| 1.0284 | 152100 | 0.5429 | - | - | - |
| 1.0291 | 152200 | 0.5774 | - | - | - |
| 1.0297 | 152300 | 0.5734 | - | - | - |
| 1.0304 | 152400 | 0.5366 | - | - | - |
| 1.0311 | 152500 | 0.5817 | - | - | - |
| 1.0318 | 152600 | 0.6242 | - | - | - |
| 1.0324 | 152700 | 0.5737 | - | - | - |
| 1.0331 | 152800 | 0.5304 | - | - | - |
| 1.0338 | 152900 | 0.5344 | - | - | - |
| 1.0345 | 153000 | 0.5551 | - | - | - |
| 1.0352 | 153100 | 0.5626 | - | - | - |
| 1.0358 | 153200 | 0.5995 | - | - | - |
| 1.0365 | 153300 | 0.5674 | - | - | - |
| 1.0372 | 153400 | 0.6215 | - | - | - |
| 1.0379 | 153500 | 0.5527 | - | - | - |
| 1.0385 | 153600 | 0.5343 | - | - | - |
| 1.0392 | 153700 | 0.5977 | - | - | - |
| 1.0399 | 153800 | 0.5779 | - | - | - |
| 1.0406 | 153900 | 0.5175 | - | - | - |
| 1.0412 | 154000 | 0.6385 | - | - | - |
| 1.0419 | 154100 | 0.5362 | - | - | - |
| 1.0426 | 154200 | 0.5775 | - | - | - |
| 1.0433 | 154300 | 0.5637 | - | - | - |
| 1.0439 | 154400 | 0.5464 | - | - | - |
| 1.0446 | 154500 | 0.5803 | - | - | - |
| 1.0453 | 154600 | 0.5343 | - | - | - |
| 1.0460 | 154700 | 0.5492 | - | - | - |
| 1.0466 | 154800 | 0.5484 | - | - | - |
| 1.0473 | 154900 | 0.5358 | - | - | - |
| 1.0480 | 155000 | 0.5792 | 0.5546 | 0.7406 | - |
| 1.0487 | 155100 | 0.5966 | - | - | - |
| 1.0494 | 155200 | 0.579 | - | - | - |
| 1.0500 | 155300 | 0.5505 | - | - | - |
| 1.0507 | 155400 | 0.5519 | - | - | - |
| 1.0514 | 155500 | 0.5893 | - | - | - |
| 1.0521 | 155600 | 0.5946 | - | - | - |
| 1.0527 | 155700 | 0.5467 | - | - | - |
| 1.0534 | 155800 | 0.5249 | - | - | - |
| 1.0541 | 155900 | 0.5478 | - | - | - |
| 1.0548 | 156000 | 0.5596 | - | - | - |
| 1.0554 | 156100 | 0.518 | - | - | - |
| 1.0561 | 156200 | 0.5749 | - | - | - |
| 1.0568 | 156300 | 0.5189 | - | - | - |
| 1.0575 | 156400 | 0.5862 | - | - | - |
| 1.0581 | 156500 | 0.5523 | - | - | - |
| 1.0588 | 156600 | 0.519 | - | - | - |
| 1.0595 | 156700 | 0.5482 | - | - | - |
| 1.0602 | 156800 | 0.557 | - | - | - |
| 1.0608 | 156900 | 0.537 | - | - | - |
| 1.0615 | 157000 | 0.5545 | - | - | - |
| 1.0622 | 157100 | 0.5855 | - | - | - |
| 1.0629 | 157200 | 0.5448 | - | - | - |
| 1.0635 | 157300 | 0.5505 | - | - | - |
| 1.0642 | 157400 | 0.6443 | - | - | - |
| 1.0649 | 157500 | 0.5395 | - | - | - |
| 1.0656 | 157600 | 0.5876 | - | - | - |
| 1.0663 | 157700 | 0.5593 | - | - | - |
| 1.0669 | 157800 | 0.589 | - | - | - |
| 1.0676 | 157900 | 0.5527 | - | - | - |
| 1.0683 | 158000 | 0.5871 | - | - | - |
| 1.0690 | 158100 | 0.5496 | - | - | - |
| 1.0696 | 158200 | 0.5345 | - | - | - |
| 1.0703 | 158300 | 0.5721 | - | - | - |
| 1.0710 | 158400 | 0.533 | - | - | - |
| 1.0717 | 158500 | 0.5228 | - | - | - |
| 1.0723 | 158600 | 0.5522 | - | - | - |
| 1.0730 | 158700 | 0.536 | - | - | - |
| 1.0737 | 158800 | 0.5981 | - | - | - |
| 1.0744 | 158900 | 0.5388 | - | - | - |
| 1.0750 | 159000 | 0.537 | - | - | - |
| 1.0757 | 159100 | 0.5234 | - | - | - |
| 1.0764 | 159200 | 0.6104 | - | - | - |
| 1.0771 | 159300 | 0.4955 | - | - | - |
| 1.0777 | 159400 | 0.5346 | - | - | - |
| 1.0784 | 159500 | 0.5705 | - | - | - |
| 1.0791 | 159600 | 0.592 | - | - | - |
| 1.0798 | 159700 | 0.5422 | - | - | - |
| 1.0805 | 159800 | 0.5283 | - | - | - |
| 1.0811 | 159900 | 0.5883 | - | - | - |
| 1.0818 | 160000 | 0.5581 | 0.5527 | 0.7450 | - |
| 1.0825 | 160100 | 0.5364 | - | - | - |
| 1.0832 | 160200 | 0.486 | - | - | - |
| 1.0838 | 160300 | 0.5753 | - | - | - |
| 1.0845 | 160400 | 0.5096 | - | - | - |
| 1.0852 | 160500 | 0.5367 | - | - | - |
| 1.0859 | 160600 | 0.5158 | - | - | - |
| 1.0865 | 160700 | 0.5538 | - | - | - |
| 1.0872 | 160800 | 0.5477 | - | - | - |
| 1.0879 | 160900 | 0.5883 | - | - | - |
| 1.0886 | 161000 | 0.556 | - | - | - |
| 1.0892 | 161100 | 0.5753 | - | - | - |
| 1.0899 | 161200 | 0.5756 | - | - | - |
| 1.0906 | 161300 | 0.554 | - | - | - |
| 1.0913 | 161400 | 0.5293 | - | - | - |
| 1.0919 | 161500 | 0.5302 | - | - | - |
| 1.0926 | 161600 | 0.5525 | - | - | - |
| 1.0933 | 161700 | 0.5768 | - | - | - |
| 1.0940 | 161800 | 0.5067 | - | - | - |
| 1.0947 | 161900 | 0.5414 | - | - | - |
| 1.0953 | 162000 | 0.5191 | - | - | - |
| 1.0960 | 162100 | 0.5063 | - | - | - |
| 1.0967 | 162200 | 0.5149 | - | - | - |
| 1.0974 | 162300 | 0.5338 | - | - | - |
| 1.0980 | 162400 | 0.5768 | - | - | - |
| 1.0987 | 162500 | 0.5729 | - | - | - |
| 1.0994 | 162600 | 0.5536 | - | - | - |
| 1.1001 | 162700 | 0.5441 | - | - | - |
| 1.1007 | 162800 | 0.5603 | - | - | - |
| 1.1014 | 162900 | 0.5472 | - | - | - |
| 1.1021 | 163000 | 0.5338 | - | - | - |
| 1.1028 | 163100 | 0.4892 | - | - | - |
| 1.1034 | 163200 | 0.4997 | - | - | - |
| 1.1041 | 163300 | 0.5506 | - | - | - |
| 1.1048 | 163400 | 0.5021 | - | - | - |
| 1.1055 | 163500 | 0.5376 | - | - | - |
| 1.1061 | 163600 | 0.5228 | - | - | - |
| 1.1068 | 163700 | 0.5086 | - | - | - |
| 1.1075 | 163800 | 0.5312 | - | - | - |
| 1.1082 | 163900 | 0.5269 | - | - | - |
| 1.1088 | 164000 | 0.5312 | - | - | - |
| 1.1095 | 164100 | 0.5945 | - | - | - |
| 1.1102 | 164200 | 0.5226 | - | - | - |
| 1.1109 | 164300 | 0.542 | - | - | - |
| 1.1116 | 164400 | 0.5335 | - | - | - |
| 1.1122 | 164500 | 0.5272 | - | - | - |
| 1.1129 | 164600 | 0.5338 | - | - | - |
| 1.1136 | 164700 | 0.5255 | - | - | - |
| 1.1143 | 164800 | 0.5214 | - | - | - |
| 1.1149 | 164900 | 0.5167 | - | - | - |
| 1.1156 | 165000 | 0.5329 | 0.5586 | 0.7433 | - |
| 1.1163 | 165100 | 0.5169 | - | - | - |
| 1.1170 | 165200 | 0.539 | - | - | - |
| 1.1176 | 165300 | 0.6029 | - | - | - |
| 1.1183 | 165400 | 0.5752 | - | - | - |
| 1.1190 | 165500 | 0.5282 | - | - | - |
| 1.1197 | 165600 | 0.5613 | - | - | - |
| 1.1203 | 165700 | 0.5063 | - | - | - |
| 1.1210 | 165800 | 0.548 | - | - | - |
| 1.1217 | 165900 | 0.6063 | - | - | - |
| 1.1224 | 166000 | 0.5259 | - | - | - |
| 1.1230 | 166100 | 0.5241 | - | - | - |
| 1.1237 | 166200 | 0.5196 | - | - | - |
| 1.1244 | 166300 | 0.5279 | - | - | - |
| 1.1251 | 166400 | 0.5688 | - | - | - |
| 1.1258 | 166500 | 0.5726 | - | - | - |
| 1.1264 | 166600 | 0.5274 | - | - | - |
| 1.1271 | 166700 | 0.5148 | - | - | - |
| 1.1278 | 166800 | 0.5341 | - | - | - |
| 1.1285 | 166900 | 0.5716 | - | - | - |
| 1.1291 | 167000 | 0.5626 | - | - | - |
| 1.1298 | 167100 | 0.511 | - | - | - |
| 1.1305 | 167200 | 0.5732 | - | - | - |
| 1.1312 | 167300 | 0.5757 | - | - | - |
| 1.1318 | 167400 | 0.5414 | - | - | - |
| 1.1325 | 167500 | 0.5578 | - | - | - |
| 1.1332 | 167600 | 0.549 | - | - | - |
| 1.1339 | 167700 | 0.5614 | - | - | - |
| 1.1345 | 167800 | 0.56 | - | - | - |
| 1.1352 | 167900 | 0.5886 | - | - | - |
| 1.1359 | 168000 | 0.5377 | - | - | - |
| 1.1366 | 168100 | 0.5485 | - | - | - |
| 1.1372 | 168200 | 0.5551 | - | - | - |
| 1.1379 | 168300 | 0.5328 | - | - | - |
| 1.1386 | 168400 | 0.5026 | - | - | - |
| 1.1393 | 168500 | 0.5077 | - | - | - |
| 1.1400 | 168600 | 0.531 | - | - | - |
| 1.1406 | 168700 | 0.5434 | - | - | - |
| 1.1413 | 168800 | 0.5432 | - | - | - |
| 1.1420 | 168900 | 0.529 | - | - | - |
| 1.1427 | 169000 | 0.5093 | - | - | - |
| 1.1433 | 169100 | 0.5607 | - | - | - |
| 1.1440 | 169200 | 0.5733 | - | - | - |
| 1.1447 | 169300 | 0.5188 | - | - | - |
| 1.1454 | 169400 | 0.5043 | - | - | - |
| 1.1460 | 169500 | 0.5414 | - | - | - |
| 1.1467 | 169600 | 0.5555 | - | - | - |
| 1.1474 | 169700 | 0.4951 | - | - | - |
| 1.1481 | 169800 | 0.556 | - | - | - |
| 1.1487 | 169900 | 0.5992 | - | - | - |
| 1.1494 | 170000 | 0.4878 | 0.5431 | 0.7544 | - |
| 1.1501 | 170100 | 0.5739 | - | - | - |
| 1.1508 | 170200 | 0.5282 | - | - | - |
| 1.1514 | 170300 | 0.5183 | - | - | - |
| 1.1521 | 170400 | 0.523 | - | - | - |
| 1.1528 | 170500 | 0.5328 | - | - | - |
| 1.1535 | 170600 | 0.544 | - | - | - |
| 1.1542 | 170700 | 0.5604 | - | - | - |
| 1.1548 | 170800 | 0.5117 | - | - | - |
| 1.1555 | 170900 | 0.5076 | - | - | - |
| 1.1562 | 171000 | 0.5517 | - | - | - |
| 1.1569 | 171100 | 0.561 | - | - | - |
| 1.1575 | 171200 | 0.5558 | - | - | - |
| 1.1582 | 171300 | 0.5815 | - | - | - |
| 1.1589 | 171400 | 0.5324 | - | - | - |
| 1.1596 | 171500 | 0.5203 | - | - | - |
| 1.1602 | 171600 | 0.5398 | - | - | - |
| 1.1609 | 171700 | 0.5357 | - | - | - |
| 1.1616 | 171800 | 0.5715 | - | - | - |
| 1.1623 | 171900 | 0.5615 | - | - | - |
| 1.1629 | 172000 | 0.512 | - | - | - |
| 1.1636 | 172100 | 0.5073 | - | - | - |
| 1.1643 | 172200 | 0.5361 | - | - | - |
| 1.1650 | 172300 | 0.5462 | - | - | - |
| 1.1656 | 172400 | 0.5133 | - | - | - |
| 1.1663 | 172500 | 0.5151 | - | - | - |
| 1.1670 | 172600 | 0.5656 | - | - | - |
| 1.1677 | 172700 | 0.5256 | - | - | - |
| 1.1683 | 172800 | 0.5367 | - | - | - |
| 1.1690 | 172900 | 0.5146 | - | - | - |
| 1.1697 | 173000 | 0.5255 | - | - | - |
| 1.1704 | 173100 | 0.5159 | - | - | - |
| 1.1711 | 173200 | 0.5155 | - | - | - |
| 1.1717 | 173300 | 0.5079 | - | - | - |
| 1.1724 | 173400 | 0.5244 | - | - | - |
| 1.1731 | 173500 | 0.5401 | - | - | - |
| 1.1738 | 173600 | 0.5169 | - | - | - |
| 1.1744 | 173700 | 0.559 | - | - | - |
| 1.1751 | 173800 | 0.5211 | - | - | - |
| 1.1758 | 173900 | 0.5577 | - | - | - |
| 1.1765 | 174000 | 0.5511 | - | - | - |
| 1.1771 | 174100 | 0.4914 | - | - | - |
| 1.1778 | 174200 | 0.5643 | - | - | - |
| 1.1785 | 174300 | 0.5803 | - | - | - |
| 1.1792 | 174400 | 0.5278 | - | - | - |
| 1.1798 | 174500 | 0.5454 | - | - | - |
| 1.1805 | 174600 | 0.5288 | - | - | - |
| 1.1812 | 174700 | 0.504 | - | - | - |
| 1.1819 | 174800 | 0.5206 | - | - | - |
| 1.1825 | 174900 | 0.5291 | - | - | - |
| 1.1832 | 175000 | 0.5916 | 0.5452 | 0.7461 | - |
| 1.1839 | 175100 | 0.5214 | - | - | - |
| 1.1846 | 175200 | 0.4779 | - | - | - |
| 1.1853 | 175300 | 0.5714 | - | - | - |
| 1.1859 | 175400 | 0.5312 | - | - | - |
| 1.1866 | 175500 | 0.5032 | - | - | - |
| 1.1873 | 175600 | 0.5123 | - | - | - |
| 1.1880 | 175700 | 0.5104 | - | - | - |
| 1.1886 | 175800 | 0.4907 | - | - | - |
| 1.1893 | 175900 | 0.5474 | - | - | - |
| 1.1900 | 176000 | 0.5295 | - | - | - |
| 1.1907 | 176100 | 0.4825 | - | - | - |
| 1.1913 | 176200 | 0.5667 | - | - | - |
| 1.1920 | 176300 | 0.4914 | - | - | - |
| 1.1927 | 176400 | 0.5405 | - | - | - |
| 1.1934 | 176500 | 0.5322 | - | - | - |
| 1.1940 | 176600 | 0.4958 | - | - | - |
| 1.1947 | 176700 | 0.477 | - | - | - |
| 1.1954 | 176800 | 0.4622 | - | - | - |
| 1.1961 | 176900 | 0.5154 | - | - | - |
| 1.1967 | 177000 | 0.487 | - | - | - |
| 1.1974 | 177100 | 0.5569 | - | - | - |
| 1.1981 | 177200 | 0.535 | - | - | - |
| 1.1988 | 177300 | 0.5247 | - | - | - |
| 1.1995 | 177400 | 0.4922 | - | - | - |
| 1.2001 | 177500 | 0.5122 | - | - | - |
| 1.2008 | 177600 | 0.5189 | - | - | - |
| 1.2015 | 177700 | 0.4848 | - | - | - |
| 1.2022 | 177800 | 0.4975 | - | - | - |
| 1.2028 | 177900 | 0.5344 | - | - | - |
| 1.2035 | 178000 | 0.5301 | - | - | - |
| 1.2042 | 178100 | 0.5166 | - | - | - |
| 1.2049 | 178200 | 0.4858 | - | - | - |
| 1.2055 | 178300 | 0.5154 | - | - | - |
| 1.2062 | 178400 | 0.5423 | - | - | - |
| 1.2069 | 178500 | 0.481 | - | - | - |
| 1.2076 | 178600 | 0.5136 | - | - | - |
| 1.2082 | 178700 | 0.5079 | - | - | - |
| 1.2089 | 178800 | 0.5349 | - | - | - |
| 1.2096 | 178900 | 0.5221 | - | - | - |
| 1.2103 | 179000 | 0.4971 | - | - | - |
| 1.2109 | 179100 | 0.5115 | - | - | - |
| 1.2116 | 179200 | 0.5045 | - | - | - |
| 1.2123 | 179300 | 0.5347 | - | - | - |
| 1.2130 | 179400 | 0.5109 | - | - | - |
| 1.2136 | 179500 | 0.5631 | - | - | - |
| 1.2143 | 179600 | 0.5074 | - | - | - |
| 1.2150 | 179700 | 0.534 | - | - | - |
| 1.2157 | 179800 | 0.4971 | - | - | - |
| 1.2164 | 179900 | 0.4885 | - | - | - |
| 1.2170 | 180000 | 0.5197 | 0.5378 | 0.7541 | - |
| 1.2177 | 180100 | 0.5427 | - | - | - |
| 1.2184 | 180200 | 0.5506 | - | - | - |
| 1.2191 | 180300 | 0.5021 | - | - | - |
| 1.2197 | 180400 | 0.5473 | - | - | - |
| 1.2204 | 180500 | 0.5208 | - | - | - |
| 1.2211 | 180600 | 0.488 | - | - | - |
| 1.2218 | 180700 | 0.5462 | - | - | - |
| 1.2224 | 180800 | 0.5287 | - | - | - |
| 1.2231 | 180900 | 0.521 | - | - | - |
| 1.2238 | 181000 | 0.5336 | - | - | - |
| 1.2245 | 181100 | 0.5672 | - | - | - |
| 1.2251 | 181200 | 0.497 | - | - | - |
| 1.2258 | 181300 | 0.5271 | - | - | - |
| 1.2265 | 181400 | 0.5087 | - | - | - |
| 1.2272 | 181500 | 0.5035 | - | - | - |
| 1.2278 | 181600 | 0.4994 | - | - | - |
| 1.2285 | 181700 | 0.5211 | - | - | - |
| 1.2292 | 181800 | 0.5013 | - | - | - |
| 1.2299 | 181900 | 0.544 | - | - | - |
| 1.2306 | 182000 | 0.5325 | - | - | - |
| 1.2312 | 182100 | 0.5327 | - | - | - |
| 1.2319 | 182200 | 0.4875 | - | - | - |
| 1.2326 | 182300 | 0.5253 | - | - | - |
| 1.2333 | 182400 | 0.5389 | - | - | - |
| 1.2339 | 182500 | 0.5043 | - | - | - |
| 1.2346 | 182600 | 0.5292 | - | - | - |
| 1.2353 | 182700 | 0.5523 | - | - | - |
| 1.2360 | 182800 | 0.4971 | - | - | - |
| 1.2366 | 182900 | 0.5154 | - | - | - |
| 1.2373 | 183000 | 0.4666 | - | - | - |
| 1.2380 | 183100 | 0.4855 | - | - | - |
| 1.2387 | 183200 | 0.5284 | - | - | - |
| 1.2393 | 183300 | 0.5296 | - | - | - |
| 1.2400 | 183400 | 0.4876 | - | - | - |
| 1.2407 | 183500 | 0.5054 | - | - | - |
| 1.2414 | 183600 | 0.5402 | - | - | - |
| 1.2420 | 183700 | 0.5051 | - | - | - |
| 1.2427 | 183800 | 0.5287 | - | - | - |
| 1.2434 | 183900 | 0.5191 | - | - | - |
| 1.2441 | 184000 | 0.5042 | - | - | - |
| 1.2448 | 184100 | 0.5091 | - | - | - |
| 1.2454 | 184200 | 0.4801 | - | - | - |
| 1.2461 | 184300 | 0.4972 | - | - | - |
| 1.2468 | 184400 | 0.5532 | - | - | - |
| 1.2475 | 184500 | 0.5381 | - | - | - |
| 1.2481 | 184600 | 0.5417 | - | - | - |
| 1.2488 | 184700 | 0.4954 | - | - | - |
| 1.2495 | 184800 | 0.5088 | - | - | - |
| 1.2502 | 184900 | 0.4964 | - | - | - |
| 1.2508 | 185000 | 0.5161 | 0.5448 | 0.7559 | - |
| 1.2515 | 185100 | 0.5391 | - | - | - |
| 1.2522 | 185200 | 0.483 | - | - | - |
| 1.2529 | 185300 | 0.5064 | - | - | - |
| 1.2535 | 185400 | 0.5486 | - | - | - |
| 1.2542 | 185500 | 0.4959 | - | - | - |
| 1.2549 | 185600 | 0.5394 | - | - | - |
| 1.2556 | 185700 | 0.4586 | - | - | - |
| 1.2562 | 185800 | 0.4634 | - | - | - |
| 1.2569 | 185900 | 0.5228 | - | - | - |
| 1.2576 | 186000 | 0.5378 | - | - | - |
| 1.2583 | 186100 | 0.5836 | - | - | - |
| 1.2590 | 186200 | 0.5087 | - | - | - |
| 1.2596 | 186300 | 0.4947 | - | - | - |
| 1.2603 | 186400 | 0.4844 | - | - | - |
| 1.2610 | 186500 | 0.5182 | - | - | - |
| 1.2617 | 186600 | 0.4888 | - | - | - |
| 1.2623 | 186700 | 0.4508 | - | - | - |
| 1.2630 | 186800 | 0.5666 | - | - | - |
| 1.2637 | 186900 | 0.4936 | - | - | - |
| 1.2644 | 187000 | 0.5228 | - | - | - |
| 1.2650 | 187100 | 0.4783 | - | - | - |
| 1.2657 | 187200 | 0.4913 | - | - | - |
| 1.2664 | 187300 | 0.4682 | - | - | - |
| 1.2671 | 187400 | 0.509 | - | - | - |
| 1.2677 | 187500 | 0.4874 | - | - | - |
| 1.2684 | 187600 | 0.5208 | - | - | - |
| 1.2691 | 187700 | 0.5469 | - | - | - |
| 1.2698 | 187800 | 0.4704 | - | - | - |
| 1.2704 | 187900 | 0.5463 | - | - | - |
| 1.2711 | 188000 | 0.495 | - | - | - |
| 1.2718 | 188100 | 0.5149 | - | - | - |
| 1.2725 | 188200 | 0.5084 | - | - | - |
| 1.2731 | 188300 | 0.4425 | - | - | - |
| 1.2738 | 188400 | 0.5116 | - | - | - |
| 1.2745 | 188500 | 0.5056 | - | - | - |
| 1.2752 | 188600 | 0.4759 | - | - | - |
| 1.2759 | 188700 | 0.4927 | - | - | - |
| 1.2765 | 188800 | 0.5099 | - | - | - |
| 1.2772 | 188900 | 0.494 | - | - | - |
| 1.2779 | 189000 | 0.5103 | - | - | - |
| 1.2786 | 189100 | 0.5301 | - | - | - |
| 1.2792 | 189200 | 0.5205 | - | - | - |
| 1.2799 | 189300 | 0.4583 | - | - | - |
| 1.2806 | 189400 | 0.5008 | - | - | - |
| 1.2813 | 189500 | 0.4943 | - | - | - |
| 1.2819 | 189600 | 0.4938 | - | - | - |
| 1.2826 | 189700 | 0.5782 | - | - | - |
| 1.2833 | 189800 | 0.5149 | - | - | - |
| 1.2840 | 189900 | 0.5482 | - | - | - |
| 1.2846 | 190000 | 0.4619 | 0.5428 | 0.7525 | - |
| 1.2853 | 190100 | 0.4846 | - | - | - |
| 1.2860 | 190200 | 0.469 | - | - | - |
| 1.2867 | 190300 | 0.4997 | - | - | - |
| 1.2873 | 190400 | 0.4967 | - | - | - |
| 1.2880 | 190500 | 0.4953 | - | - | - |
| 1.2887 | 190600 | 0.5419 | - | - | - |
| 1.2894 | 190700 | 0.4935 | - | - | - |
| 1.2901 | 190800 | 0.5141 | - | - | - |
| 1.2907 | 190900 | 0.4803 | - | - | - |
| 1.2914 | 191000 | 0.458 | - | - | - |
| 1.2921 | 191100 | 0.4836 | - | - | - |
| 1.2928 | 191200 | 0.4859 | - | - | - |
| 1.2934 | 191300 | 0.485 | - | - | - |
| 1.2941 | 191400 | 0.4762 | - | - | - |
| 1.2948 | 191500 | 0.5488 | - | - | - |
| 1.2955 | 191600 | 0.4921 | - | - | - |
| 1.2961 | 191700 | 0.5127 | - | - | - |
| 1.2968 | 191800 | 0.4515 | - | - | - |
| 1.2975 | 191900 | 0.5212 | - | - | - |
| 1.2982 | 192000 | 0.4545 | - | - | - |
| 1.2988 | 192100 | 0.4977 | - | - | - |
| 1.2995 | 192200 | 0.5078 | - | - | - |
| 1.3002 | 192300 | 0.4938 | - | - | - |
| 1.3009 | 192400 | 0.5292 | - | - | - |
| 1.3015 | 192500 | 0.503 | - | - | - |
| 1.3022 | 192600 | 0.4928 | - | - | - |
| 1.3029 | 192700 | 0.5225 | - | - | - |
| 1.3036 | 192800 | 0.4352 | - | - | - |
| 1.3043 | 192900 | 0.4906 | - | - | - |
| 1.3049 | 193000 | 0.4871 | - | - | - |
| 1.3056 | 193100 | 0.5293 | - | - | - |
| 1.3063 | 193200 | 0.5319 | - | - | - |
| 1.3070 | 193300 | 0.5273 | - | - | - |
| 1.3076 | 193400 | 0.4965 | - | - | - |
| 1.3083 | 193500 | 0.485 | - | - | - |
| 1.3090 | 193600 | 0.5279 | - | - | - |
| 1.3097 | 193700 | 0.4996 | - | - | - |
| 1.3103 | 193800 | 0.4763 | - | - | - |
| 1.3110 | 193900 | 0.5496 | - | - | - |
| 1.3117 | 194000 | 0.5104 | - | - | - |
| 1.3124 | 194100 | 0.4664 | - | - | - |
| 1.3130 | 194200 | 0.4913 | - | - | - |
| 1.3137 | 194300 | 0.4837 | - | - | - |
| 1.3144 | 194400 | 0.5023 | - | - | - |
| 1.3151 | 194500 | 0.4961 | - | - | - |
| 1.3157 | 194600 | 0.5201 | - | - | - |
| 1.3164 | 194700 | 0.5071 | - | - | - |
| 1.3171 | 194800 | 0.5162 | - | - | - |
| 1.3178 | 194900 | 0.4915 | - | - | - |
| 1.3184 | 195000 | 0.4853 | 0.5496 | 0.7555 | - |
| 1.3191 | 195100 | 0.5355 | - | - | - |
| 1.3198 | 195200 | 0.4819 | - | - | - |
| 1.3205 | 195300 | 0.5133 | - | - | - |
| 1.3212 | 195400 | 0.5023 | - | - | - |
| 1.3218 | 195500 | 0.4849 | - | - | - |
| 1.3225 | 195600 | 0.5129 | - | - | - |
| 1.3232 | 195700 | 0.5341 | - | - | - |
| 1.3239 | 195800 | 0.4105 | - | - | - |
| 1.3245 | 195900 | 0.4616 | - | - | - |
| 1.3252 | 196000 | 0.4865 | - | - | - |
| 1.3259 | 196100 | 0.5203 | - | - | - |
| 1.3266 | 196200 | 0.5589 | - | - | - |
| 1.3272 | 196300 | 0.5056 | - | - | - |
| 1.3279 | 196400 | 0.441 | - | - | - |
| 1.3286 | 196500 | 0.5481 | - | - | - |
| 1.3293 | 196600 | 0.4934 | - | - | - |
| 1.3299 | 196700 | 0.4713 | - | - | - |
| 1.3306 | 196800 | 0.4586 | - | - | - |
| 1.3313 | 196900 | 0.5314 | - | - | - |
| 1.3320 | 197000 | 0.4745 | - | - | - |
| 1.3326 | 197100 | 0.4676 | - | - | - |
| 1.3333 | 197200 | 0.449 | - | - | - |
| 1.3340 | 197300 | 0.5112 | - | - | - |
| 1.3347 | 197400 | 0.4876 | - | - | - |
| 1.3354 | 197500 | 0.5133 | - | - | - |
| 1.3360 | 197600 | 0.4924 | - | - | - |
| 1.3367 | 197700 | 0.4644 | - | - | - |
| 1.3374 | 197800 | 0.4455 | - | - | - |
| 1.3381 | 197900 | 0.516 | - | - | - |
| 1.3387 | 198000 | 0.4805 | - | - | - |
| 1.3394 | 198100 | 0.5274 | - | - | - |
| 1.3401 | 198200 | 0.4636 | - | - | - |
| 1.3408 | 198300 | 0.4358 | - | - | - |
| 1.3414 | 198400 | 0.4963 | - | - | - |
| 1.3421 | 198500 | 0.4758 | - | - | - |
| 1.3428 | 198600 | 0.4961 | - | - | - |
| 1.3435 | 198700 | 0.5095 | - | - | - |
| 1.3441 | 198800 | 0.4829 | - | - | - |
| 1.3448 | 198900 | 0.5339 | - | - | - |
| 1.3455 | 199000 | 0.4835 | - | - | - |
| 1.3462 | 199100 | 0.5258 | - | - | - |
| 1.3468 | 199200 | 0.4726 | - | - | - |
| 1.3475 | 199300 | 0.4804 | - | - | - |
| 1.3482 | 199400 | 0.4636 | - | - | - |
| 1.3489 | 199500 | 0.4817 | - | - | - |
| 1.3496 | 199600 | 0.482 | - | - | - |
| 1.3502 | 199700 | 0.504 | - | - | - |
| 1.3509 | 199800 | 0.5124 | - | - | - |
| 1.3516 | 199900 | 0.443 | - | - | - |
| 1.3523 | 200000 | 0.5348 | 0.5423 | 0.7563 | - |
| 1.3529 | 200100 | 0.5052 | - | - | - |
| 1.3536 | 200200 | 0.4553 | - | - | - |
| 1.3543 | 200300 | 0.4715 | - | - | - |
| 1.3550 | 200400 | 0.4629 | - | - | - |
| 1.3556 | 200500 | 0.4649 | - | - | - |
| 1.3563 | 200600 | 0.4974 | - | - | - |
| 1.3570 | 200700 | 0.5105 | - | - | - |
| 1.3577 | 200800 | 0.4986 | - | - | - |
| 1.3583 | 200900 | 0.4647 | - | - | - |
| 1.3590 | 201000 | 0.4805 | - | - | - |
| 1.3597 | 201100 | 0.5027 | - | - | - |
| 1.3604 | 201200 | 0.5004 | - | - | - |
| 1.3610 | 201300 | 0.4637 | - | - | - |
| 1.3617 | 201400 | 0.4693 | - | - | - |
| 1.3624 | 201500 | 0.4459 | - | - | - |
| 1.3631 | 201600 | 0.4746 | - | - | - |
| 1.3638 | 201700 | 0.4807 | - | - | - |
| 1.3644 | 201800 | 0.4755 | - | - | - |
| 1.3651 | 201900 | 0.4861 | - | - | - |
| 1.3658 | 202000 | 0.4499 | - | - | - |
| 1.3665 | 202100 | 0.4852 | - | - | - |
| 1.3671 | 202200 | 0.4745 | - | - | - |
| 1.3678 | 202300 | 0.489 | - | - | - |
| 1.3685 | 202400 | 0.4706 | - | - | - |
| 1.3692 | 202500 | 0.4798 | - | - | - |
| 1.3698 | 202600 | 0.4882 | - | - | - |
| 1.3705 | 202700 | 0.4737 | - | - | - |
| 1.3712 | 202800 | 0.4624 | - | - | - |
| 1.3719 | 202900 | 0.4784 | - | - | - |
| 1.3725 | 203000 | 0.4952 | - | - | - |
| 1.3732 | 203100 | 0.5017 | - | - | - |
| 1.3739 | 203200 | 0.5015 | - | - | - |
| 1.3746 | 203300 | 0.4416 | - | - | - |
| 1.3752 | 203400 | 0.5097 | - | - | - |
| 1.3759 | 203500 | 0.4815 | - | - | - |
| 1.3766 | 203600 | 0.4924 | - | - | - |
| 1.3773 | 203700 | 0.4628 | - | - | - |
| 1.3779 | 203800 | 0.4751 | - | - | - |
| 1.3786 | 203900 | 0.4679 | - | - | - |
| 1.3793 | 204000 | 0.5467 | - | - | - |
| 1.3800 | 204100 | 0.4983 | - | - | - |
| 1.3807 | 204200 | 0.5047 | - | - | - |
| 1.3813 | 204300 | 0.4685 | - | - | - |
| 1.3820 | 204400 | 0.5224 | - | - | - |
| 1.3827 | 204500 | 0.465 | - | - | - |
| 1.3834 | 204600 | 0.4876 | - | - | - |
| 1.3840 | 204700 | 0.504 | - | - | - |
| 1.3847 | 204800 | 0.4624 | - | - | - |
| 1.3854 | 204900 | 0.5205 | - | - | - |
| 1.3861 | 205000 | 0.4526 | 0.5400 | 0.7595 | - |
| 1.3867 | 205100 | 0.5068 | - | - | - |
| 1.3874 | 205200 | 0.4379 | - | - | - |
| 1.3881 | 205300 | 0.4858 | - | - | - |
| 1.3888 | 205400 | 0.4933 | - | - | - |
| 1.3894 | 205500 | 0.4885 | - | - | - |
| 1.3901 | 205600 | 0.5256 | - | - | - |
| 1.3908 | 205700 | 0.4909 | - | - | - |
| 1.3915 | 205800 | 0.4595 | - | - | - |
| 1.3921 | 205900 | 0.4579 | - | - | - |
| 1.3928 | 206000 | 0.4509 | - | - | - |
| 1.3935 | 206100 | 0.5018 | - | - | - |
| 1.3942 | 206200 | 0.4901 | - | - | - |
| 1.3949 | 206300 | 0.4789 | - | - | - |
| 1.3955 | 206400 | 0.4711 | - | - | - |
| 1.3962 | 206500 | 0.4726 | - | - | - |
| 1.3969 | 206600 | 0.5106 | - | - | - |
| 1.3976 | 206700 | 0.4658 | - | - | - |
| 1.3982 | 206800 | 0.4608 | - | - | - |
| 1.3989 | 206900 | 0.462 | - | - | - |
| 1.3996 | 207000 | 0.5146 | - | - | - |
| 1.4003 | 207100 | 0.5001 | - | - | - |
| 1.4009 | 207200 | 0.5157 | - | - | - |
| 1.4016 | 207300 | 0.4832 | - | - | - |
| 1.4023 | 207400 | 0.5159 | - | - | - |
| 1.4030 | 207500 | 0.5186 | - | - | - |
| 1.4036 | 207600 | 0.5075 | - | - | - |
| 1.4043 | 207700 | 0.4713 | - | - | - |
| 1.4050 | 207800 | 0.4252 | - | - | - |
| 1.4057 | 207900 | 0.4327 | - | - | - |
| 1.4063 | 208000 | 0.4651 | - | - | - |
| 1.4070 | 208100 | 0.5014 | - | - | - |
| 1.4077 | 208200 | 0.4894 | - | - | - |
| 1.4084 | 208300 | 0.5509 | - | - | - |
| 1.4091 | 208400 | 0.4821 | - | - | - |
| 1.4097 | 208500 | 0.5021 | - | - | - |
| 1.4104 | 208600 | 0.5262 | - | - | - |
| 1.4111 | 208700 | 0.4583 | - | - | - |
| 1.4118 | 208800 | 0.4524 | - | - | - |
| 1.4124 | 208900 | 0.4506 | - | - | - |
| 1.4131 | 209000 | 0.5256 | - | - | - |
| 1.4138 | 209100 | 0.5151 | - | - | - |
| 1.4145 | 209200 | 0.5081 | - | - | - |
| 1.4151 | 209300 | 0.4742 | - | - | - |
| 1.4158 | 209400 | 0.4816 | - | - | - |
| 1.4165 | 209500 | 0.4853 | - | - | - |
| 1.4172 | 209600 | 0.4775 | - | - | - |
| 1.4178 | 209700 | 0.4868 | - | - | - |
| 1.4185 | 209800 | 0.4626 | - | - | - |
| 1.4192 | 209900 | 0.5078 | - | - | - |
| 1.4199 | 210000 | 0.4994 | 0.5371 | 0.7597 | - |
| 1.4205 | 210100 | 0.471 | - | - | - |
| 1.4212 | 210200 | 0.5009 | - | - | - |
| 1.4219 | 210300 | 0.5125 | - | - | - |
| 1.4226 | 210400 | 0.492 | - | - | - |
| 1.4232 | 210500 | 0.5281 | - | - | - |
| 1.4239 | 210600 | 0.5255 | - | - | - |
| 1.4246 | 210700 | 0.4393 | - | - | - |
| 1.4253 | 210800 | 0.5011 | - | - | - |
| 1.4260 | 210900 | 0.5004 | - | - | - |
| 1.4266 | 211000 | 0.4843 | - | - | - |
| 1.4273 | 211100 | 0.4866 | - | - | - |
| 1.4280 | 211200 | 0.4586 | - | - | - |
| 1.4287 | 211300 | 0.5276 | - | - | - |
| 1.4293 | 211400 | 0.4544 | - | - | - |
| 1.4300 | 211500 | 0.4936 | - | - | - |
| 1.4307 | 211600 | 0.4498 | - | - | - |
| 1.4314 | 211700 | 0.4759 | - | - | - |
| 1.4320 | 211800 | 0.4735 | - | - | - |
| 1.4327 | 211900 | 0.4537 | - | - | - |
| 1.4334 | 212000 | 0.5012 | - | - | - |
| 1.4341 | 212100 | 0.5325 | - | - | - |
| 1.4347 | 212200 | 0.4797 | - | - | - |
| 1.4354 | 212300 | 0.4597 | - | - | - |
| 1.4361 | 212400 | 0.4514 | - | - | - |
| 1.4368 | 212500 | 0.451 | - | - | - |
| 1.4374 | 212600 | 0.5148 | - | - | - |
| 1.4381 | 212700 | 0.484 | - | - | - |
| 1.4388 | 212800 | 0.4761 | - | - | - |
| 1.4395 | 212900 | 0.4608 | - | - | - |
| 1.4402 | 213000 | 0.5341 | - | - | - |
| 1.4408 | 213100 | 0.4899 | - | - | - |
| 1.4415 | 213200 | 0.4814 | - | - | - |
| 1.4422 | 213300 | 0.5104 | - | - | - |
| 1.4429 | 213400 | 0.502 | - | - | - |
| 1.4435 | 213500 | 0.4639 | - | - | - |
| 1.4442 | 213600 | 0.4742 | - | - | - |
| 1.4449 | 213700 | 0.4737 | - | - | - |
| 1.4456 | 213800 | 0.4743 | - | - | - |
| 1.4462 | 213900 | 0.4613 | - | - | - |
| 1.4469 | 214000 | 0.5021 | - | - | - |
| 1.4476 | 214100 | 0.5386 | - | - | - |
| 1.4483 | 214200 | 0.4992 | - | - | - |
| 1.4489 | 214300 | 0.4302 | - | - | - |
| 1.4496 | 214400 | 0.4601 | - | - | - |
| 1.4503 | 214500 | 0.4061 | - | - | - |
| 1.4510 | 214600 | 0.4878 | - | - | - |
| 1.4516 | 214700 | 0.4531 | - | - | - |
| 1.4523 | 214800 | 0.4754 | - | - | - |
| 1.4530 | 214900 | 0.4831 | - | - | - |
| 1.4537 | 215000 | 0.4628 | 0.5442 | 0.7620 | - |
| 1.4544 | 215100 | 0.4794 | - | - | - |
| 1.4550 | 215200 | 0.4889 | - | - | - |
| 1.4557 | 215300 | 0.499 | - | - | - |
| 1.4564 | 215400 | 0.4593 | - | - | - |
| 1.4571 | 215500 | 0.5281 | - | - | - |
| 1.4577 | 215600 | 0.4935 | - | - | - |
| 1.4584 | 215700 | 0.5279 | - | - | - |
| 1.4591 | 215800 | 0.4744 | - | - | - |
| 1.4598 | 215900 | 0.4979 | - | - | - |
| 1.4604 | 216000 | 0.4307 | - | - | - |
| 1.4611 | 216100 | 0.4676 | - | - | - |
| 1.4618 | 216200 | 0.4652 | - | - | - |
| 1.4625 | 216300 | 0.484 | - | - | - |
| 1.4631 | 216400 | 0.465 | - | - | - |
| 1.4638 | 216500 | 0.4558 | - | - | - |
| 1.4645 | 216600 | 0.4717 | - | - | - |
| 1.4652 | 216700 | 0.487 | - | - | - |
| 1.4658 | 216800 | 0.4458 | - | - | - |
| 1.4665 | 216900 | 0.5153 | - | - | - |
| 1.4672 | 217000 | 0.5046 | - | - | - |
| 1.4679 | 217100 | 0.4624 | - | - | - |
| 1.4685 | 217200 | 0.5073 | - | - | - |
| 1.4692 | 217300 | 0.4872 | - | - | - |
| 1.4699 | 217400 | 0.4799 | - | - | - |
| 1.4706 | 217500 | 0.518 | - | - | - |
| 1.4713 | 217600 | 0.4481 | - | - | - |
| 1.4719 | 217700 | 0.4859 | - | - | - |
| 1.4726 | 217800 | 0.4285 | - | - | - |
| 1.4733 | 217900 | 0.4793 | - | - | - |
| 1.4740 | 218000 | 0.4855 | - | - | - |
| 1.4746 | 218100 | 0.4878 | - | - | - |
| 1.4753 | 218200 | 0.4743 | - | - | - |
| 1.4760 | 218300 | 0.453 | - | - | - |
| 1.4767 | 218400 | 0.4627 | - | - | - |
| 1.4773 | 218500 | 0.4689 | - | - | - |
| 1.4780 | 218600 | 0.4655 | - | - | - |
| 1.4787 | 218700 | 0.4672 | - | - | - |
| 1.4794 | 218800 | 0.4433 | - | - | - |
| 1.4800 | 218900 | 0.5168 | - | - | - |
| 1.4807 | 219000 | 0.4854 | - | - | - |
| 1.4814 | 219100 | 0.4613 | - | - | - |
| 1.4821 | 219200 | 0.4697 | - | - | - |
| 1.4827 | 219300 | 0.4898 | - | - | - |
| 1.4834 | 219400 | 0.4462 | - | - | - |
| 1.4841 | 219500 | 0.5175 | - | - | - |
| 1.4848 | 219600 | 0.4957 | - | - | - |
| 1.4855 | 219700 | 0.5098 | - | - | - |
| 1.4861 | 219800 | 0.497 | - | - | - |
| 1.4868 | 219900 | 0.5067 | - | - | - |
| 1.4875 | 220000 | 0.4488 | 0.5371 | 0.7595 | - |
| 1.4882 | 220100 | 0.4687 | - | - | - |
| 1.4888 | 220200 | 0.4715 | - | - | - |
| 1.4895 | 220300 | 0.4244 | - | - | - |
| 1.4902 | 220400 | 0.4696 | - | - | - |
| 1.4909 | 220500 | 0.4517 | - | - | - |
| 1.4915 | 220600 | 0.4317 | - | - | - |
| 1.4922 | 220700 | 0.462 | - | - | - |
| 1.4929 | 220800 | 0.436 | - | - | - |
| 1.4936 | 220900 | 0.4933 | - | - | - |
| 1.4942 | 221000 | 0.4744 | - | - | - |
| 1.4949 | 221100 | 0.4591 | - | - | - |
| 1.4956 | 221200 | 0.4717 | - | - | - |
| 1.4963 | 221300 | 0.4851 | - | - | - |
| 1.4969 | 221400 | 0.482 | - | - | - |
| 1.4976 | 221500 | 0.4362 | - | - | - |
| 1.4983 | 221600 | 0.4574 | - | - | - |
| 1.4990 | 221700 | 0.4783 | - | - | - |
| 1.4997 | 221800 | 0.5475 | - | - | - |
| 1.5003 | 221900 | 0.4602 | - | - | - |
| 1.5010 | 222000 | 0.4271 | - | - | - |
| 1.5017 | 222100 | 0.5019 | - | - | - |
| 1.5024 | 222200 | 0.4193 | - | - | - |
| 1.5030 | 222300 | 0.4977 | - | - | - |
| 1.5037 | 222400 | 0.5011 | - | - | - |
| 1.5044 | 222500 | 0.4828 | - | - | - |
| 1.5051 | 222600 | 0.4222 | - | - | - |
| 1.5057 | 222700 | 0.457 | - | - | - |
| 1.5064 | 222800 | 0.4745 | - | - | - |
| 1.5071 | 222900 | 0.5158 | - | - | - |
| 1.5078 | 223000 | 0.478 | - | - | - |
| 1.5084 | 223100 | 0.4607 | - | - | - |
| 1.5091 | 223200 | 0.4588 | - | - | - |
| 1.5098 | 223300 | 0.5097 | - | - | - |
| 1.5105 | 223400 | 0.4626 | - | - | - |
| 1.5111 | 223500 | 0.4521 | - | - | - |
| 1.5118 | 223600 | 0.493 | - | - | - |
| 1.5125 | 223700 | 0.481 | - | - | - |
| 1.5132 | 223800 | 0.4463 | - | - | - |
| 1.5139 | 223900 | 0.4982 | - | - | - |
| 1.5145 | 224000 | 0.4744 | - | - | - |
| 1.5152 | 224100 | 0.454 | - | - | - |
| 1.5159 | 224200 | 0.5134 | - | - | - |
| 1.5166 | 224300 | 0.4807 | - | - | - |
| 1.5172 | 224400 | 0.4653 | - | - | - |
| 1.5179 | 224500 | 0.4877 | - | - | - |
| 1.5186 | 224600 | 0.4791 | - | - | - |
| 1.5193 | 224700 | 0.4691 | - | - | - |
| 1.5199 | 224800 | 0.4734 | - | - | - |
| 1.5206 | 224900 | 0.4327 | - | - | - |
| 1.5213 | 225000 | 0.4711 | 0.5446 | 0.7608 | - |
| 1.5220 | 225100 | 0.4883 | - | - | - |
| 1.5226 | 225200 | 0.5147 | - | - | - |
| 1.5233 | 225300 | 0.464 | - | - | - |
| 1.5240 | 225400 | 0.5124 | - | - | - |
| 1.5247 | 225500 | 0.4876 | - | - | - |
| 1.5253 | 225600 | 0.4611 | - | - | - |
| 1.5260 | 225700 | 0.5207 | - | - | - |
| 1.5267 | 225800 | 0.4821 | - | - | - |
| 1.5274 | 225900 | 0.5009 | - | - | - |
| 1.5280 | 226000 | 0.5359 | - | - | - |
| 1.5287 | 226100 | 0.4622 | - | - | - |
| 1.5294 | 226200 | 0.4747 | - | - | - |
| 1.5301 | 226300 | 0.4974 | - | - | - |
| 1.5308 | 226400 | 0.4563 | - | - | - |
| 1.5314 | 226500 | 0.455 | - | - | - |
| 1.5321 | 226600 | 0.4635 | - | - | - |
| 1.5328 | 226700 | 0.4782 | - | - | - |
| 1.5335 | 226800 | 0.4855 | - | - | - |
| 1.5341 | 226900 | 0.4821 | - | - | - |
| 1.5348 | 227000 | 0.4684 | - | - | - |
| 1.5355 | 227100 | 0.468 | - | - | - |
| 1.5362 | 227200 | 0.4191 | - | - | - |
| 1.5368 | 227300 | 0.4692 | - | - | - |
| 1.5375 | 227400 | 0.4572 | - | - | - |
| 1.5382 | 227500 | 0.4261 | - | - | - |
| 1.5389 | 227600 | 0.4533 | - | - | - |
| 1.5395 | 227700 | 0.4412 | - | - | - |
| 1.5402 | 227800 | 0.4864 | - | - | - |
| 1.5409 | 227900 | 0.4668 | - | - | - |
| 1.5416 | 228000 | 0.4577 | - | - | - |
| 1.5422 | 228100 | 0.4566 | - | - | - |
| 1.5429 | 228200 | 0.5041 | - | - | - |
| 1.5436 | 228300 | 0.484 | - | - | - |
| 1.5443 | 228400 | 0.4984 | - | - | - |
| 1.5450 | 228500 | 0.4611 | - | - | - |
| 1.5456 | 228600 | 0.5161 | - | - | - |
| 1.5463 | 228700 | 0.4372 | - | - | - |
| 1.5470 | 228800 | 0.5088 | - | - | - |
| 1.5477 | 228900 | 0.4875 | - | - | - |
| 1.5483 | 229000 | 0.4717 | - | - | - |
| 1.5490 | 229100 | 0.4599 | - | - | - |
| 1.5497 | 229200 | 0.4386 | - | - | - |
| 1.5504 | 229300 | 0.4823 | - | - | - |
| 1.5510 | 229400 | 0.5137 | - | - | - |
| 1.5517 | 229500 | 0.4678 | - | - | - |
| 1.5524 | 229600 | 0.4561 | - | - | - |
| 1.5531 | 229700 | 0.4982 | - | - | - |
| 1.5537 | 229800 | 0.4558 | - | - | - |
| 1.5544 | 229900 | 0.4697 | - | - | - |
| 1.5551 | 230000 | 0.4741 | 0.5472 | 0.7568 | - |
| 1.5558 | 230100 | 0.4427 | - | - | - |
| 1.5564 | 230200 | 0.4494 | - | - | - |
| 1.5571 | 230300 | 0.489 | - | - | - |
| 1.5578 | 230400 | 0.4755 | - | - | - |
| 1.5585 | 230500 | 0.4565 | - | - | - |
| 1.5592 | 230600 | 0.4558 | - | - | - |
| 1.5598 | 230700 | 0.4554 | - | - | - |
| 1.5605 | 230800 | 0.5236 | - | - | - |
| 1.5612 | 230900 | 0.4614 | - | - | - |
| 1.5619 | 231000 | 0.484 | - | - | - |
| 1.5625 | 231100 | 0.4665 | - | - | - |
| 1.5632 | 231200 | 0.46 | - | - | - |
| 1.5639 | 231300 | 0.4767 | - | - | - |
| 1.5646 | 231400 | 0.4649 | - | - | - |
| 1.5652 | 231500 | 0.4697 | - | - | - |
| 1.5659 | 231600 | 0.4748 | - | - | - |
| 1.5666 | 231700 | 0.4465 | - | - | - |
| 1.5673 | 231800 | 0.4756 | - | - | - |
| 1.5679 | 231900 | 0.4834 | - | - | - |
| 1.5686 | 232000 | 0.4511 | - | - | - |
| 1.5693 | 232100 | 0.4922 | - | - | - |
| 1.5700 | 232200 | 0.4461 | - | - | - |
| 1.5706 | 232300 | 0.4671 | - | - | - |
| 1.5713 | 232400 | 0.4859 | - | - | - |
| 1.5720 | 232500 | 0.4887 | - | - | - |
| 1.5727 | 232600 | 0.5057 | - | - | - |
| 1.5733 | 232700 | 0.4681 | - | - | - |
| 1.5740 | 232800 | 0.4713 | - | - | - |
| 1.5747 | 232900 | 0.5302 | - | - | - |
| 1.5754 | 233000 | 0.4689 | - | - | - |
| 1.5761 | 233100 | 0.4461 | - | - | - |
| 1.5767 | 233200 | 0.4639 | - | - | - |
| 1.5774 | 233300 | 0.4345 | - | - | - |
| 1.5781 | 233400 | 0.4367 | - | - | - |
| 1.5788 | 233500 | 0.4802 | - | - | - |
| 1.5794 | 233600 | 0.4759 | - | - | - |
| 1.5801 | 233700 | 0.4986 | - | - | - |
| 1.5808 | 233800 | 0.4337 | - | - | - |
| 1.5815 | 233900 | 0.4664 | - | - | - |
| 1.5821 | 234000 | 0.5146 | - | - | - |
| 1.5828 | 234100 | 0.4519 | - | - | - |
| 1.5835 | 234200 | 0.4903 | - | - | - |
| 1.5842 | 234300 | 0.5063 | - | - | - |
| 1.5848 | 234400 | 0.4625 | - | - | - |
| 1.5855 | 234500 | 0.4804 | - | - | - |
| 1.5862 | 234600 | 0.43 | - | - | - |
| 1.5869 | 234700 | 0.4816 | - | - | - |
| 1.5875 | 234800 | 0.4564 | - | - | - |
| 1.5882 | 234900 | 0.4492 | - | - | - |
| 1.5889 | 235000 | 0.4807 | 0.5384 | 0.7569 | - |
| 1.5896 | 235100 | 0.4699 | - | - | - |
| 1.5903 | 235200 | 0.4669 | - | - | - |
| 1.5909 | 235300 | 0.4638 | - | - | - |
| 1.5916 | 235400 | 0.4475 | - | - | - |
| 1.5923 | 235500 | 0.4492 | - | - | - |
| 1.5930 | 235600 | 0.4694 | - | - | - |
| 1.5936 | 235700 | 0.5007 | - | - | - |
| 1.5943 | 235800 | 0.4228 | - | - | - |
| 1.5950 | 235900 | 0.5 | - | - | - |
| 1.5957 | 236000 | 0.4549 | - | - | - |
| 1.5963 | 236100 | 0.4356 | - | - | - |
| 1.5970 | 236200 | 0.4668 | - | - | - |
| 1.5977 | 236300 | 0.4428 | - | - | - |
| 1.5984 | 236400 | 0.5008 | - | - | - |
| 1.5990 | 236500 | 0.4634 | - | - | - |
| 1.5997 | 236600 | 0.4653 | - | - | - |
| 1.6004 | 236700 | 0.4364 | - | - | - |
| 1.6011 | 236800 | 0.4774 | - | - | - |
| 1.6017 | 236900 | 0.4435 | - | - | - |
| 1.6024 | 237000 | 0.4613 | - | - | - |
| 1.6031 | 237100 | 0.4872 | - | - | - |
| 1.6038 | 237200 | 0.4796 | - | - | - |
| 1.6045 | 237300 | 0.4521 | - | - | - |
| 1.6051 | 237400 | 0.4693 | - | - | - |
| 1.6058 | 237500 | 0.4384 | - | - | - |
| 1.6065 | 237600 | 0.5008 | - | - | - |
| 1.6072 | 237700 | 0.4385 | - | - | - |
| 1.6078 | 237800 | 0.4605 | - | - | - |
| 1.6085 | 237900 | 0.456 | - | - | - |
| 1.6092 | 238000 | 0.4636 | - | - | - |
| 1.6099 | 238100 | 0.4212 | - | - | - |
| 1.6105 | 238200 | 0.4826 | - | - | - |
| 1.6112 | 238300 | 0.4699 | - | - | - |
| 1.6119 | 238400 | 0.4605 | - | - | - |
| 1.6126 | 238500 | 0.4578 | - | - | - |
| 1.6132 | 238600 | 0.4583 | - | - | - |
| 1.6139 | 238700 | 0.4355 | - | - | - |
| 1.6146 | 238800 | 0.4949 | - | - | - |
| 1.6153 | 238900 | 0.4982 | - | - | - |
| 1.6159 | 239000 | 0.435 | - | - | - |
| 1.6166 | 239100 | 0.5358 | - | - | - |
| 1.6173 | 239200 | 0.4552 | - | - | - |
| 1.6180 | 239300 | 0.457 | - | - | - |
| 1.6187 | 239400 | 0.447 | - | - | - |
| 1.6193 | 239500 | 0.4706 | - | - | - |
| 1.6200 | 239600 | 0.4624 | - | - | - |
| 1.6207 | 239700 | 0.4517 | - | - | - |
| 1.6214 | 239800 | 0.4426 | - | - | - |
| 1.6220 | 239900 | 0.4019 | - | - | - |
| 1.6227 | 240000 | 0.4413 | 0.5373 | 0.7591 | - |
| 1.6234 | 240100 | 0.4081 | - | - | - |
| 1.6241 | 240200 | 0.4797 | - | - | - |
| 1.6247 | 240300 | 0.4245 | - | - | - |
| 1.6254 | 240400 | 0.4675 | - | - | - |
| 1.6261 | 240500 | 0.4965 | - | - | - |
| 1.6268 | 240600 | 0.4275 | - | - | - |
| 1.6274 | 240700 | 0.4458 | - | - | - |
| 1.6281 | 240800 | 0.4376 | - | - | - |
| 1.6288 | 240900 | 0.4543 | - | - | - |
| 1.6295 | 241000 | 0.4436 | - | - | - |
| 1.6301 | 241100 | 0.4572 | - | - | - |
| 1.6308 | 241200 | 0.475 | - | - | - |
| 1.6315 | 241300 | 0.446 | - | - | - |
| 1.6322 | 241400 | 0.4339 | - | - | - |
| 1.6328 | 241500 | 0.4201 | - | - | - |
| 1.6335 | 241600 | 0.4543 | - | - | - |
| 1.6342 | 241700 | 0.4225 | - | - | - |
| 1.6349 | 241800 | 0.4275 | - | - | - |
| 1.6356 | 241900 | 0.4651 | - | - | - |
| 1.6362 | 242000 | 0.498 | - | - | - |
| 1.6369 | 242100 | 0.4633 | - | - | - |
| 1.6376 | 242200 | 0.455 | - | - | - |
| 1.6383 | 242300 | 0.4585 | - | - | - |
| 1.6389 | 242400 | 0.4545 | - | - | - |
| 1.6396 | 242500 | 0.4258 | - | - | - |
| 1.6403 | 242600 | 0.5008 | - | - | - |
| 1.6410 | 242700 | 0.4757 | - | - | - |
| 1.6416 | 242800 | 0.4246 | - | - | - |
| 1.6423 | 242900 | 0.4288 | - | - | - |
| 1.6430 | 243000 | 0.4058 | - | - | - |
| 1.6437 | 243100 | 0.4794 | - | - | - |
| 1.6443 | 243200 | 0.4699 | - | - | - |
| 1.6450 | 243300 | 0.3919 | - | - | - |
| 1.6457 | 243400 | 0.4771 | - | - | - |
| 1.6464 | 243500 | 0.4785 | - | - | - |
| 1.6470 | 243600 | 0.4538 | - | - | - |
| 1.6477 | 243700 | 0.4474 | - | - | - |
| 1.6484 | 243800 | 0.468 | - | - | - |
| 1.6491 | 243900 | 0.4782 | - | - | - |
| 1.6498 | 244000 | 0.4909 | - | - | - |
| 1.6504 | 244100 | 0.4588 | - | - | - |
| 1.6511 | 244200 | 0.4601 | - | - | - |
| 1.6518 | 244300 | 0.4636 | - | - | - |
| 1.6525 | 244400 | 0.4555 | - | - | - |
| 1.6531 | 244500 | 0.4752 | - | - | - |
| 1.6538 | 244600 | 0.4428 | - | - | - |
| 1.6545 | 244700 | 0.5098 | - | - | - |
| 1.6552 | 244800 | 0.4214 | - | - | - |
| 1.6558 | 244900 | 0.4709 | - | - | - |
| 1.6565 | 245000 | 0.4452 | 0.5253 | 0.7637 | - |
| 1.6572 | 245100 | 0.4678 | - | - | - |
| 1.6579 | 245200 | 0.4759 | - | - | - |
| 1.6585 | 245300 | 0.4877 | - | - | - |
| 1.6592 | 245400 | 0.4263 | - | - | - |
| 1.6599 | 245500 | 0.4286 | - | - | - |
| 1.6606 | 245600 | 0.4847 | - | - | - |
| 1.6612 | 245700 | 0.4414 | - | - | - |
| 1.6619 | 245800 | 0.4771 | - | - | - |
| 1.6626 | 245900 | 0.4356 | - | - | - |
| 1.6633 | 246000 | 0.4591 | - | - | - |
| 1.6640 | 246100 | 0.4132 | - | - | - |
| 1.6646 | 246200 | 0.4585 | - | - | - |
| 1.6653 | 246300 | 0.484 | - | - | - |
| 1.6660 | 246400 | 0.4346 | - | - | - |
| 1.6667 | 246500 | 0.4384 | - | - | - |
| 1.6673 | 246600 | 0.4829 | - | - | - |
| 1.6680 | 246700 | 0.4508 | - | - | - |
| 1.6687 | 246800 | 0.4368 | - | - | - |
| 1.6694 | 246900 | 0.4608 | - | - | - |
| 1.6700 | 247000 | 0.4528 | - | - | - |
| 1.6707 | 247100 | 0.449 | - | - | - |
| 1.6714 | 247200 | 0.4572 | - | - | - |
| 1.6721 | 247300 | 0.4757 | - | - | - |
| 1.6727 | 247400 | 0.4626 | - | - | - |
| 1.6734 | 247500 | 0.4839 | - | - | - |
| 1.6741 | 247600 | 0.465 | - | - | - |
| 1.6748 | 247700 | 0.4427 | - | - | - |
| 1.6754 | 247800 | 0.4216 | - | - | - |
| 1.6761 | 247900 | 0.5065 | - | - | - |
| 1.6768 | 248000 | 0.4899 | - | - | - |
| 1.6775 | 248100 | 0.4554 | - | - | - |
| 1.6781 | 248200 | 0.4244 | - | - | - |
| 1.6788 | 248300 | 0.4889 | - | - | - |
| 1.6795 | 248400 | 0.5147 | - | - | - |
| 1.6802 | 248500 | 0.4877 | - | - | - |
| 1.6809 | 248600 | 0.4626 | - | - | - |
| 1.6815 | 248700 | 0.4391 | - | - | - |
| 1.6822 | 248800 | 0.4556 | - | - | - |
| 1.6829 | 248900 | 0.4703 | - | - | - |
| 1.6836 | 249000 | 0.4428 | - | - | - |
| 1.6842 | 249100 | 0.4623 | - | - | - |
| 1.6849 | 249200 | 0.4512 | - | - | - |
| 1.6856 | 249300 | 0.4828 | - | - | - |
| 1.6863 | 249400 | 0.4712 | - | - | - |
| 1.6869 | 249500 | 0.4331 | - | - | - |
| 1.6876 | 249600 | 0.4554 | - | - | - |
| 1.6883 | 249700 | 0.501 | - | - | - |
| 1.6890 | 249800 | 0.5304 | - | - | - |
| 1.6896 | 249900 | 0.4416 | - | - | - |
| 1.6903 | 250000 | 0.4661 | 0.5317 | 0.7661 | - |
| 1.6910 | 250100 | 0.4625 | - | - | - |
| 1.6917 | 250200 | 0.4846 | - | - | - |
| 1.6923 | 250300 | 0.4077 | - | - | - |
| 1.6930 | 250400 | 0.44 | - | - | - |
| 1.6937 | 250500 | 0.4667 | - | - | - |
| 1.6944 | 250600 | 0.4376 | - | - | - |
| 1.6951 | 250700 | 0.4977 | - | - | - |
| 1.6957 | 250800 | 0.4818 | - | - | - |
| 1.6964 | 250900 | 0.466 | - | - | - |
| 1.6971 | 251000 | 0.4095 | - | - | - |
| 1.6978 | 251100 | 0.458 | - | - | - |
| 1.6984 | 251200 | 0.4152 | - | - | - |
| 1.6991 | 251300 | 0.4536 | - | - | - |
| 1.6998 | 251400 | 0.4464 | - | - | - |
| 1.7005 | 251500 | 0.4732 | - | - | - |
| 1.7011 | 251600 | 0.4769 | - | - | - |
| 1.7018 | 251700 | 0.4576 | - | - | - |
| 1.7025 | 251800 | 0.4625 | - | - | - |
| 1.7032 | 251900 | 0.4901 | - | - | - |
| 1.7038 | 252000 | 0.405 | - | - | - |
| 1.7045 | 252100 | 0.4638 | - | - | - |
| 1.7052 | 252200 | 0.4445 | - | - | - |
| 1.7059 | 252300 | 0.432 | - | - | - |
| 1.7065 | 252400 | 0.4725 | - | - | - |
| 1.7072 | 252500 | 0.4271 | - | - | - |
| 1.7079 | 252600 | 0.4432 | - | - | - |
| 1.7086 | 252700 | 0.4594 | - | - | - |
| 1.7093 | 252800 | 0.4684 | - | - | - |
| 1.7099 | 252900 | 0.4413 | - | - | - |
| 1.7106 | 253000 | 0.4387 | - | - | - |
| 1.7113 | 253100 | 0.4531 | - | - | - |
| 1.7120 | 253200 | 0.4175 | - | - | - |
| 1.7126 | 253300 | 0.4827 | - | - | - |
| 1.7133 | 253400 | 0.4693 | - | - | - |
| 1.7140 | 253500 | 0.3994 | - | - | - |
| 1.7147 | 253600 | 0.4315 | - | - | - |
| 1.7153 | 253700 | 0.4678 | - | - | - |
| 1.7160 | 253800 | 0.4232 | - | - | - |
| 1.7167 | 253900 | 0.4582 | - | - | - |
| 1.7174 | 254000 | 0.4659 | - | - | - |
| 1.7180 | 254100 | 0.471 | - | - | - |
| 1.7187 | 254200 | 0.4212 | - | - | - |
| 1.7194 | 254300 | 0.5232 | - | - | - |
| 1.7201 | 254400 | 0.4563 | - | - | - |
| 1.7207 | 254500 | 0.4624 | - | - | - |
| 1.7214 | 254600 | 0.4454 | - | - | - |
| 1.7221 | 254700 | 0.4658 | - | - | - |
| 1.7228 | 254800 | 0.4783 | - | - | - |
| 1.7235 | 254900 | 0.4557 | - | - | - |
| 1.7241 | 255000 | 0.4349 | 0.5338 | 0.7631 | - |
| 1.7248 | 255100 | 0.4425 | - | - | - |
| 1.7255 | 255200 | 0.4169 | - | - | - |
| 1.7262 | 255300 | 0.4647 | - | - | - |
| 1.7268 | 255400 | 0.4266 | - | - | - |
| 1.7275 | 255500 | 0.4864 | - | - | - |
| 1.7282 | 255600 | 0.4499 | - | - | - |
| 1.7289 | 255700 | 0.4617 | - | - | - |
| 1.7295 | 255800 | 0.4296 | - | - | - |
| 1.7302 | 255900 | 0.4446 | - | - | - |
| 1.7309 | 256000 | 0.4519 | - | - | - |
| 1.7316 | 256100 | 0.4387 | - | - | - |
| 1.7322 | 256200 | 0.4492 | - | - | - |
| 1.7329 | 256300 | 0.4692 | - | - | - |
| 1.7336 | 256400 | 0.4881 | - | - | - |
| 1.7343 | 256500 | 0.4518 | - | - | - |
| 1.7349 | 256600 | 0.499 | - | - | - |
| 1.7356 | 256700 | 0.4207 | - | - | - |
| 1.7363 | 256800 | 0.4467 | - | - | - |
| 1.7370 | 256900 | 0.493 | - | - | - |
| 1.7376 | 257000 | 0.4235 | - | - | - |
| 1.7383 | 257100 | 0.4495 | - | - | - |
| 1.7390 | 257200 | 0.4806 | - | - | - |
| 1.7397 | 257300 | 0.4228 | - | - | - |
| 1.7404 | 257400 | 0.4826 | - | - | - |
| 1.7410 | 257500 | 0.4556 | - | - | - |
| 1.7417 | 257600 | 0.4426 | - | - | - |
| 1.7424 | 257700 | 0.4341 | - | - | - |
| 1.7431 | 257800 | 0.4359 | - | - | - |
| 1.7437 | 257900 | 0.454 | - | - | - |
| 1.7444 | 258000 | 0.4675 | - | - | - |
| 1.7451 | 258100 | 0.4077 | - | - | - |
| 1.7458 | 258200 | 0.4628 | - | - | - |
| 1.7464 | 258300 | 0.4641 | - | - | - |
| 1.7471 | 258400 | 0.4553 | - | - | - |
| 1.7478 | 258500 | 0.4568 | - | - | - |
| 1.7485 | 258600 | 0.4537 | - | - | - |
| 1.7491 | 258700 | 0.4504 | - | - | - |
| 1.7498 | 258800 | 0.4367 | - | - | - |
| 1.7505 | 258900 | 0.4413 | - | - | - |
| 1.7512 | 259000 | 0.43 | - | - | - |
| 1.7518 | 259100 | 0.4355 | - | - | - |
| 1.7525 | 259200 | 0.422 | - | - | - |
| 1.7532 | 259300 | 0.4069 | - | - | - |
| 1.7539 | 259400 | 0.402 | - | - | - |
| 1.7546 | 259500 | 0.4491 | - | - | - |
| 1.7552 | 259600 | 0.4964 | - | - | - |
| 1.7559 | 259700 | 0.4047 | - | - | - |
| 1.7566 | 259800 | 0.3931 | - | - | - |
| 1.7573 | 259900 | 0.4079 | - | - | - |
| 1.7579 | 260000 | 0.4314 | 0.5351 | 0.7618 | - |
| 1.7586 | 260100 | 0.4477 | - | - | - |
| 1.7593 | 260200 | 0.4434 | - | - | - |
| 1.7600 | 260300 | 0.4618 | - | - | - |
| 1.7606 | 260400 | 0.4529 | - | - | - |
| 1.7613 | 260500 | 0.4321 | - | - | - |
| 1.7620 | 260600 | 0.4381 | - | - | - |
| 1.7627 | 260700 | 0.4704 | - | - | - |
| 1.7633 | 260800 | 0.4405 | - | - | - |
| 1.7640 | 260900 | 0.476 | - | - | - |
| 1.7647 | 261000 | 0.4275 | - | - | - |
| 1.7654 | 261100 | 0.4359 | - | - | - |
| 1.7660 | 261200 | 0.4428 | - | - | - |
| 1.7667 | 261300 | 0.4994 | - | - | - |
| 1.7674 | 261400 | 0.4338 | - | - | - |
| 1.7681 | 261500 | 0.4182 | - | - | - |
| 1.7688 | 261600 | 0.474 | - | - | - |
| 1.7694 | 261700 | 0.4998 | - | - | - |
| 1.7701 | 261800 | 0.4428 | - | - | - |
| 1.7708 | 261900 | 0.4493 | - | - | - |
| 1.7715 | 262000 | 0.4438 | - | - | - |
| 1.7721 | 262100 | 0.4262 | - | - | - |
| 1.7728 | 262200 | 0.4951 | - | - | - |
| 1.7735 | 262300 | 0.4052 | - | - | - |
| 1.7742 | 262400 | 0.4559 | - | - | - |
| 1.7748 | 262500 | 0.4356 | - | - | - |
| 1.7755 | 262600 | 0.4665 | - | - | - |
| 1.7762 | 262700 | 0.4272 | - | - | - |
| 1.7769 | 262800 | 0.4536 | - | - | - |
| 1.7775 | 262900 | 0.451 | - | - | - |
| 1.7782 | 263000 | 0.4425 | - | - | - |
| 1.7789 | 263100 | 0.4601 | - | - | - |
| 1.7796 | 263200 | 0.477 | - | - | - |
| 1.7802 | 263300 | 0.4763 | - | - | - |
| 1.7809 | 263400 | 0.4309 | - | - | - |
| 1.7816 | 263500 | 0.4302 | - | - | - |
| 1.7823 | 263600 | 0.409 | - | - | - |
| 1.7829 | 263700 | 0.4719 | - | - | - |
| 1.7836 | 263800 | 0.3989 | - | - | - |
| 1.7843 | 263900 | 0.4616 | - | - | - |
| 1.7850 | 264000 | 0.4738 | - | - | - |
| 1.7857 | 264100 | 0.467 | - | - | - |
| 1.7863 | 264200 | 0.4863 | - | - | - |
| 1.7870 | 264300 | 0.5005 | - | - | - |
| 1.7877 | 264400 | 0.4274 | - | - | - |
| 1.7884 | 264500 | 0.4274 | - | - | - |
| 1.7890 | 264600 | 0.4403 | - | - | - |
| 1.7897 | 264700 | 0.3987 | - | - | - |
| 1.7904 | 264800 | 0.4381 | - | - | - |
| 1.7911 | 264900 | 0.4345 | - | - | - |
| 1.7917 | 265000 | 0.4098 | 0.5240 | 0.7629 | - |
| 1.7924 | 265100 | 0.4502 | - | - | - |
| 1.7931 | 265200 | 0.4727 | - | - | - |
| 1.7938 | 265300 | 0.4093 | - | - | - |
| 1.7944 | 265400 | 0.4555 | - | - | - |
| 1.7951 | 265500 | 0.47 | - | - | - |
| 1.7958 | 265600 | 0.4633 | - | - | - |
| 1.7965 | 265700 | 0.4531 | - | - | - |
| 1.7971 | 265800 | 0.4135 | - | - | - |
| 1.7978 | 265900 | 0.4698 | - | - | - |
| 1.7985 | 266000 | 0.4512 | - | - | - |
| 1.7992 | 266100 | 0.4259 | - | - | - |
| 1.7999 | 266200 | 0.4375 | - | - | - |
| 1.8005 | 266300 | 0.5042 | - | - | - |
| 1.8012 | 266400 | 0.4725 | - | - | - |
| 1.8019 | 266500 | 0.4517 | - | - | - |
| 1.8026 | 266600 | 0.4508 | - | - | - |
| 1.8032 | 266700 | 0.4553 | - | - | - |
| 1.8039 | 266800 | 0.4305 | - | - | - |
| 1.8046 | 266900 | 0.4599 | - | - | - |
| 1.8053 | 267000 | 0.4408 | - | - | - |
| 1.8059 | 267100 | 0.4377 | - | - | - |
| 1.8066 | 267200 | 0.5151 | - | - | - |
| 1.8073 | 267300 | 0.4088 | - | - | - |
| 1.8080 | 267400 | 0.4464 | - | - | - |
| 1.8086 | 267500 | 0.4165 | - | - | - |
| 1.8093 | 267600 | 0.4189 | - | - | - |
| 1.8100 | 267700 | 0.4611 | - | - | - |
| 1.8107 | 267800 | 0.4116 | - | - | - |
| 1.8113 | 267900 | 0.4228 | - | - | - |
| 1.8120 | 268000 | 0.4124 | - | - | - |
| 1.8127 | 268100 | 0.4254 | - | - | - |
| 1.8134 | 268200 | 0.5178 | - | - | - |
| 1.8141 | 268300 | 0.4767 | - | - | - |
| 1.8147 | 268400 | 0.4132 | - | - | - |
| 1.8154 | 268500 | 0.4613 | - | - | - |
| 1.8161 | 268600 | 0.4421 | - | - | - |
| 1.8168 | 268700 | 0.4615 | - | - | - |
| 1.8174 | 268800 | 0.4731 | - | - | - |
| 1.8181 | 268900 | 0.4604 | - | - | - |
| 1.8188 | 269000 | 0.455 | - | - | - |
| 1.8195 | 269100 | 0.4539 | - | - | - |
| 1.8201 | 269200 | 0.423 | - | - | - |
| 1.8208 | 269300 | 0.4408 | - | - | - |
| 1.8215 | 269400 | 0.4341 | - | - | - |
| 1.8222 | 269500 | 0.4578 | - | - | - |
| 1.8228 | 269600 | 0.4232 | - | - | - |
| 1.8235 | 269700 | 0.4091 | - | - | - |
| 1.8242 | 269800 | 0.4371 | - | - | - |
| 1.8249 | 269900 | 0.3723 | - | - | - |
| 1.8255 | 270000 | 0.4409 | 0.5281 | 0.7677 | - |
| 1.8262 | 270100 | 0.4741 | - | - | - |
| 1.8269 | 270200 | 0.412 | - | - | - |
| 1.8276 | 270300 | 0.4721 | - | - | - |
| 1.8282 | 270400 | 0.4463 | - | - | - |
| 1.8289 | 270500 | 0.4056 | - | - | - |
| 1.8296 | 270600 | 0.4471 | - | - | - |
| 1.8303 | 270700 | 0.4514 | - | - | - |
| 1.8310 | 270800 | 0.4326 | - | - | - |
| 1.8316 | 270900 | 0.4773 | - | - | - |
| 1.8323 | 271000 | 0.4699 | - | - | - |
| 1.8330 | 271100 | 0.4608 | - | - | - |
| 1.8337 | 271200 | 0.4251 | - | - | - |
| 1.8343 | 271300 | 0.4064 | - | - | - |
| 1.8350 | 271400 | 0.4326 | - | - | - |
| 1.8357 | 271500 | 0.4474 | - | - | - |
| 1.8364 | 271600 | 0.4519 | - | - | - |
| 1.8370 | 271700 | 0.425 | - | - | - |
| 1.8377 | 271800 | 0.4424 | - | - | - |
| 1.8384 | 271900 | 0.4984 | - | - | - |
| 1.8391 | 272000 | 0.4578 | - | - | - |
| 1.8397 | 272100 | 0.4309 | - | - | - |
| 1.8404 | 272200 | 0.4433 | - | - | - |
| 1.8411 | 272300 | 0.4621 | - | - | - |
| 1.8418 | 272400 | 0.4785 | - | - | - |
| 1.8424 | 272500 | 0.43 | - | - | - |
| 1.8431 | 272600 | 0.4519 | - | - | - |
| 1.8438 | 272700 | 0.4306 | - | - | - |
| 1.8445 | 272800 | 0.4259 | - | - | - |
| 1.8452 | 272900 | 0.4359 | - | - | - |
| 1.8458 | 273000 | 0.4489 | - | - | - |
| 1.8465 | 273100 | 0.4255 | - | - | - |
| 1.8472 | 273200 | 0.4681 | - | - | - |
| 1.8479 | 273300 | 0.4031 | - | - | - |
| 1.8485 | 273400 | 0.4154 | - | - | - |
| 1.8492 | 273500 | 0.444 | - | - | - |
| 1.8499 | 273600 | 0.467 | - | - | - |
| 1.8506 | 273700 | 0.4442 | - | - | - |
| 1.8512 | 273800 | 0.4408 | - | - | - |
| 1.8519 | 273900 | 0.459 | - | - | - |
| 1.8526 | 274000 | 0.429 | - | - | - |
| 1.8533 | 274100 | 0.4476 | - | - | - |
| 1.8539 | 274200 | 0.4554 | - | - | - |
| 1.8546 | 274300 | 0.427 | - | - | - |
| 1.8553 | 274400 | 0.4367 | - | - | - |
| 1.8560 | 274500 | 0.4396 | - | - | - |
| 1.8566 | 274600 | 0.3952 | - | - | - |
| 1.8573 | 274700 | 0.444 | - | - | - |
| 1.8580 | 274800 | 0.4539 | - | - | - |
| 1.8587 | 274900 | 0.4407 | - | - | - |
| 1.8594 | 275000 | 0.4248 | 0.5281 | 0.7640 | - |
| 1.8600 | 275100 | 0.4386 | - | - | - |
| 1.8607 | 275200 | 0.4254 | - | - | - |
| 1.8614 | 275300 | 0.3987 | - | - | - |
| 1.8621 | 275400 | 0.4319 | - | - | - |
| 1.8627 | 275500 | 0.4191 | - | - | - |
| 1.8634 | 275600 | 0.4446 | - | - | - |
| 1.8641 | 275700 | 0.5099 | - | - | - |
| 1.8648 | 275800 | 0.3804 | - | - | - |
| 1.8654 | 275900 | 0.4248 | - | - | - |
| 1.8661 | 276000 | 0.4485 | - | - | - |
| 1.8668 | 276100 | 0.4388 | - | - | - |
| 1.8675 | 276200 | 0.4131 | - | - | - |
| 1.8681 | 276300 | 0.4515 | - | - | - |
| 1.8688 | 276400 | 0.4089 | - | - | - |
| 1.8695 | 276500 | 0.4571 | - | - | - |
| 1.8702 | 276600 | 0.4156 | - | - | - |
| 1.8708 | 276700 | 0.4005 | - | - | - |
| 1.8715 | 276800 | 0.388 | - | - | - |
| 1.8722 | 276900 | 0.4257 | - | - | - |
| 1.8729 | 277000 | 0.4673 | - | - | - |
| 1.8736 | 277100 | 0.4639 | - | - | - |
| 1.8742 | 277200 | 0.3981 | - | - | - |
| 1.8749 | 277300 | 0.4139 | - | - | - |
| 1.8756 | 277400 | 0.4667 | - | - | - |
| 1.8763 | 277500 | 0.4481 | - | - | - |
| 1.8769 | 277600 | 0.3864 | - | - | - |
| 1.8776 | 277700 | 0.4507 | - | - | - |
| 1.8783 | 277800 | 0.479 | - | - | - |
| 1.8790 | 277900 | 0.3917 | - | - | - |
| 1.8796 | 278000 | 0.4305 | - | - | - |
| 1.8803 | 278100 | 0.4063 | - | - | - |
| 1.8810 | 278200 | 0.4432 | - | - | - |
| 1.8817 | 278300 | 0.4194 | - | - | - |
| 1.8823 | 278400 | 0.4427 | - | - | - |
| 1.8830 | 278500 | 0.4273 | - | - | - |
| 1.8837 | 278600 | 0.385 | - | - | - |
| 1.8844 | 278700 | 0.4182 | - | - | - |
| 1.8850 | 278800 | 0.3941 | - | - | - |
| 1.8857 | 278900 | 0.4495 | - | - | - |
| 1.8864 | 279000 | 0.4479 | - | - | - |
| 1.8871 | 279100 | 0.4293 | - | - | - |
| 1.8877 | 279200 | 0.4556 | - | - | - |
| 1.8884 | 279300 | 0.413 | - | - | - |
| 1.8891 | 279400 | 0.4027 | - | - | - |
| 1.8898 | 279500 | 0.457 | - | - | - |
| 1.8905 | 279600 | 0.4444 | - | - | - |
| 1.8911 | 279700 | 0.4073 | - | - | - |
| 1.8918 | 279800 | 0.444 | - | - | - |
| 1.8925 | 279900 | 0.4101 | - | - | - |
| 1.8932 | 280000 | 0.4268 | 0.5230 | 0.7639 | - |
| 1.8938 | 280100 | 0.4286 | - | - | - |
| 1.8945 | 280200 | 0.4589 | - | - | - |
| 1.8952 | 280300 | 0.4249 | - | - | - |
| 1.8959 | 280400 | 0.4298 | - | - | - |
| 1.8965 | 280500 | 0.4286 | - | - | - |
| 1.8972 | 280600 | 0.4373 | - | - | - |
| 1.8979 | 280700 | 0.4208 | - | - | - |
| 1.8986 | 280800 | 0.4003 | - | - | - |
| 1.8992 | 280900 | 0.4227 | - | - | - |
| 1.8999 | 281000 | 0.4324 | - | - | - |
| 1.9006 | 281100 | 0.4388 | - | - | - |
| 1.9013 | 281200 | 0.4292 | - | - | - |
| 1.9019 | 281300 | 0.427 | - | - | - |
| 1.9026 | 281400 | 0.4535 | - | - | - |
| 1.9033 | 281500 | 0.407 | - | - | - |
| 1.9040 | 281600 | 0.4438 | - | - | - |
| 1.9047 | 281700 | 0.4194 | - | - | - |
| 1.9053 | 281800 | 0.4331 | - | - | - |
| 1.9060 | 281900 | 0.4341 | - | - | - |
| 1.9067 | 282000 | 0.4829 | - | - | - |
| 1.9074 | 282100 | 0.417 | - | - | - |
| 1.9080 | 282200 | 0.4421 | - | - | - |
| 1.9087 | 282300 | 0.4868 | - | - | - |
| 1.9094 | 282400 | 0.465 | - | - | - |
| 1.9101 | 282500 | 0.4357 | - | - | - |
| 1.9107 | 282600 | 0.3994 | - | - | - |
| 1.9114 | 282700 | 0.4579 | - | - | - |
| 1.9121 | 282800 | 0.4337 | - | - | - |
| 1.9128 | 282900 | 0.4628 | - | - | - |
| 1.9134 | 283000 | 0.4021 | - | - | - |
| 1.9141 | 283100 | 0.3979 | - | - | - |
| 1.9148 | 283200 | 0.4485 | - | - | - |
| 1.9155 | 283300 | 0.4469 | - | - | - |
| 1.9161 | 283400 | 0.4323 | - | - | - |
| 1.9168 | 283500 | 0.4509 | - | - | - |
| 1.9175 | 283600 | 0.3932 | - | - | - |
| 1.9182 | 283700 | 0.4433 | - | - | - |
| 1.9189 | 283800 | 0.4608 | - | - | - |
| 1.9195 | 283900 | 0.4664 | - | - | - |
| 1.9202 | 284000 | 0.4297 | - | - | - |
| 1.9209 | 284100 | 0.4383 | - | - | - |
| 1.9216 | 284200 | 0.3961 | - | - | - |
| 1.9222 | 284300 | 0.4311 | - | - | - |
| 1.9229 | 284400 | 0.4525 | - | - | - |
| 1.9236 | 284500 | 0.3962 | - | - | - |
| 1.9243 | 284600 | 0.4037 | - | - | - |
| 1.9249 | 284700 | 0.4356 | - | - | - |
| 1.9256 | 284800 | 0.4548 | - | - | - |
| 1.9263 | 284900 | 0.4386 | - | - | - |
| 1.9270 | 285000 | 0.4011 | 0.5227 | 0.7744 | - |
| 1.9276 | 285100 | 0.4305 | - | - | - |
| 1.9283 | 285200 | 0.4543 | - | - | - |
| 1.9290 | 285300 | 0.4194 | - | - | - |
| 1.9297 | 285400 | 0.4191 | - | - | - |
| 1.9303 | 285500 | 0.3797 | - | - | - |
| 1.9310 | 285600 | 0.4355 | - | - | - |
| 1.9317 | 285700 | 0.4265 | - | - | - |
| 1.9324 | 285800 | 0.4184 | - | - | - |
| 1.9330 | 285900 | 0.4458 | - | - | - |
| 1.9337 | 286000 | 0.4158 | - | - | - |
| 1.9344 | 286100 | 0.4428 | - | - | - |
| 1.9351 | 286200 | 0.48 | - | - | - |
| 1.9358 | 286300 | 0.4347 | - | - | - |
| 1.9364 | 286400 | 0.4158 | - | - | - |
| 1.9371 | 286500 | 0.439 | - | - | - |
| 1.9378 | 286600 | 0.4389 | - | - | - |
| 1.9385 | 286700 | 0.421 | - | - | - |
| 1.9391 | 286800 | 0.4327 | - | - | - |
| 1.9398 | 286900 | 0.4548 | - | - | - |
| 1.9405 | 287000 | 0.411 | - | - | - |
| 1.9412 | 287100 | 0.4257 | - | - | - |
| 1.9418 | 287200 | 0.4002 | - | - | - |
| 1.9425 | 287300 | 0.4075 | - | - | - |
| 1.9432 | 287400 | 0.4437 | - | - | - |
| 1.9439 | 287500 | 0.3973 | - | - | - |
| 1.9445 | 287600 | 0.4458 | - | - | - |
| 1.9452 | 287700 | 0.3918 | - | - | - |
| 1.9459 | 287800 | 0.4036 | - | - | - |
| 1.9466 | 287900 | 0.3801 | - | - | - |
| 1.9472 | 288000 | 0.4574 | - | - | - |
| 1.9479 | 288100 | 0.4534 | - | - | - |
| 1.9486 | 288200 | 0.401 | - | - | - |
| 1.9493 | 288300 | 0.4324 | - | - | - |
| 1.9500 | 288400 | 0.4558 | - | - | - |
| 1.9506 | 288500 | 0.4266 | - | - | - |
| 1.9513 | 288600 | 0.4431 | - | - | - |
| 1.9520 | 288700 | 0.4412 | - | - | - |
| 1.9527 | 288800 | 0.4375 | - | - | - |
| 1.9533 | 288900 | 0.4315 | - | - | - |
| 1.9540 | 289000 | 0.4364 | - | - | - |
| 1.9547 | 289100 | 0.4571 | - | - | - |
| 1.9554 | 289200 | 0.3804 | - | - | - |
| 1.9560 | 289300 | 0.4015 | - | - | - |
| 1.9567 | 289400 | 0.4246 | - | - | - |
| 1.9574 | 289500 | 0.4271 | - | - | - |
| 1.9581 | 289600 | 0.4617 | - | - | - |
| 1.9587 | 289700 | 0.487 | - | - | - |
| 1.9594 | 289800 | 0.4578 | - | - | - |
| 1.9601 | 289900 | 0.4246 | - | - | - |
| 1.9608 | 290000 | 0.4446 | 0.5157 | 0.7655 | - |
| 1.9614 | 290100 | 0.4153 | - | - | - |
| 1.9621 | 290200 | 0.3869 | - | - | - |
| 1.9628 | 290300 | 0.4247 | - | - | - |
| 1.9635 | 290400 | 0.4867 | - | - | - |
| 1.9642 | 290500 | 0.4609 | - | - | - |
| 1.9648 | 290600 | 0.3966 | - | - | - |
| 1.9655 | 290700 | 0.4386 | - | - | - |
| 1.9662 | 290800 | 0.4427 | - | - | - |
| 1.9669 | 290900 | 0.4297 | - | - | - |
| 1.9675 | 291000 | 0.4346 | - | - | - |
| 1.9682 | 291100 | 0.468 | - | - | - |
| 1.9689 | 291200 | 0.4293 | - | - | - |
| 1.9696 | 291300 | 0.4852 | - | - | - |
| 1.9702 | 291400 | 0.4483 | - | - | - |
| 1.9709 | 291500 | 0.411 | - | - | - |
| 1.9716 | 291600 | 0.4304 | - | - | - |
| 1.9723 | 291700 | 0.4375 | - | - | - |
| 1.9729 | 291800 | 0.4095 | - | - | - |
| 1.9736 | 291900 | 0.4472 | - | - | - |
| 1.9743 | 292000 | 0.4483 | - | - | - |
| 1.9750 | 292100 | 0.4129 | - | - | - |
| 1.9756 | 292200 | 0.4491 | - | - | - |
| 1.9763 | 292300 | 0.4207 | - | - | - |
| 1.9770 | 292400 | 0.4899 | - | - | - |
| 1.9777 | 292500 | 0.4511 | - | - | - |
| 1.9784 | 292600 | 0.4087 | - | - | - |
| 1.9790 | 292700 | 0.4077 | - | - | - |
| 1.9797 | 292800 | 0.4228 | - | - | - |
| 1.9804 | 292900 | 0.4071 | - | - | - |
| 1.9811 | 293000 | 0.4288 | - | - | - |
| 1.9817 | 293100 | 0.4238 | - | - | - |
| 1.9824 | 293200 | 0.4348 | - | - | - |
| 1.9831 | 293300 | 0.4318 | - | - | - |
| 1.9838 | 293400 | 0.489 | - | - | - |
| 1.9844 | 293500 | 0.4077 | - | - | - |
| 1.9851 | 293600 | 0.4265 | - | - | - |
| 1.9858 | 293700 | 0.4415 | - | - | - |
| 1.9865 | 293800 | 0.4488 | - | - | - |
| 1.9871 | 293900 | 0.4495 | - | - | - |
| 1.9878 | 294000 | 0.4473 | - | - | - |
| 1.9885 | 294100 | 0.4289 | - | - | - |
| 1.9892 | 294200 | 0.4017 | - | - | - |
| 1.9898 | 294300 | 0.5058 | - | - | - |
| 1.9905 | 294400 | 0.4392 | - | - | - |
| 1.9912 | 294500 | 0.4715 | - | - | - |
| 1.9919 | 294600 | 0.4536 | - | - | - |
| 1.9925 | 294700 | 0.4095 | - | - | - |
| 1.9932 | 294800 | 0.4449 | - | - | - |
| 1.9939 | 294900 | 0.4382 | - | - | - |
| 1.9946 | 295000 | 0.3763 | 0.5282 | 0.7654 | - |
| 1.9953 | 295100 | 0.4293 | - | - | - |
| 1.9959 | 295200 | 0.4237 | - | - | - |
| 1.9966 | 295300 | 0.4238 | - | - | - |
| 1.9973 | 295400 | 0.4289 | - | - | - |
| 1.9980 | 295500 | 0.4223 | - | - | - |
| 1.9986 | 295600 | 0.425 | - | - | - |
| 1.9993 | 295700 | 0.4192 | - | - | - |
| 2.0000 | 295800 | 0.4516 | - | - | - |
| 2.0007 | 295900 | 0.4469 | - | - | - |
| 2.0013 | 296000 | 0.407 | - | - | - |
| 2.0020 | 296100 | 0.4458 | - | - | - |
| 2.0027 | 296200 | 0.4159 | - | - | - |
| 2.0034 | 296300 | 0.4025 | - | - | - |
| 2.0040 | 296400 | 0.418 | - | - | - |
| 2.0047 | 296500 | 0.4382 | - | - | - |
| 2.0054 | 296600 | 0.3907 | - | - | - |
| 2.0061 | 296700 | 0.4566 | - | - | - |
| 2.0067 | 296800 | 0.4067 | - | - | - |
| 2.0074 | 296900 | 0.4219 | - | - | - |
| 2.0081 | 297000 | 0.3557 | - | - | - |
| 2.0088 | 297100 | 0.4436 | - | - | - |
| 2.0095 | 297200 | 0.4457 | - | - | - |
| 2.0101 | 297300 | 0.4133 | - | - | - |
| 2.0108 | 297400 | 0.3949 | - | - | - |
| 2.0115 | 297500 | 0.4555 | - | - | - |
| 2.0122 | 297600 | 0.4052 | - | - | - |
| 2.0128 | 297700 | 0.3796 | - | - | - |
| 2.0135 | 297800 | 0.4332 | - | - | - |
| 2.0142 | 297900 | 0.444 | - | - | - |
| 2.0149 | 298000 | 0.4262 | - | - | - |
| 2.0155 | 298100 | 0.4136 | - | - | - |
| 2.0162 | 298200 | 0.443 | - | - | - |
| 2.0169 | 298300 | 0.4485 | - | - | - |
| 2.0176 | 298400 | 0.4267 | - | - | - |
| 2.0182 | 298500 | 0.409 | - | - | - |
| 2.0189 | 298600 | 0.4439 | - | - | - |
| 2.0196 | 298700 | 0.4479 | - | - | - |
| 2.0203 | 298800 | 0.3977 | - | - | - |
| 2.0209 | 298900 | 0.3977 | - | - | - |
| 2.0216 | 299000 | 0.4399 | - | - | - |
| 2.0223 | 299100 | 0.4667 | - | - | - |
| 2.0230 | 299200 | 0.4016 | - | - | - |
| 2.0237 | 299300 | 0.4377 | - | - | - |
| 2.0243 | 299400 | 0.3961 | - | - | - |
| 2.0250 | 299500 | 0.3777 | - | - | - |
| 2.0257 | 299600 | 0.4515 | - | - | - |
| 2.0264 | 299700 | 0.4365 | - | - | - |
| 2.0270 | 299800 | 0.396 | - | - | - |
| 2.0277 | 299900 | 0.4141 | - | - | - |
| 2.0284 | 300000 | 0.3807 | 0.5224 | 0.7684 | - |
| 2.0291 | 300100 | 0.4437 | - | - | - |
| 2.0297 | 300200 | 0.4198 | - | - | - |
| 2.0304 | 300300 | 0.4118 | - | - | - |
| 2.0311 | 300400 | 0.429 | - | - | - |
| 2.0318 | 300500 | 0.4622 | - | - | - |
| 2.0324 | 300600 | 0.4205 | - | - | - |
| 2.0331 | 300700 | 0.3693 | - | - | - |
| 2.0338 | 300800 | 0.4434 | - | - | - |
| 2.0345 | 300900 | 0.4213 | - | - | - |
| 2.0351 | 301000 | 0.4038 | - | - | - |
| 2.0358 | 301100 | 0.4501 | - | - | - |
| 2.0365 | 301200 | 0.4485 | - | - | - |
| 2.0372 | 301300 | 0.4327 | - | - | - |
| 2.0378 | 301400 | 0.4234 | - | - | - |
| 2.0385 | 301500 | 0.4047 | - | - | - |
| 2.0392 | 301600 | 0.4492 | - | - | - |
| 2.0399 | 301700 | 0.4241 | - | - | - |
| 2.0406 | 301800 | 0.3889 | - | - | - |
| 2.0412 | 301900 | 0.487 | - | - | - |
| 2.0419 | 302000 | 0.4308 | - | - | - |
| 2.0426 | 302100 | 0.4358 | - | - | - |
| 2.0433 | 302200 | 0.4174 | - | - | - |
| 2.0439 | 302300 | 0.409 | - | - | - |
| 2.0446 | 302400 | 0.4416 | - | - | - |
| 2.0453 | 302500 | 0.3959 | - | - | - |
| 2.0460 | 302600 | 0.4356 | - | - | - |
| 2.0466 | 302700 | 0.4229 | - | - | - |
| 2.0473 | 302800 | 0.3872 | - | - | - |
| 2.0480 | 302900 | 0.4625 | - | - | - |
| 2.0487 | 303000 | 0.4454 | - | - | - |
| 2.0493 | 303100 | 0.4498 | - | - | - |
| 2.0500 | 303200 | 0.3975 | - | - | - |
| 2.0507 | 303300 | 0.4062 | - | - | - |
| 2.0514 | 303400 | 0.4656 | - | - | - |
| 2.0520 | 303500 | 0.4723 | - | - | - |
| 2.0527 | 303600 | 0.4135 | - | - | - |
| 2.0534 | 303700 | 0.3935 | - | - | - |
| 2.0541 | 303800 | 0.4563 | - | - | - |
| 2.0548 | 303900 | 0.4464 | - | - | - |
| 2.0554 | 304000 | 0.4218 | - | - | - |
| 2.0561 | 304100 | 0.4087 | - | - | - |
| 2.0568 | 304200 | 0.3859 | - | - | - |
| 2.0575 | 304300 | 0.4219 | - | - | - |
| 2.0581 | 304400 | 0.415 | - | - | - |
| 2.0588 | 304500 | 0.3951 | - | - | - |
| 2.0595 | 304600 | 0.4004 | - | - | - |
| 2.0602 | 304700 | 0.4075 | - | - | - |
| 2.0608 | 304800 | 0.3995 | - | - | - |
| 2.0615 | 304900 | 0.398 | - | - | - |
| 2.0622 | 305000 | 0.4554 | 0.5321 | 0.7666 | - |
| 2.0629 | 305100 | 0.391 | - | - | - |
| 2.0635 | 305200 | 0.4388 | - | - | - |
| 2.0642 | 305300 | 0.4536 | - | - | - |
| 2.0649 | 305400 | 0.3989 | - | - | - |
| 2.0656 | 305500 | 0.432 | - | - | - |
| 2.0662 | 305600 | 0.4117 | - | - | - |
| 2.0669 | 305700 | 0.4462 | - | - | - |
| 2.0676 | 305800 | 0.4297 | - | - | - |
| 2.0683 | 305900 | 0.4357 | - | - | - |
| 2.0690 | 306000 | 0.418 | - | - | - |
| 2.0696 | 306100 | 0.4303 | - | - | - |
| 2.0703 | 306200 | 0.4426 | - | - | - |
| 2.0710 | 306300 | 0.421 | - | - | - |
| 2.0717 | 306400 | 0.3861 | - | - | - |
| 2.0723 | 306500 | 0.4225 | - | - | - |
| 2.0730 | 306600 | 0.4008 | - | - | - |
| 2.0737 | 306700 | 0.4305 | - | - | - |
| 2.0744 | 306800 | 0.4126 | - | - | - |
| 2.0750 | 306900 | 0.4306 | - | - | - |
| 2.0757 | 307000 | 0.3974 | - | - | - |
| 2.0764 | 307100 | 0.4338 | - | - | - |
| 2.0771 | 307200 | 0.3872 | - | - | - |
| 2.0777 | 307300 | 0.3997 | - | - | - |
| 2.0784 | 307400 | 0.4804 | - | - | - |
| 2.0791 | 307500 | 0.4391 | - | - | - |
| 2.0798 | 307600 | 0.407 | - | - | - |
| 2.0804 | 307700 | 0.4084 | - | - | - |
| 2.0811 | 307800 | 0.4681 | - | - | - |
| 2.0818 | 307900 | 0.4411 | - | - | - |
| 2.0825 | 308000 | 0.3869 | - | - | - |
| 2.0832 | 308100 | 0.3637 | - | - | - |
| 2.0838 | 308200 | 0.4436 | - | - | - |
| 2.0845 | 308300 | 0.3722 | - | - | - |
| 2.0852 | 308400 | 0.3904 | - | - | - |
| 2.0859 | 308500 | 0.3784 | - | - | - |
| 2.0865 | 308600 | 0.425 | - | - | - |
| 2.0872 | 308700 | 0.4123 | - | - | - |
| 2.0879 | 308800 | 0.4148 | - | - | - |
| 2.0886 | 308900 | 0.4038 | - | - | - |
| 2.0892 | 309000 | 0.4086 | - | - | - |
| 2.0899 | 309100 | 0.3961 | - | - | - |
| 2.0906 | 309200 | 0.4136 | - | - | - |
| 2.0913 | 309300 | 0.39 | - | - | - |
| 2.0919 | 309400 | 0.4193 | - | - | - |
| 2.0926 | 309500 | 0.4044 | - | - | - |
| 2.0933 | 309600 | 0.4245 | - | - | - |
| 2.0940 | 309700 | 0.3641 | - | - | - |
| 2.0946 | 309800 | 0.406 | - | - | - |
| 2.0953 | 309900 | 0.3862 | - | - | - |
| 2.0960 | 310000 | 0.3684 | 0.5252 | 0.7740 | - |
| 2.0967 | 310100 | 0.3781 | - | - | - |
| 2.0973 | 310200 | 0.4007 | - | - | - |
| 2.0980 | 310300 | 0.4782 | - | - | - |
| 2.0987 | 310400 | 0.4061 | - | - | - |
| 2.0994 | 310500 | 0.3932 | - | - | - |
| 2.1001 | 310600 | 0.4176 | - | - | - |
| 2.1007 | 310700 | 0.4318 | - | - | - |
| 2.1014 | 310800 | 0.3804 | - | - | - |
| 2.1021 | 310900 | 0.4028 | - | - | - |
| 2.1028 | 311000 | 0.3499 | - | - | - |
| 2.1034 | 311100 | 0.3664 | - | - | - |
| 2.1041 | 311200 | 0.4006 | - | - | - |
| 2.1048 | 311300 | 0.3781 | - | - | - |
| 2.1055 | 311400 | 0.4195 | - | - | - |
| 2.1061 | 311500 | 0.4168 | - | - | - |
| 2.1068 | 311600 | 0.3695 | - | - | - |
| 2.1075 | 311700 | 0.4181 | - | - | - |
| 2.1082 | 311800 | 0.3773 | - | - | - |
| 2.1088 | 311900 | 0.3809 | - | - | - |
| 2.1095 | 312000 | 0.4087 | - | - | - |
| 2.1102 | 312100 | 0.4 | - | - | - |
| 2.1109 | 312200 | 0.4093 | - | - | - |
| 2.1115 | 312300 | 0.4177 | - | - | - |
| 2.1122 | 312400 | 0.3769 | - | - | - |
| 2.1129 | 312500 | 0.384 | - | - | - |
| 2.1136 | 312600 | 0.3989 | - | - | - |
| 2.1143 | 312700 | 0.4194 | - | - | - |
| 2.1149 | 312800 | 0.3889 | - | - | - |
| 2.1156 | 312900 | 0.4164 | - | - | - |
| 2.1163 | 313000 | 0.3601 | - | - | - |
| 2.1170 | 313100 | 0.4029 | - | - | - |
| 2.1176 | 313200 | 0.4404 | - | - | - |
| 2.1183 | 313300 | 0.4007 | - | - | - |
| 2.1190 | 313400 | 0.3832 | - | - | - |
| 2.1197 | 313500 | 0.4195 | - | - | - |
| 2.1203 | 313600 | 0.3591 | - | - | - |
| 2.1210 | 313700 | 0.432 | - | - | - |
| 2.1217 | 313800 | 0.442 | - | - | - |
| 2.1224 | 313900 | 0.4006 | - | - | - |
| 2.1230 | 314000 | 0.3803 | - | - | - |
| 2.1237 | 314100 | 0.3819 | - | - | - |
| 2.1244 | 314200 | 0.3708 | - | - | - |
| 2.1251 | 314300 | 0.3983 | - | - | - |
| 2.1257 | 314400 | 0.4346 | - | - | - |
| 2.1264 | 314500 | 0.3899 | - | - | - |
| 2.1271 | 314600 | 0.3963 | - | - | - |
| 2.1278 | 314700 | 0.3857 | - | - | - |
| 2.1285 | 314800 | 0.4109 | - | - | - |
| 2.1291 | 314900 | 0.4186 | - | - | - |
| 2.1298 | 315000 | 0.3896 | 0.5203 | 0.7656 | - |
| 2.1305 | 315100 | 0.446 | - | - | - |
| 2.1312 | 315200 | 0.4358 | - | - | - |
| 2.1318 | 315300 | 0.4023 | - | - | - |
| 2.1325 | 315400 | 0.4318 | - | - | - |
| 2.1332 | 315500 | 0.4045 | - | - | - |
| 2.1339 | 315600 | 0.4068 | - | - | - |
| 2.1345 | 315700 | 0.4294 | - | - | - |
| 2.1352 | 315800 | 0.415 | - | - | - |
| 2.1359 | 315900 | 0.399 | - | - | - |
| 2.1366 | 316000 | 0.4164 | - | - | - |
| 2.1372 | 316100 | 0.422 | - | - | - |
| 2.1379 | 316200 | 0.3602 | - | - | - |
| 2.1386 | 316300 | 0.3743 | - | - | - |
| 2.1393 | 316400 | 0.3487 | - | - | - |
| 2.1399 | 316500 | 0.4144 | - | - | - |
| 2.1406 | 316600 | 0.4056 | - | - | - |
| 2.1413 | 316700 | 0.3964 | - | - | - |
| 2.1420 | 316800 | 0.3789 | - | - | - |
| 2.1426 | 316900 | 0.3668 | - | - | - |
| 2.1433 | 317000 | 0.4127 | - | - | - |
| 2.1440 | 317100 | 0.4342 | - | - | - |
| 2.1447 | 317200 | 0.3823 | - | - | - |
| 2.1454 | 317300 | 0.3691 | - | - | - |
| 2.1460 | 317400 | 0.4049 | - | - | - |
| 2.1467 | 317500 | 0.3894 | - | - | - |
| 2.1474 | 317600 | 0.3448 | - | - | - |
| 2.1481 | 317700 | 0.3925 | - | - | - |
| 2.1487 | 317800 | 0.4581 | - | - | - |
| 2.1494 | 317900 | 0.3603 | - | - | - |
| 2.1501 | 318000 | 0.4609 | - | - | - |
| 2.1508 | 318100 | 0.411 | - | - | - |
| 2.1514 | 318200 | 0.3565 | - | - | - |
| 2.1521 | 318300 | 0.4125 | - | - | - |
| 2.1528 | 318400 | 0.3601 | - | - | - |
| 2.1535 | 318500 | 0.4099 | - | - | - |
| 2.1541 | 318600 | 0.4131 | - | - | - |
| 2.1548 | 318700 | 0.4037 | - | - | - |
| 2.1555 | 318800 | 0.3675 | - | - | - |
| 2.1562 | 318900 | 0.4101 | - | - | - |
| 2.1568 | 319000 | 0.4596 | - | - | - |
| 2.1575 | 319100 | 0.4104 | - | - | - |
| 2.1582 | 319200 | 0.4252 | - | - | - |
| 2.1589 | 319300 | 0.4296 | - | - | - |
| 2.1596 | 319400 | 0.3727 | - | - | - |
| 2.1602 | 319500 | 0.3954 | - | - | - |
| 2.1609 | 319600 | 0.3897 | - | - | - |
| 2.1616 | 319700 | 0.4039 | - | - | - |
| 2.1623 | 319800 | 0.4159 | - | - | - |
| 2.1629 | 319900 | 0.3736 | - | - | - |
| 2.1636 | 320000 | 0.3546 | 0.5284 | 0.7738 | - |
| 2.1643 | 320100 | 0.3887 | - | - | - |
| 2.1650 | 320200 | 0.4216 | - | - | - |
| 2.1656 | 320300 | 0.386 | - | - | - |
| 2.1663 | 320400 | 0.3968 | - | - | - |
| 2.1670 | 320500 | 0.4222 | - | - | - |
| 2.1677 | 320600 | 0.3705 | - | - | - |
| 2.1683 | 320700 | 0.3858 | - | - | - |
| 2.1690 | 320800 | 0.3554 | - | - | - |
| 2.1697 | 320900 | 0.4083 | - | - | - |
| 2.1704 | 321000 | 0.3554 | - | - | - |
| 2.1710 | 321100 | 0.3752 | - | - | - |
| 2.1717 | 321200 | 0.3802 | - | - | - |
| 2.1724 | 321300 | 0.3948 | - | - | - |
| 2.1731 | 321400 | 0.4056 | - | - | - |
| 2.1738 | 321500 | 0.4246 | - | - | - |
| 2.1744 | 321600 | 0.445 | - | - | - |
| 2.1751 | 321700 | 0.3702 | - | - | - |
| 2.1758 | 321800 | 0.4039 | - | - | - |
| 2.1765 | 321900 | 0.4033 | - | - | - |
| 2.1771 | 322000 | 0.3713 | - | - | - |
| 2.1778 | 322100 | 0.4253 | - | - | - |
| 2.1785 | 322200 | 0.4437 | - | - | - |
| 2.1792 | 322300 | 0.3943 | - | - | - |
| 2.1798 | 322400 | 0.3989 | - | - | - |
| 2.1805 | 322500 | 0.3995 | - | - | - |
| 2.1812 | 322600 | 0.3423 | - | - | - |
| 2.1819 | 322700 | 0.4021 | - | - | - |
| 2.1825 | 322800 | 0.3885 | - | - | - |
| 2.1832 | 322900 | 0.4461 | - | - | - |
| 2.1839 | 323000 | 0.3759 | - | - | - |
| 2.1846 | 323100 | 0.3364 | - | - | - |
| 2.1852 | 323200 | 0.4253 | - | - | - |
| 2.1859 | 323300 | 0.3867 | - | - | - |
| 2.1866 | 323400 | 0.3756 | - | - | - |
| 2.1873 | 323500 | 0.3929 | - | - | - |
| 2.1880 | 323600 | 0.3872 | - | - | - |
| 2.1886 | 323700 | 0.3937 | - | - | - |
| 2.1893 | 323800 | 0.4093 | - | - | - |
| 2.1900 | 323900 | 0.4093 | - | - | - |
| 2.1907 | 324000 | 0.3772 | - | - | - |
| 2.1913 | 324100 | 0.4197 | - | - | - |
| 2.1920 | 324200 | 0.3644 | - | - | - |
| 2.1927 | 324300 | 0.3882 | - | - | - |
| 2.1934 | 324400 | 0.416 | - | - | - |
| 2.1940 | 324500 | 0.3779 | - | - | - |
| 2.1947 | 324600 | 0.3566 | - | - | - |
| 2.1954 | 324700 | 0.3495 | - | - | - |
| 2.1961 | 324800 | 0.3543 | - | - | - |
| 2.1967 | 324900 | 0.3713 | - | - | - |
| 2.1974 | 325000 | 0.467 | 0.5297 | 0.7734 | - |
| 2.1981 | 325100 | 0.3857 | - | - | - |
| 2.1988 | 325200 | 0.3898 | - | - | - |
| 2.1994 | 325300 | 0.35 | - | - | - |
| 2.2001 | 325400 | 0.3735 | - | - | - |
| 2.2008 | 325500 | 0.4056 | - | - | - |
| 2.2015 | 325600 | 0.3535 | - | - | - |
| 2.2021 | 325700 | 0.3773 | - | - | - |
| 2.2028 | 325800 | 0.3855 | - | - | - |
| 2.2035 | 325900 | 0.3861 | - | - | - |
| 2.2042 | 326000 | 0.3749 | - | - | - |
| 2.2049 | 326100 | 0.3548 | - | - | - |
| 2.2055 | 326200 | 0.42 | - | - | - |
| 2.2062 | 326300 | 0.3895 | - | - | - |
| 2.2069 | 326400 | 0.3647 | - | - | - |
| 2.2076 | 326500 | 0.4055 | - | - | - |
| 2.2082 | 326600 | 0.3698 | - | - | - |
| 2.2089 | 326700 | 0.3782 | - | - | - |
| 2.2096 | 326800 | 0.3498 | - | - | - |
| 2.2103 | 326900 | 0.347 | - | - | - |
| 2.2109 | 327000 | 0.3845 | - | - | - |
| 2.2116 | 327100 | 0.3584 | - | - | - |
| 2.2123 | 327200 | 0.3632 | - | - | - |
| 2.2130 | 327300 | 0.3436 | - | - | - |
| 2.2136 | 327400 | 0.418 | - | - | - |
| 2.2143 | 327500 | 0.3973 | - | - | - |
| 2.2150 | 327600 | 0.3823 | - | - | - |
| 2.2157 | 327700 | 0.3455 | - | - | - |
| 2.2163 | 327800 | 0.3403 | - | - | - |
| 2.2170 | 327900 | 0.3911 | - | - | - |
| 2.2177 | 328000 | 0.3847 | - | - | - |
| 2.2184 | 328100 | 0.4192 | - | - | - |
| 2.2191 | 328200 | 0.3886 | - | - | - |
| 2.2197 | 328300 | 0.4373 | - | - | - |
| 2.2204 | 328400 | 0.3881 | - | - | - |
| 2.2211 | 328500 | 0.3421 | - | - | - |
| 2.2218 | 328600 | 0.399 | - | - | - |
| 2.2224 | 328700 | 0.3896 | - | - | - |
| 2.2231 | 328800 | 0.3802 | - | - | - |
| 2.2238 | 328900 | 0.4061 | - | - | - |
| 2.2245 | 329000 | 0.3945 | - | - | - |
| 2.2251 | 329100 | 0.374 | - | - | - |
| 2.2258 | 329200 | 0.3704 | - | - | - |
| 2.2265 | 329300 | 0.3794 | - | - | - |
| 2.2272 | 329400 | 0.3719 | - | - | - |
| 2.2278 | 329500 | 0.3886 | - | - | - |
| 2.2285 | 329600 | 0.3672 | - | - | - |
| 2.2292 | 329700 | 0.3701 | - | - | - |
| 2.2299 | 329800 | 0.4168 | - | - | - |
| 2.2305 | 329900 | 0.4247 | - | - | - |
| 2.2312 | 330000 | 0.4098 | 0.5194 | 0.7727 | - |
| 2.2319 | 330100 | 0.3466 | - | - | - |
| 2.2326 | 330200 | 0.3868 | - | - | - |
| 2.2333 | 330300 | 0.3808 | - | - | - |
| 2.2339 | 330400 | 0.3772 | - | - | - |
| 2.2346 | 330500 | 0.3553 | - | - | - |
| 2.2353 | 330600 | 0.4153 | - | - | - |
| 2.2360 | 330700 | 0.3732 | - | - | - |
| 2.2366 | 330800 | 0.3693 | - | - | - |
| 2.2373 | 330900 | 0.3348 | - | - | - |
| 2.2380 | 331000 | 0.3395 | - | - | - |
| 2.2387 | 331100 | 0.4026 | - | - | - |
| 2.2393 | 331200 | 0.3987 | - | - | - |
| 2.2400 | 331300 | 0.377 | - | - | - |
| 2.2407 | 331400 | 0.3521 | - | - | - |
| 2.2414 | 331500 | 0.393 | - | - | - |
| 2.2420 | 331600 | 0.358 | - | - | - |
| 2.2427 | 331700 | 0.382 | - | - | - |
| 2.2434 | 331800 | 0.3733 | - | - | - |
| 2.2441 | 331900 | 0.3853 | - | - | - |
| 2.2447 | 332000 | 0.3678 | - | - | - |
| 2.2454 | 332100 | 0.3532 | - | - | - |
| 2.2461 | 332200 | 0.351 | - | - | - |
| 2.2468 | 332300 | 0.4066 | - | - | - |
| 2.2474 | 332400 | 0.3724 | - | - | - |
| 2.2481 | 332500 | 0.4137 | - | - | - |
| 2.2488 | 332600 | 0.3458 | - | - | - |
| 2.2495 | 332700 | 0.4008 | - | - | - |
| 2.2502 | 332800 | 0.3615 | - | - | - |
| 2.2508 | 332900 | 0.3783 | - | - | - |
| 2.2515 | 333000 | 0.3997 | - | - | - |
| 2.2522 | 333100 | 0.3563 | - | - | - |
| 2.2529 | 333200 | 0.3533 | - | - | - |
| 2.2535 | 333300 | 0.3906 | - | - | - |
| 2.2542 | 333400 | 0.3795 | - | - | - |
| 2.2549 | 333500 | 0.3917 | - | - | - |
| 2.2556 | 333600 | 0.3336 | - | - | - |
| 2.2562 | 333700 | 0.3498 | - | - | - |
| 2.2569 | 333800 | 0.4161 | - | - | - |
| 2.2576 | 333900 | 0.372 | - | - | - |
| 2.2583 | 334000 | 0.452 | - | - | - |
| 2.2589 | 334100 | 0.3852 | - | - | - |
| 2.2596 | 334200 | 0.3791 | - | - | - |
| 2.2603 | 334300 | 0.353 | - | - | - |
| 2.2610 | 334400 | 0.368 | - | - | - |
| 2.2616 | 334500 | 0.3467 | - | - | - |
| 2.2623 | 334600 | 0.3362 | - | - | - |
| 2.2630 | 334700 | 0.4289 | - | - | - |
| 2.2637 | 334800 | 0.3666 | - | - | - |
| 2.2644 | 334900 | 0.3897 | - | - | - |
| 2.2650 | 335000 | 0.3481 | 0.5218 | 0.7769 | - |
| 2.2657 | 335100 | 0.3705 | - | - | - |
| 2.2664 | 335200 | 0.3336 | - | - | - |
| 2.2671 | 335300 | 0.3849 | - | - | - |
| 2.2677 | 335400 | 0.3565 | - | - | - |
| 2.2684 | 335500 | 0.388 | - | - | - |
| 2.2691 | 335600 | 0.4085 | - | - | - |
| 2.2698 | 335700 | 0.3549 | - | - | - |
| 2.2704 | 335800 | 0.4103 | - | - | - |
| 2.2711 | 335900 | 0.3763 | - | - | - |
| 2.2718 | 336000 | 0.3856 | - | - | - |
| 2.2725 | 336100 | 0.3683 | - | - | - |
| 2.2731 | 336200 | 0.3458 | - | - | - |
| 2.2738 | 336300 | 0.373 | - | - | - |
| 2.2745 | 336400 | 0.3307 | - | - | - |
| 2.2752 | 336500 | 0.3565 | - | - | - |
| 2.2758 | 336600 | 0.39 | - | - | - |
| 2.2765 | 336700 | 0.3706 | - | - | - |
| 2.2772 | 336800 | 0.3826 | - | - | - |
| 2.2779 | 336900 | 0.3599 | - | - | - |
| 2.2786 | 337000 | 0.4095 | - | - | - |
| 2.2792 | 337100 | 0.4099 | - | - | - |
| 2.2799 | 337200 | 0.3185 | - | - | - |
| 2.2806 | 337300 | 0.3728 | - | - | - |
| 2.2813 | 337400 | 0.3797 | - | - | - |
| 2.2819 | 337500 | 0.3617 | - | - | - |
| 2.2826 | 337600 | 0.4147 | - | - | - |
| 2.2833 | 337700 | 0.3829 | - | - | - |
| 2.2840 | 337800 | 0.4415 | - | - | - |
| 2.2846 | 337900 | 0.3577 | - | - | - |
| 2.2853 | 338000 | 0.3646 | - | - | - |
| 2.2860 | 338100 | 0.3344 | - | - | - |
| 2.2867 | 338200 | 0.3517 | - | - | - |
| 2.2873 | 338300 | 0.3849 | - | - | - |
| 2.2880 | 338400 | 0.3506 | - | - | - |
| 2.2887 | 338500 | 0.3844 | - | - | - |
| 2.2894 | 338600 | 0.3481 | - | - | - |
| 2.2900 | 338700 | 0.3841 | - | - | - |
| 2.2907 | 338800 | 0.3538 | - | - | - |
| 2.2914 | 338900 | 0.35 | - | - | - |
| 2.2921 | 339000 | 0.372 | - | - | - |
| 2.2927 | 339100 | 0.3523 | - | - | - |
| 2.2934 | 339200 | 0.378 | - | - | - |
| 2.2941 | 339300 | 0.361 | - | - | - |
| 2.2948 | 339400 | 0.4187 | - | - | - |
| 2.2955 | 339500 | 0.3703 | - | - | - |
| 2.2961 | 339600 | 0.4037 | - | - | - |
| 2.2968 | 339700 | 0.3497 | - | - | - |
| 2.2975 | 339800 | 0.3576 | - | - | - |
| 2.2982 | 339900 | 0.3201 | - | - | - |
| 2.2988 | 340000 | 0.3568 | 0.5251 | 0.7756 | - |
| 2.2995 | 340100 | 0.3389 | - | - | - |
| 2.3002 | 340200 | 0.4018 | - | - | - |
| 2.3009 | 340300 | 0.389 | - | - | - |
| 2.3015 | 340400 | 0.3691 | - | - | - |
| 2.3022 | 340500 | 0.3774 | - | - | - |
| 2.3029 | 340600 | 0.3759 | - | - | - |
| 2.3036 | 340700 | 0.3328 | - | - | - |
| 2.3042 | 340800 | 0.3397 | - | - | - |
| 2.3049 | 340900 | 0.3445 | - | - | - |
| 2.3056 | 341000 | 0.3826 | - | - | - |
| 2.3063 | 341100 | 0.4337 | - | - | - |
| 2.3069 | 341200 | 0.3947 | - | - | - |
| 2.3076 | 341300 | 0.3406 | - | - | - |
| 2.3083 | 341400 | 0.3682 | - | - | - |
| 2.3090 | 341500 | 0.3912 | - | - | - |
| 2.3097 | 341600 | 0.3619 | - | - | - |
| 2.3103 | 341700 | 0.3402 | - | - | - |
| 2.3110 | 341800 | 0.3923 | - | - | - |
| 2.3117 | 341900 | 0.3586 | - | - | - |
| 2.3124 | 342000 | 0.3485 | - | - | - |
| 2.3130 | 342100 | 0.3664 | - | - | - |
| 2.3137 | 342200 | 0.3436 | - | - | - |
| 2.3144 | 342300 | 0.3594 | - | - | - |
| 2.3151 | 342400 | 0.3511 | - | - | - |
| 2.3157 | 342500 | 0.4079 | - | - | - |
| 2.3164 | 342600 | 0.3421 | - | - | - |
| 2.3171 | 342700 | 0.3569 | - | - | - |
| 2.3178 | 342800 | 0.3575 | - | - | - |
| 2.3184 | 342900 | 0.3676 | - | - | - |
| 2.3191 | 343000 | 0.4183 | - | - | - |
| 2.3198 | 343100 | 0.3657 | - | - | - |
| 2.3205 | 343200 | 0.3678 | - | - | - |
| 2.3211 | 343300 | 0.3994 | - | - | - |
| 2.3218 | 343400 | 0.3485 | - | - | - |
| 2.3225 | 343500 | 0.3985 | - | - | - |
| 2.3232 | 343600 | 0.3961 | - | - | - |
| 2.3239 | 343700 | 0.2983 | - | - | - |
| 2.3245 | 343800 | 0.3411 | - | - | - |
| 2.3252 | 343900 | 0.3604 | - | - | - |
| 2.3259 | 344000 | 0.3675 | - | - | - |
| 2.3266 | 344100 | 0.3761 | - | - | - |
| 2.3272 | 344200 | 0.3734 | - | - | - |
| 2.3279 | 344300 | 0.3309 | - | - | - |
| 2.3286 | 344400 | 0.4029 | - | - | - |
| 2.3293 | 344500 | 0.342 | - | - | - |
| 2.3299 | 344600 | 0.3492 | - | - | - |
| 2.3306 | 344700 | 0.3451 | - | - | - |
| 2.3313 | 344800 | 0.4008 | - | - | - |
| 2.3320 | 344900 | 0.3493 | - | - | - |
| 2.3326 | 345000 | 0.326 | 0.5412 | 0.7733 | - |
| 2.3333 | 345100 | 0.3139 | - | - | - |
| 2.3340 | 345200 | 0.3719 | - | - | - |
| 2.3347 | 345300 | 0.3583 | - | - | - |
| 2.3353 | 345400 | 0.3678 | - | - | - |
| 2.3360 | 345500 | 0.3616 | - | - | - |
| 2.3367 | 345600 | 0.3246 | - | - | - |
| 2.3374 | 345700 | 0.3348 | - | - | - |
| 2.3381 | 345800 | 0.3528 | - | - | - |
| 2.3387 | 345900 | 0.3182 | - | - | - |
| 2.3394 | 346000 | 0.4038 | - | - | - |
| 2.3401 | 346100 | 0.3617 | - | - | - |
| 2.3408 | 346200 | 0.3198 | - | - | - |
| 2.3414 | 346300 | 0.3481 | - | - | - |
| 2.3421 | 346400 | 0.3579 | - | - | - |
| 2.3428 | 346500 | 0.3563 | - | - | - |
| 2.3435 | 346600 | 0.369 | - | - | - |
| 2.3441 | 346700 | 0.3691 | - | - | - |
| 2.3448 | 346800 | 0.3703 | - | - | - |
| 2.3455 | 346900 | 0.4009 | - | - | - |
| 2.3462 | 347000 | 0.3651 | - | - | - |
| 2.3468 | 347100 | 0.3815 | - | - | - |
| 2.3475 | 347200 | 0.3285 | - | - | - |
| 2.3482 | 347300 | 0.3318 | - | - | - |
| 2.3489 | 347400 | 0.3602 | - | - | - |
| 2.3495 | 347500 | 0.3657 | - | - | - |
| 2.3502 | 347600 | 0.3615 | - | - | - |
| 2.3509 | 347700 | 0.3603 | - | - | - |
| 2.3516 | 347800 | 0.3146 | - | - | - |
| 2.3522 | 347900 | 0.3979 | - | - | - |
| 2.3529 | 348000 | 0.3675 | - | - | - |
| 2.3536 | 348100 | 0.3037 | - | - | - |
| 2.3543 | 348200 | 0.3659 | - | - | - |
| 2.3550 | 348300 | 0.3183 | - | - | - |
| 2.3556 | 348400 | 0.3505 | - | - | - |
| 2.3563 | 348500 | 0.3501 | - | - | - |
| 2.3570 | 348600 | 0.3783 | - | - | - |
| 2.3577 | 348700 | 0.3803 | - | - | - |
| 2.3583 | 348800 | 0.355 | - | - | - |
| 2.3590 | 348900 | 0.3779 | - | - | - |
| 2.3597 | 349000 | 0.3446 | - | - | - |
| 2.3604 | 349100 | 0.3454 | - | - | - |
| 2.3610 | 349200 | 0.3374 | - | - | - |
| 2.3617 | 349300 | 0.3362 | - | - | - |
| 2.3624 | 349400 | 0.329 | - | - | - |
| 2.3631 | 349500 | 0.3444 | - | - | - |
| 2.3637 | 349600 | 0.3005 | - | - | - |
| 2.3644 | 349700 | 0.3628 | - | - | - |
| 2.3651 | 349800 | 0.323 | - | - | - |
| 2.3658 | 349900 | 0.3409 | - | - | - |
| 2.3664 | 350000 | 0.364 | 0.5435 | 0.7704 | - |
| 2.3671 | 350100 | 0.3523 | - | - | - |
| 2.3678 | 350200 | 0.3476 | - | - | - |
| 2.3685 | 350300 | 0.3515 | - | - | - |
| 2.3692 | 350400 | 0.3502 | - | - | - |
| 2.3698 | 350500 | 0.3427 | - | - | - |
| 2.3705 | 350600 | 0.3401 | - | - | - |
| 2.3712 | 350700 | 0.3655 | - | - | - |
| 2.3719 | 350800 | 0.3542 | - | - | - |
| 2.3725 | 350900 | 0.3485 | - | - | - |
| 2.3732 | 351000 | 0.3555 | - | - | - |
| 2.3739 | 351100 | 0.3381 | - | - | - |
| 2.3746 | 351200 | 0.3128 | - | - | - |
| 2.3752 | 351300 | 0.3591 | - | - | - |
| 2.3759 | 351400 | 0.3307 | - | - | - |
| 2.3766 | 351500 | 0.3654 | - | - | - |
| 2.3773 | 351600 | 0.3197 | - | - | - |
| 2.3779 | 351700 | 0.3441 | - | - | - |
| 2.3786 | 351800 | 0.3249 | - | - | - |
| 2.3793 | 351900 | 0.3736 | - | - | - |
| 2.3800 | 352000 | 0.358 | - | - | - |
| 2.3806 | 352100 | 0.3471 | - | - | - |
| 2.3813 | 352200 | 0.362 | - | - | - |
| 2.3820 | 352300 | 0.379 | - | - | - |
| 2.3827 | 352400 | 0.3356 | - | - | - |
| 2.3834 | 352500 | 0.3377 | - | - | - |
| 2.3840 | 352600 | 0.3716 | - | - | - |
| 2.3847 | 352700 | 0.3486 | - | - | - |
| 2.3854 | 352800 | 0.3606 | - | - | - |
| 2.3861 | 352900 | 0.3371 | - | - | - |
| 2.3867 | 353000 | 0.3848 | - | - | - |
| 2.3874 | 353100 | 0.3285 | - | - | - |
| 2.3881 | 353200 | 0.3324 | - | - | - |
| 2.3888 | 353300 | 0.3405 | - | - | - |
| 2.3894 | 353400 | 0.3585 | - | - | - |
| 2.3901 | 353500 | 0.399 | - | - | - |
| 2.3908 | 353600 | 0.3369 | - | - | - |
| 2.3915 | 353700 | 0.3634 | - | - | - |
| 2.3921 | 353800 | 0.3295 | - | - | - |
| 2.3928 | 353900 | 0.2972 | - | - | - |
| 2.3935 | 354000 | 0.4023 | - | - | - |
| 2.3942 | 354100 | 0.3431 | - | - | - |
| 2.3948 | 354200 | 0.3289 | - | - | - |
| 2.3955 | 354300 | 0.3463 | - | - | - |
| 2.3962 | 354400 | 0.3785 | - | - | - |
| 2.3969 | 354500 | 0.3954 | - | - | - |
| 2.3975 | 354600 | 0.306 | - | - | - |
| 2.3982 | 354700 | 0.3302 | - | - | - |
| 2.3989 | 354800 | 0.3632 | - | - | - |
| 2.3996 | 354900 | 0.3546 | - | - | - |
| 2.4003 | 355000 | 0.3654 | 0.5347 | 0.7747 | - |
| 2.4009 | 355100 | 0.3721 | - | - | - |
| 2.4016 | 355200 | 0.3624 | - | - | - |
| 2.4023 | 355300 | 0.355 | - | - | - |
| 2.4030 | 355400 | 0.3632 | - | - | - |
| 2.4036 | 355500 | 0.3508 | - | - | - |
| 2.4043 | 355600 | 0.365 | - | - | - |
| 2.4050 | 355700 | 0.2937 | - | - | - |
| 2.4057 | 355800 | 0.3256 | - | - | - |
| 2.4063 | 355900 | 0.3511 | - | - | - |
| 2.4070 | 356000 | 0.372 | - | - | - |
| 2.4077 | 356100 | 0.3729 | - | - | - |
| 2.4084 | 356200 | 0.358 | - | - | - |
| 2.4090 | 356300 | 0.3645 | - | - | - |
| 2.4097 | 356400 | 0.3505 | - | - | - |
| 2.4104 | 356500 | 0.3588 | - | - | - |
| 2.4111 | 356600 | 0.3365 | - | - | - |
| 2.4117 | 356700 | 0.3143 | - | - | - |
| 2.4124 | 356800 | 0.3145 | - | - | - |
| 2.4131 | 356900 | 0.3653 | - | - | - |
| 2.4138 | 357000 | 0.3671 | - | - | - |
| 2.4145 | 357100 | 0.3706 | - | - | - |
| 2.4151 | 357200 | 0.3792 | - | - | - |
| 2.4158 | 357300 | 0.3705 | - | - | - |
| 2.4165 | 357400 | 0.3444 | - | - | - |
| 2.4172 | 357500 | 0.3508 | - | - | - |
| 2.4178 | 357600 | 0.3584 | - | - | - |
| 2.4185 | 357700 | 0.311 | - | - | - |
| 2.4192 | 357800 | 0.3221 | - | - | - |
| 2.4199 | 357900 | 0.3574 | - | - | - |
| 2.4205 | 358000 | 0.3614 | - | - | - |
| 2.4212 | 358100 | 0.3513 | - | - | - |
| 2.4219 | 358200 | 0.3703 | - | - | - |
| 2.4226 | 358300 | 0.3601 | - | - | - |
| 2.4232 | 358400 | 0.3735 | - | - | - |
| 2.4239 | 358500 | 0.4002 | - | - | - |
| 2.4246 | 358600 | 0.3237 | - | - | - |
| 2.4253 | 358700 | 0.3592 | - | - | - |
| 2.4259 | 358800 | 0.3709 | - | - | - |
| 2.4266 | 358900 | 0.3498 | - | - | - |
| 2.4273 | 359000 | 0.3645 | - | - | - |
| 2.4280 | 359100 | 0.3384 | - | - | - |
| 2.4287 | 359200 | 0.3563 | - | - | - |
| 2.4293 | 359300 | 0.3107 | - | - | - |
| 2.4300 | 359400 | 0.3642 | - | - | - |
| 2.4307 | 359500 | 0.2984 | - | - | - |
| 2.4314 | 359600 | 0.3631 | - | - | - |
| 2.4320 | 359700 | 0.3272 | - | - | - |
| 2.4327 | 359800 | 0.319 | - | - | - |
| 2.4334 | 359900 | 0.3511 | - | - | - |
| 2.4341 | 360000 | 0.3674 | 0.5364 | 0.7782 | - |
| 2.4347 | 360100 | 0.3567 | - | - | - |
| 2.4354 | 360200 | 0.3232 | - | - | - |
| 2.4361 | 360300 | 0.3218 | - | - | - |
| 2.4368 | 360400 | 0.3202 | - | - | - |
| 2.4374 | 360500 | 0.3704 | - | - | - |
| 2.4381 | 360600 | 0.3702 | - | - | - |
| 2.4388 | 360700 | 0.3581 | - | - | - |
| 2.4395 | 360800 | 0.3257 | - | - | - |
| 2.4401 | 360900 | 0.3624 | - | - | - |
| 2.4408 | 361000 | 0.349 | - | - | - |
| 2.4415 | 361100 | 0.372 | - | - | - |
| 2.4422 | 361200 | 0.351 | - | - | - |
| 2.4429 | 361300 | 0.369 | - | - | - |
| 2.4435 | 361400 | 0.3268 | - | - | - |
| 2.4442 | 361500 | 0.3517 | - | - | - |
| 2.4449 | 361600 | 0.3289 | - | - | - |
| 2.4456 | 361700 | 0.3482 | - | - | - |
| 2.4462 | 361800 | 0.3345 | - | - | - |
| 2.4469 | 361900 | 0.3901 | - | - | - |
| 2.4476 | 362000 | 0.374 | - | - | - |
| 2.4483 | 362100 | 0.3414 | - | - | - |
| 2.4489 | 362200 | 0.3482 | - | - | - |
| 2.4496 | 362300 | 0.3365 | - | - | - |
| 2.4503 | 362400 | 0.305 | - | - | - |
| 2.4510 | 362500 | 0.3322 | - | - | - |
| 2.4516 | 362600 | 0.3427 | - | - | - |
| 2.4523 | 362700 | 0.3269 | - | - | - |
| 2.4530 | 362800 | 0.3623 | - | - | - |
| 2.4537 | 362900 | 0.3241 | - | - | - |
| 2.4543 | 363000 | 0.3414 | - | - | - |
| 2.4550 | 363100 | 0.3502 | - | - | - |
| 2.4557 | 363200 | 0.3445 | - | - | - |
| 2.4564 | 363300 | 0.3207 | - | - | - |
| 2.4570 | 363400 | 0.3547 | - | - | - |
| 2.4577 | 363500 | 0.3737 | - | - | - |
| 2.4584 | 363600 | 0.4008 | - | - | - |
| 2.4591 | 363700 | 0.3527 | - | - | - |
| 2.4598 | 363800 | 0.3317 | - | - | - |
| 2.4604 | 363900 | 0.3071 | - | - | - |
| 2.4611 | 364000 | 0.3303 | - | - | - |
| 2.4618 | 364100 | 0.3589 | - | - | - |
| 2.4625 | 364200 | 0.3555 | - | - | - |
| 2.4631 | 364300 | 0.3366 | - | - | - |
| 2.4638 | 364400 | 0.336 | - | - | - |
| 2.4645 | 364500 | 0.3461 | - | - | - |
| 2.4652 | 364600 | 0.3451 | - | - | - |
| 2.4658 | 364700 | 0.3134 | - | - | - |
| 2.4665 | 364800 | 0.3574 | - | - | - |
| 2.4672 | 364900 | 0.3689 | - | - | - |
| 2.4679 | 365000 | 0.3216 | 0.5373 | 0.7754 | - |
| 2.4685 | 365100 | 0.3578 | - | - | - |
| 2.4692 | 365200 | 0.3823 | - | - | - |
| 2.4699 | 365300 | 0.3507 | - | - | - |
| 2.4706 | 365400 | 0.3634 | - | - | - |
| 2.4712 | 365500 | 0.322 | - | - | - |
| 2.4719 | 365600 | 0.34 | - | - | - |
| 2.4726 | 365700 | 0.3186 | - | - | - |
| 2.4733 | 365800 | 0.3455 | - | - | - |
| 2.4740 | 365900 | 0.3481 | - | - | - |
| 2.4746 | 366000 | 0.3615 | - | - | - |
| 2.4753 | 366100 | 0.3364 | - | - | - |
| 2.4760 | 366200 | 0.3412 | - | - | - |
| 2.4767 | 366300 | 0.3783 | - | - | - |
| 2.4773 | 366400 | 0.3189 | - | - | - |
| 2.4780 | 366500 | 0.3375 | - | - | - |
| 2.4787 | 366600 | 0.3237 | - | - | - |
| 2.4794 | 366700 | 0.2865 | - | - | - |
| 2.4800 | 366800 | 0.3961 | - | - | - |
| 2.4807 | 366900 | 0.3724 | - | - | - |
| 2.4814 | 367000 | 0.3471 | - | - | - |
| 2.4821 | 367100 | 0.3366 | - | - | - |
| 2.4827 | 367200 | 0.3662 | - | - | - |
| 2.4834 | 367300 | 0.3306 | - | - | - |
| 2.4841 | 367400 | 0.3936 | - | - | - |
| 2.4848 | 367500 | 0.3453 | - | - | - |
| 2.4854 | 367600 | 0.3872 | - | - | - |
| 2.4861 | 367700 | 0.3524 | - | - | - |
| 2.4868 | 367800 | 0.3902 | - | - | - |
| 2.4875 | 367900 | 0.3562 | - | - | - |
| 2.4882 | 368000 | 0.3417 | - | - | - |
| 2.4888 | 368100 | 0.3444 | - | - | - |
| 2.4895 | 368200 | 0.3276 | - | - | - |
| 2.4902 | 368300 | 0.3395 | - | - | - |
| 2.4909 | 368400 | 0.2924 | - | - | - |
| 2.4915 | 368500 | 0.2896 | - | - | - |
| 2.4922 | 368600 | 0.3406 | - | - | - |
| 2.4929 | 368700 | 0.3036 | - | - | - |
| 2.4936 | 368800 | 0.3656 | - | - | - |
| 2.4942 | 368900 | 0.3053 | - | - | - |
| 2.4949 | 369000 | 0.3439 | - | - | - |
| 2.4956 | 369100 | 0.3468 | - | - | - |
| 2.4963 | 369200 | 0.337 | - | - | - |
| 2.4969 | 369300 | 0.3594 | - | - | - |
| 2.4976 | 369400 | 0.3248 | - | - | - |
| 2.4983 | 369500 | 0.3278 | - | - | - |
| 2.4990 | 369600 | 0.3424 | - | - | - |
| 2.4996 | 369700 | 0.3974 | - | - | - |
| 2.5003 | 369800 | 0.3263 | - | - | - |
| 2.5010 | 369900 | 0.2972 | - | - | - |
| 2.5017 | 370000 | 0.3518 | 0.5469 | 0.7769 | - |
| 2.5023 | 370100 | 0.2808 | - | - | - |
| 2.5030 | 370200 | 0.3763 | - | - | - |
| 2.5037 | 370300 | 0.3774 | - | - | - |
| 2.5044 | 370400 | 0.3134 | - | - | - |
| 2.5051 | 370500 | 0.3064 | - | - | - |
| 2.5057 | 370600 | 0.3328 | - | - | - |
| 2.5064 | 370700 | 0.3454 | - | - | - |
| 2.5071 | 370800 | 0.3804 | - | - | - |
| 2.5078 | 370900 | 0.3324 | - | - | - |
| 2.5084 | 371000 | 0.3301 | - | - | - |
| 2.5091 | 371100 | 0.3222 | - | - | - |
| 2.5098 | 371200 | 0.3661 | - | - | - |
| 2.5105 | 371300 | 0.3279 | - | - | - |
| 2.5111 | 371400 | 0.346 | - | - | - |
| 2.5118 | 371500 | 0.3417 | - | - | - |
| 2.5125 | 371600 | 0.3523 | - | - | - |
| 2.5132 | 371700 | 0.336 | - | - | - |
| 2.5138 | 371800 | 0.3467 | - | - | - |
| 2.5145 | 371900 | 0.3231 | - | - | - |
| 2.5152 | 372000 | 0.3239 | - | - | - |
| 2.5159 | 372100 | 0.3507 | - | - | - |
| 2.5165 | 372200 | 0.326 | - | - | - |
| 2.5172 | 372300 | 0.3379 | - | - | - |
| 2.5179 | 372400 | 0.3538 | - | - | - |
| 2.5186 | 372500 | 0.3309 | - | - | - |
| 2.5193 | 372600 | 0.3484 | - | - | - |
| 2.5199 | 372700 | 0.3694 | - | - | - |
| 2.5206 | 372800 | 0.2863 | - | - | - |
| 2.5213 | 372900 | 0.3401 | - | - | - |
| 2.5220 | 373000 | 0.3333 | - | - | - |
| 2.5226 | 373100 | 0.3656 | - | - | - |
| 2.5233 | 373200 | 0.3478 | - | - | - |
| 2.5240 | 373300 | 0.3575 | - | - | - |
| 2.5247 | 373400 | 0.3565 | - | - | - |
| 2.5253 | 373500 | 0.3196 | - | - | - |
| 2.5260 | 373600 | 0.3795 | - | - | - |
| 2.5267 | 373700 | 0.3539 | - | - | - |
| 2.5274 | 373800 | 0.3513 | - | - | - |
| 2.5280 | 373900 | 0.3589 | - | - | - |
| 2.5287 | 374000 | 0.3346 | - | - | - |
| 2.5294 | 374100 | 0.3409 | - | - | - |
| 2.5301 | 374200 | 0.3701 | - | - | - |
| 2.5307 | 374300 | 0.3182 | - | - | - |
| 2.5314 | 374400 | 0.3472 | - | - | - |
| 2.5321 | 374500 | 0.3325 | - | - | - |
| 2.5328 | 374600 | 0.3147 | - | - | - |
| 2.5335 | 374700 | 0.3608 | - | - | - |
| 2.5341 | 374800 | 0.3289 | - | - | - |
| 2.5348 | 374900 | 0.3406 | - | - | - |
| 2.5355 | 375000 | 0.3732 | 0.5402 | 0.7764 | - |
| 2.5362 | 375100 | 0.3023 | - | - | - |
| 2.5368 | 375200 | 0.3374 | - | - | - |
| 2.5375 | 375300 | 0.3292 | - | - | - |
| 2.5382 | 375400 | 0.2952 | - | - | - |
| 2.5389 | 375500 | 0.3285 | - | - | - |
| 2.5395 | 375600 | 0.304 | - | - | - |
| 2.5402 | 375700 | 0.3291 | - | - | - |
| 2.5409 | 375800 | 0.3312 | - | - | - |
| 2.5416 | 375900 | 0.3404 | - | - | - |
| 2.5422 | 376000 | 0.3096 | - | - | - |
| 2.5429 | 376100 | 0.3312 | - | - | - |
| 2.5436 | 376200 | 0.3467 | - | - | - |
| 2.5443 | 376300 | 0.3539 | - | - | - |
| 2.5449 | 376400 | 0.3409 | - | - | - |
| 2.5456 | 376500 | 0.3783 | - | - | - |
| 2.5463 | 376600 | 0.3072 | - | - | - |
| 2.5470 | 376700 | 0.3613 | - | - | - |
| 2.5477 | 376800 | 0.3444 | - | - | - |
| 2.5483 | 376900 | 0.3322 | - | - | - |
| 2.5490 | 377000 | 0.3224 | - | - | - |
| 2.5497 | 377100 | 0.3214 | - | - | - |
| 2.5504 | 377200 | 0.3499 | - | - | - |
| 2.5510 | 377300 | 0.3706 | - | - | - |
| 2.5517 | 377400 | 0.345 | - | - | - |
| 2.5524 | 377500 | 0.3091 | - | - | - |
| 2.5531 | 377600 | 0.3336 | - | - | - |
| 2.5537 | 377700 | 0.3238 | - | - | - |
| 2.5544 | 377800 | 0.331 | - | - | - |
| 2.5551 | 377900 | 0.3341 | - | - | - |
| 2.5558 | 378000 | 0.3 | - | - | - |
| 2.5564 | 378100 | 0.3326 | - | - | - |
| 2.5571 | 378200 | 0.3519 | - | - | - |
| 2.5578 | 378300 | 0.3468 | - | - | - |
| 2.5585 | 378400 | 0.3239 | - | - | - |
| 2.5591 | 378500 | 0.3471 | - | - | - |
| 2.5598 | 378600 | 0.3079 | - | - | - |
| 2.5605 | 378700 | 0.3846 | - | - | - |
| 2.5612 | 378800 | 0.3249 | - | - | - |
| 2.5618 | 378900 | 0.3379 | - | - | - |
| 2.5625 | 379000 | 0.3209 | - | - | - |
| 2.5632 | 379100 | 0.3189 | - | - | - |
| 2.5639 | 379200 | 0.3523 | - | - | - |
| 2.5646 | 379300 | 0.3172 | - | - | - |
| 2.5652 | 379400 | 0.3451 | - | - | - |
| 2.5659 | 379500 | 0.3118 | - | - | - |
| 2.5666 | 379600 | 0.3088 | - | - | - |
| 2.5673 | 379700 | 0.361 | - | - | - |
| 2.5679 | 379800 | 0.3255 | - | - | - |
| 2.5686 | 379900 | 0.3017 | - | - | - |
| 2.5693 | 380000 | 0.3414 | 0.5416 | 0.7783 | - |
| 2.5700 | 380100 | 0.3258 | - | - | - |
| 2.5706 | 380200 | 0.3412 | - | - | - |
| 2.5713 | 380300 | 0.37 | - | - | - |
| 2.5720 | 380400 | 0.3368 | - | - | - |
| 2.5727 | 380500 | 0.3519 | - | - | - |
| 2.5733 | 380600 | 0.3391 | - | - | - |
| 2.5740 | 380700 | 0.3323 | - | - | - |
| 2.5747 | 380800 | 0.3666 | - | - | - |
| 2.5754 | 380900 | 0.3159 | - | - | - |
| 2.5760 | 381000 | 0.3324 | - | - | - |
| 2.5767 | 381100 | 0.3333 | - | - | - |
| 2.5774 | 381200 | 0.2882 | - | - | - |
| 2.5781 | 381300 | 0.3223 | - | - | - |
| 2.5788 | 381400 | 0.3284 | - | - | - |
| 2.5794 | 381500 | 0.3026 | - | - | - |
| 2.5801 | 381600 | 0.3737 | - | - | - |
| 2.5808 | 381700 | 0.3256 | - | - | - |
| 2.5815 | 381800 | 0.3458 | - | - | - |
| 2.5821 | 381900 | 0.3647 | - | - | - |
| 2.5828 | 382000 | 0.3057 | - | - | - |
| 2.5835 | 382100 | 0.3427 | - | - | - |
| 2.5842 | 382200 | 0.3462 | - | - | - |
| 2.5848 | 382300 | 0.3224 | - | - | - |
| 2.5855 | 382400 | 0.3721 | - | - | - |
| 2.5862 | 382500 | 0.3137 | - | - | - |
| 2.5869 | 382600 | 0.3271 | - | - | - |
| 2.5875 | 382700 | 0.3379 | - | - | - |
| 2.5882 | 382800 | 0.3109 | - | - | - |
| 2.5889 | 382900 | 0.3533 | - | - | - |
| 2.5896 | 383000 | 0.3256 | - | - | - |
| 2.5902 | 383100 | 0.2986 | - | - | - |
| 2.5909 | 383200 | 0.3378 | - | - | - |
| 2.5916 | 383300 | 0.3257 | - | - | - |
| 2.5923 | 383400 | 0.2926 | - | - | - |
| 2.5930 | 383500 | 0.3157 | - | - | - |
| 2.5936 | 383600 | 0.3606 | - | - | - |
| 2.5943 | 383700 | 0.3179 | - | - | - |
| 2.5950 | 383800 | 0.343 | - | - | - |
| 2.5957 | 383900 | 0.3127 | - | - | - |
| 2.5963 | 384000 | 0.2919 | - | - | - |
| 2.5970 | 384100 | 0.3351 | - | - | - |
| 2.5977 | 384200 | 0.2716 | - | - | - |
| 2.5984 | 384300 | 0.3498 | - | - | - |
| 2.5990 | 384400 | 0.3381 | - | - | - |
| 2.5997 | 384500 | 0.35 | - | - | - |
| 2.6004 | 384600 | 0.2971 | - | - | - |
| 2.6011 | 384700 | 0.318 | - | - | - |
| 2.6017 | 384800 | 0.328 | - | - | - |
| 2.6024 | 384900 | 0.3278 | - | - | - |
| 2.6031 | 385000 | 0.3424 | 0.5363 | 0.7818 | - |
| 2.6038 | 385100 | 0.3334 | - | - | - |
| 2.6044 | 385200 | 0.3388 | - | - | - |
| 2.6051 | 385300 | 0.3351 | - | - | - |
| 2.6058 | 385400 | 0.3335 | - | - | - |
| 2.6065 | 385500 | 0.3532 | - | - | - |
| 2.6071 | 385600 | 0.3169 | - | - | - |
| 2.6078 | 385700 | 0.3226 | - | - | - |
| 2.6085 | 385800 | 0.3459 | - | - | - |
| 2.6092 | 385900 | 0.3473 | - | - | - |
| 2.6099 | 386000 | 0.2826 | - | - | - |
| 2.6105 | 386100 | 0.3608 | - | - | - |
| 2.6112 | 386200 | 0.3149 | - | - | - |
| 2.6119 | 386300 | 0.3221 | - | - | - |
| 2.6126 | 386400 | 0.311 | - | - | - |
| 2.6132 | 386500 | 0.3182 | - | - | - |
| 2.6139 | 386600 | 0.3138 | - | - | - |
| 2.6146 | 386700 | 0.3529 | - | - | - |
| 2.6153 | 386800 | 0.3127 | - | - | - |
| 2.6159 | 386900 | 0.3199 | - | - | - |
| 2.6166 | 387000 | 0.3905 | - | - | - |
| 2.6173 | 387100 | 0.338 | - | - | - |
| 2.6180 | 387200 | 0.3337 | - | - | - |
| 2.6186 | 387300 | 0.3145 | - | - | - |
| 2.6193 | 387400 | 0.338 | - | - | - |
| 2.6200 | 387500 | 0.3117 | - | - | - |
| 2.6207 | 387600 | 0.3431 | - | - | - |
| 2.6213 | 387700 | 0.2958 | - | - | - |
| 2.6220 | 387800 | 0.2787 | - | - | - |
| 2.6227 | 387900 | 0.3056 | - | - | - |
| 2.6234 | 388000 | 0.2971 | - | - | - |
| 2.6241 | 388100 | 0.3429 | - | - | - |
| 2.6247 | 388200 | 0.3103 | - | - | - |
| 2.6254 | 388300 | 0.32 | - | - | - |
| 2.6261 | 388400 | 0.3487 | - | - | - |
| 2.6268 | 388500 | 0.3147 | - | - | - |
| 2.6274 | 388600 | 0.3489 | - | - | - |
| 2.6281 | 388700 | 0.3171 | - | - | - |
| 2.6288 | 388800 | 0.2931 | - | - | - |
| 2.6295 | 388900 | 0.3094 | - | - | - |
| 2.6301 | 389000 | 0.3221 | - | - | - |
| 2.6308 | 389100 | 0.2987 | - | - | - |
| 2.6315 | 389200 | 0.3199 | - | - | - |
| 2.6322 | 389300 | 0.3084 | - | - | - |
| 2.6328 | 389400 | 0.3129 | - | - | - |
| 2.6335 | 389500 | 0.3255 | - | - | - |
| 2.6342 | 389600 | 0.3144 | - | - | - |
| 2.6349 | 389700 | 0.2888 | - | - | - |
| 2.6355 | 389800 | 0.3563 | - | - | - |
| 2.6362 | 389900 | 0.3554 | - | - | - |
| 2.6369 | 390000 | 0.3515 | 0.5365 | 0.7760 | - |
| 2.6376 | 390100 | 0.3412 | - | - | - |
| 2.6383 | 390200 | 0.3125 | - | - | - |
| 2.6389 | 390300 | 0.3129 | - | - | - |
| 2.6396 | 390400 | 0.2845 | - | - | - |
| 2.6403 | 390500 | 0.3368 | - | - | - |
| 2.6410 | 390600 | 0.332 | - | - | - |
| 2.6416 | 390700 | 0.3285 | - | - | - |
| 2.6423 | 390800 | 0.295 | - | - | - |
| 2.6430 | 390900 | 0.2855 | - | - | - |
| 2.6437 | 391000 | 0.3566 | - | - | - |
| 2.6443 | 391100 | 0.334 | - | - | - |
| 2.6450 | 391200 | 0.2806 | - | - | - |
| 2.6457 | 391300 | 0.3277 | - | - | - |
| 2.6464 | 391400 | 0.3556 | - | - | - |
| 2.6470 | 391500 | 0.3089 | - | - | - |
| 2.6477 | 391600 | 0.2909 | - | - | - |
| 2.6484 | 391700 | 0.3199 | - | - | - |
| 2.6491 | 391800 | 0.3293 | - | - | - |
| 2.6497 | 391900 | 0.356 | - | - | - |
| 2.6504 | 392000 | 0.3373 | - | - | - |
| 2.6511 | 392100 | 0.3479 | - | - | - |
| 2.6518 | 392200 | 0.3415 | - | - | - |
| 2.6524 | 392300 | 0.3206 | - | - | - |
| 2.6531 | 392400 | 0.3369 | - | - | - |
| 2.6538 | 392500 | 0.2952 | - | - | - |
| 2.6545 | 392600 | 0.3844 | - | - | - |
| 2.6552 | 392700 | 0.3019 | - | - | - |
| 2.6558 | 392800 | 0.3203 | - | - | - |
| 2.6565 | 392900 | 0.307 | - | - | - |
| 2.6572 | 393000 | 0.3437 | - | - | - |
| 2.6579 | 393100 | 0.3228 | - | - | - |
| 2.6585 | 393200 | 0.3161 | - | - | - |
| 2.6592 | 393300 | 0.324 | - | - | - |
| 2.6599 | 393400 | 0.3078 | - | - | - |
| 2.6606 | 393500 | 0.3467 | - | - | - |
| 2.6612 | 393600 | 0.3341 | - | - | - |
| 2.6619 | 393700 | 0.3539 | - | - | - |
| 2.6626 | 393800 | 0.3293 | - | - | - |
| 2.6633 | 393900 | 0.3117 | - | - | - |
| 2.6639 | 394000 | 0.2864 | - | - | - |
| 2.6646 | 394100 | 0.3177 | - | - | - |
| 2.6653 | 394200 | 0.3616 | - | - | - |
| 2.6660 | 394300 | 0.2986 | - | - | - |
| 2.6666 | 394400 | 0.2807 | - | - | - |
| 2.6673 | 394500 | 0.3787 | - | - | - |
| 2.6680 | 394600 | 0.2925 | - | - | - |
| 2.6687 | 394700 | 0.3117 | - | - | - |
| 2.6694 | 394800 | 0.333 | - | - | - |
| 2.6700 | 394900 | 0.3202 | - | - | - |
| 2.6707 | 395000 | 0.2952 | 0.5358 | 0.7789 | - |
| 2.6714 | 395100 | 0.3 | - | - | - |
| 2.6721 | 395200 | 0.3454 | - | - | - |
| 2.6727 | 395300 | 0.3456 | - | - | - |
| 2.6734 | 395400 | 0.3282 | - | - | - |
| 2.6741 | 395500 | 0.3698 | - | - | - |
| 2.6748 | 395600 | 0.3331 | - | - | - |
| 2.6754 | 395700 | 0.2985 | - | - | - |
| 2.6761 | 395800 | 0.3828 | - | - | - |
| 2.6768 | 395900 | 0.353 | - | - | - |
| 2.6775 | 396000 | 0.3433 | - | - | - |
| 2.6781 | 396100 | 0.2896 | - | - | - |
| 2.6788 | 396200 | 0.3328 | - | - | - |
| 2.6795 | 396300 | 0.3462 | - | - | - |
| 2.6802 | 396400 | 0.3618 | - | - | - |
| 2.6808 | 396500 | 0.312 | - | - | - |
| 2.6815 | 396600 | 0.3331 | - | - | - |
| 2.6822 | 396700 | 0.327 | - | - | - |
| 2.6829 | 396800 | 0.328 | - | - | - |
| 2.6836 | 396900 | 0.3242 | - | - | - |
| 2.6842 | 397000 | 0.3372 | - | - | - |
| 2.6849 | 397100 | 0.3487 | - | - | - |
| 2.6856 | 397200 | 0.3337 | - | - | - |
| 2.6863 | 397300 | 0.3427 | - | - | - |
| 2.6869 | 397400 | 0.2871 | - | - | - |
| 2.6876 | 397500 | 0.3067 | - | - | - |
| 2.6883 | 397600 | 0.3441 | - | - | - |
| 2.6890 | 397700 | 0.3546 | - | - | - |
| 2.6896 | 397800 | 0.3193 | - | - | - |
| 2.6903 | 397900 | 0.3315 | - | - | - |
| 2.6910 | 398000 | 0.3443 | - | - | - |
| 2.6917 | 398100 | 0.3584 | - | - | - |
| 2.6923 | 398200 | 0.2765 | - | - | - |
| 2.6930 | 398300 | 0.3037 | - | - | - |
| 2.6937 | 398400 | 0.3252 | - | - | - |
| 2.6944 | 398500 | 0.3019 | - | - | - |
| 2.6950 | 398600 | 0.3595 | - | - | - |
| 2.6957 | 398700 | 0.3358 | - | - | - |
| 2.6964 | 398800 | 0.3423 | - | - | - |
| 2.6971 | 398900 | 0.2938 | - | - | - |
| 2.6978 | 399000 | 0.3343 | - | - | - |
| 2.6984 | 399100 | 0.3006 | - | - | - |
| 2.6991 | 399200 | 0.294 | - | - | - |
| 2.6998 | 399300 | 0.31 | - | - | - |
| 2.7005 | 399400 | 0.3286 | - | - | - |
| 2.7011 | 399500 | 0.3351 | - | - | - |
| 2.7018 | 399600 | 0.3218 | - | - | - |
| 2.7025 | 399700 | 0.3263 | - | - | - |
| 2.7032 | 399800 | 0.3271 | - | - | - |
| 2.7038 | 399900 | 0.2779 | - | - | - |
| 2.7045 | 400000 | 0.3072 | 0.5355 | 0.7778 | - |
| 2.7052 | 400100 | 0.3167 | - | - | - |
| 2.7059 | 400200 | 0.3094 | - | - | - |
| 2.7065 | 400300 | 0.3338 | - | - | - |
| 2.7072 | 400400 | 0.2896 | - | - | - |
| 2.7079 | 400500 | 0.331 | - | - | - |
| 2.7086 | 400600 | 0.3229 | - | - | - |
| 2.7092 | 400700 | 0.3062 | - | - | - |
| 2.7099 | 400800 | 0.33 | - | - | - |
| 2.7106 | 400900 | 0.3269 | - | - | - |
| 2.7113 | 401000 | 0.3225 | - | - | - |
| 2.7119 | 401100 | 0.31 | - | - | - |
| 2.7126 | 401200 | 0.3582 | - | - | - |
| 2.7133 | 401300 | 0.3372 | - | - | - |
| 2.7140 | 401400 | 0.2859 | - | - | - |
| 2.7147 | 401500 | 0.3311 | - | - | - |
| 2.7153 | 401600 | 0.3299 | - | - | - |
| 2.7160 | 401700 | 0.2862 | - | - | - |
| 2.7167 | 401800 | 0.3308 | - | - | - |
| 2.7174 | 401900 | 0.3424 | - | - | - |
| 2.7180 | 402000 | 0.3629 | - | - | - |
| 2.7187 | 402100 | 0.2774 | - | - | - |
| 2.7194 | 402200 | 0.3739 | - | - | - |
| 2.7201 | 402300 | 0.3204 | - | - | - |
| 2.7207 | 402400 | 0.3436 | - | - | - |
| 2.7214 | 402500 | 0.294 | - | - | - |
| 2.7221 | 402600 | 0.3235 | - | - | - |
| 2.7228 | 402700 | 0.3413 | - | - | - |
| 2.7234 | 402800 | 0.3318 | - | - | - |
| 2.7241 | 402900 | 0.325 | - | - | - |
| 2.7248 | 403000 | 0.3181 | - | - | - |
| 2.7255 | 403100 | 0.292 | - | - | - |
| 2.7261 | 403200 | 0.3315 | - | - | - |
| 2.7268 | 403300 | 0.3026 | - | - | - |
| 2.7275 | 403400 | 0.3214 | - | - | - |
| 2.7282 | 403500 | 0.3441 | - | - | - |
| 2.7289 | 403600 | 0.3274 | - | - | - |
| 2.7295 | 403700 | 0.3448 | - | - | - |
| 2.7302 | 403800 | 0.3144 | - | - | - |
| 2.7309 | 403900 | 0.3099 | - | - | - |
| 2.7316 | 404000 | 0.3016 | - | - | - |
| 2.7322 | 404100 | 0.3111 | - | - | - |
| 2.7329 | 404200 | 0.3429 | - | - | - |
| 2.7336 | 404300 | 0.3401 | - | - | - |
| 2.7343 | 404400 | 0.3356 | - | - | - |
| 2.7349 | 404500 | 0.3359 | - | - | - |
| 2.7356 | 404600 | 0.3113 | - | - | - |
| 2.7363 | 404700 | 0.3174 | - | - | - |
| 2.7370 | 404800 | 0.3754 | - | - | - |
| 2.7376 | 404900 | 0.2967 | - | - | - |
| 2.7383 | 405000 | 0.311 | 0.5380 | 0.7779 | - |
| 2.7390 | 405100 | 0.3554 | - | - | - |
| 2.7397 | 405200 | 0.2834 | - | - | - |
| 2.7403 | 405300 | 0.3313 | - | - | - |
| 2.7410 | 405400 | 0.3033 | - | - | - |
| 2.7417 | 405500 | 0.3003 | - | - | - |
| 2.7424 | 405600 | 0.3129 | - | - | - |
| 2.7431 | 405700 | 0.3055 | - | - | - |
| 2.7437 | 405800 | 0.3277 | - | - | - |
| 2.7444 | 405900 | 0.3138 | - | - | - |
| 2.7451 | 406000 | 0.286 | - | - | - |
| 2.7458 | 406100 | 0.3252 | - | - | - |
| 2.7464 | 406200 | 0.3103 | - | - | - |
| 2.7471 | 406300 | 0.3311 | - | - | - |
| 2.7478 | 406400 | 0.3052 | - | - | - |
| 2.7485 | 406500 | 0.2858 | - | - | - |
| 2.7491 | 406600 | 0.297 | - | - | - |
| 2.7498 | 406700 | 0.2967 | - | - | - |
| 2.7505 | 406800 | 0.322 | - | - | - |
| 2.7512 | 406900 | 0.2896 | - | - | - |
| 2.7518 | 407000 | 0.325 | - | - | - |
| 2.7525 | 407100 | 0.2928 | - | - | - |
| 2.7532 | 407200 | 0.3038 | - | - | - |
| 2.7539 | 407300 | 0.2659 | - | - | - |
| 2.7545 | 407400 | 0.3277 | - | - | - |
| 2.7552 | 407500 | 0.3513 | - | - | - |
| 2.7559 | 407600 | 0.2941 | - | - | - |
| 2.7566 | 407700 | 0.2625 | - | - | - |
| 2.7572 | 407800 | 0.2805 | - | - | - |
| 2.7579 | 407900 | 0.2678 | - | - | - |
| 2.7586 | 408000 | 0.3407 | - | - | - |
| 2.7593 | 408100 | 0.3406 | - | - | - |
| 2.7600 | 408200 | 0.3509 | - | - | - |
| 2.7606 | 408300 | 0.3036 | - | - | - |
| 2.7613 | 408400 | 0.3169 | - | - | - |
| 2.7620 | 408500 | 0.3128 | - | - | - |
| 2.7627 | 408600 | 0.3496 | - | - | - |
| 2.7633 | 408700 | 0.3056 | - | - | - |
| 2.7640 | 408800 | 0.3233 | - | - | - |
| 2.7647 | 408900 | 0.3174 | - | - | - |
| 2.7654 | 409000 | 0.314 | - | - | - |
| 2.7660 | 409100 | 0.3288 | - | - | - |
| 2.7667 | 409200 | 0.3705 | - | - | - |
| 2.7674 | 409300 | 0.3192 | - | - | - |
| 2.7681 | 409400 | 0.2721 | - | - | - |
| 2.7687 | 409500 | 0.3189 | - | - | - |
| 2.7694 | 409600 | 0.3862 | - | - | - |
| 2.7701 | 409700 | 0.3061 | - | - | - |
| 2.7708 | 409800 | 0.3023 | - | - | - |
| 2.7714 | 409900 | 0.3374 | - | - | - |
| 2.7721 | 410000 | 0.3039 | 0.5357 | 0.7810 | - |
| 2.7728 | 410100 | 0.3555 | - | - | - |
| 2.7735 | 410200 | 0.3054 | - | - | - |
| 2.7742 | 410300 | 0.3211 | - | - | - |
| 2.7748 | 410400 | 0.3102 | - | - | - |
| 2.7755 | 410500 | 0.3323 | - | - | - |
| 2.7762 | 410600 | 0.3018 | - | - | - |
| 2.7769 | 410700 | 0.3349 | - | - | - |
| 2.7775 | 410800 | 0.2874 | - | - | - |
| 2.7782 | 410900 | 0.3191 | - | - | - |
| 2.7789 | 411000 | 0.3119 | - | - | - |
| 2.7796 | 411100 | 0.3159 | - | - | - |
| 2.7802 | 411200 | 0.3205 | - | - | - |
| 2.7809 | 411300 | 0.3014 | - | - | - |
| 2.7816 | 411400 | 0.301 | - | - | - |
| 2.7823 | 411500 | 0.2984 | - | - | - |
| 2.7829 | 411600 | 0.3412 | - | - | - |
| 2.7836 | 411700 | 0.2783 | - | - | - |
| 2.7843 | 411800 | 0.3092 | - | - | - |
| 2.7850 | 411900 | 0.3393 | - | - | - |
| 2.7856 | 412000 | 0.3504 | - | - | - |
| 2.7863 | 412100 | 0.3658 | - | - | - |
| 2.7870 | 412200 | 0.3478 | - | - | - |
| 2.7877 | 412300 | 0.2646 | - | - | - |
| 2.7884 | 412400 | 0.3027 | - | - | - |
| 2.7890 | 412500 | 0.2889 | - | - | - |
| 2.7897 | 412600 | 0.2987 | - | - | - |
| 2.7904 | 412700 | 0.3317 | - | - | - |
| 2.7911 | 412800 | 0.293 | - | - | - |
| 2.7917 | 412900 | 0.2994 | - | - | - |
| 2.7924 | 413000 | 0.3144 | - | - | - |
| 2.7931 | 413100 | 0.3393 | - | - | - |
| 2.7938 | 413200 | 0.3053 | - | - | - |
| 2.7944 | 413300 | 0.3204 | - | - | - |
| 2.7951 | 413400 | 0.3269 | - | - | - |
| 2.7958 | 413500 | 0.3435 | - | - | - |
| 2.7965 | 413600 | 0.347 | - | - | - |
| 2.7971 | 413700 | 0.2918 | - | - | - |
| 2.7978 | 413800 | 0.3663 | - | - | - |
| 2.7985 | 413900 | 0.3364 | - | - | - |
| 2.7992 | 414000 | 0.2899 | - | - | - |
| 2.7998 | 414100 | 0.3113 | - | - | - |
| 2.8005 | 414200 | 0.3525 | - | - | - |
| 2.8012 | 414300 | 0.333 | - | - | - |
| 2.8019 | 414400 | 0.345 | - | - | - |
| 2.8026 | 414500 | 0.3044 | - | - | - |
| 2.8032 | 414600 | 0.3328 | - | - | - |
| 2.8039 | 414700 | 0.2952 | - | - | - |
| 2.8046 | 414800 | 0.3524 | - | - | - |
| 2.8053 | 414900 | 0.3175 | - | - | - |
| 2.8059 | 415000 | 0.315 | 0.5325 | 0.7799 | - |
| 2.8066 | 415100 | 0.3944 | - | - | - |
| 2.8073 | 415200 | 0.2733 | - | - | - |
| 2.8080 | 415300 | 0.3245 | - | - | - |
| 2.8086 | 415400 | 0.3063 | - | - | - |
| 2.8093 | 415500 | 0.3062 | - | - | - |
| 2.8100 | 415600 | 0.3036 | - | - | - |
| 2.8107 | 415700 | 0.2833 | - | - | - |
| 2.8113 | 415800 | 0.3012 | - | - | - |
| 2.8120 | 415900 | 0.3112 | - | - | - |
| 2.8127 | 416000 | 0.3012 | - | - | - |
| 2.8134 | 416100 | 0.3487 | - | - | - |
| 2.8140 | 416200 | 0.3423 | - | - | - |
| 2.8147 | 416300 | 0.3128 | - | - | - |
| 2.8154 | 416400 | 0.3451 | - | - | - |
| 2.8161 | 416500 | 0.3378 | - | - | - |
| 2.8167 | 416600 | 0.3396 | - | - | - |
| 2.8174 | 416700 | 0.3314 | - | - | - |
| 2.8181 | 416800 | 0.3284 | - | - | - |
| 2.8188 | 416900 | 0.3563 | - | - | - |
| 2.8195 | 417000 | 0.3322 | - | - | - |
| 2.8201 | 417100 | 0.288 | - | - | - |
| 2.8208 | 417200 | 0.303 | - | - | - |
| 2.8215 | 417300 | 0.2839 | - | - | - |
| 2.8222 | 417400 | 0.3499 | - | - | - |
| 2.8228 | 417500 | 0.2946 | - | - | - |
| 2.8235 | 417600 | 0.284 | - | - | - |
| 2.8242 | 417700 | 0.332 | - | - | - |
| 2.8249 | 417800 | 0.2855 | - | - | - |
| 2.8255 | 417900 | 0.3244 | - | - | - |
| 2.8262 | 418000 | 0.3189 | - | - | - |
| 2.8269 | 418100 | 0.3 | - | - | - |
| 2.8276 | 418200 | 0.3249 | - | - | - |
| 2.8282 | 418300 | 0.3143 | - | - | - |
| 2.8289 | 418400 | 0.3055 | - | - | - |
| 2.8296 | 418500 | 0.3046 | - | - | - |
| 2.8303 | 418600 | 0.3385 | - | - | - |
| 2.8309 | 418700 | 0.2647 | - | - | - |
| 2.8316 | 418800 | 0.3377 | - | - | - |
| 2.8323 | 418900 | 0.3181 | - | - | - |
| 2.8330 | 419000 | 0.3242 | - | - | - |
| 2.8337 | 419100 | 0.3109 | - | - | - |
| 2.8343 | 419200 | 0.2853 | - | - | - |
| 2.8350 | 419300 | 0.2959 | - | - | - |
| 2.8357 | 419400 | 0.3517 | - | - | - |
| 2.8364 | 419500 | 0.3489 | - | - | - |
| 2.8370 | 419600 | 0.3243 | - | - | - |
| 2.8377 | 419700 | 0.3092 | - | - | - |
| 2.8384 | 419800 | 0.3407 | - | - | - |
| 2.8391 | 419900 | 0.3473 | - | - | - |
| 2.8397 | 420000 | 0.3201 | 0.5361 | 0.7791 | - |
| 2.8404 | 420100 | 0.3172 | - | - | - |
| 2.8411 | 420200 | 0.3288 | - | - | - |
| 2.8418 | 420300 | 0.3608 | - | - | - |
| 2.8424 | 420400 | 0.3263 | - | - | - |
| 2.8431 | 420500 | 0.3232 | - | - | - |
| 2.8438 | 420600 | 0.2952 | - | - | - |
| 2.8445 | 420700 | 0.3023 | - | - | - |
| 2.8451 | 420800 | 0.3071 | - | - | - |
| 2.8458 | 420900 | 0.3445 | - | - | - |
| 2.8465 | 421000 | 0.2883 | - | - | - |
| 2.8472 | 421100 | 0.346 | - | - | - |
| 2.8479 | 421200 | 0.2749 | - | - | - |
| 2.8485 | 421300 | 0.3086 | - | - | - |
| 2.8492 | 421400 | 0.3309 | - | - | - |
| 2.8499 | 421500 | 0.3348 | - | - | - |
| 2.8506 | 421600 | 0.3286 | - | - | - |
| 2.8512 | 421700 | 0.2793 | - | - | - |
| 2.8519 | 421800 | 0.3026 | - | - | - |
| 2.8526 | 421900 | 0.2995 | - | - | - |
| 2.8533 | 422000 | 0.3361 | - | - | - |
| 2.8539 | 422100 | 0.3415 | - | - | - |
| 2.8546 | 422200 | 0.2957 | - | - | - |
| 2.8553 | 422300 | 0.3287 | - | - | - |
| 2.8560 | 422400 | 0.3144 | - | - | - |
| 2.8566 | 422500 | 0.2691 | - | - | - |
| 2.8573 | 422600 | 0.3293 | - | - | - |
| 2.8580 | 422700 | 0.3184 | - | - | - |
| 2.8587 | 422800 | 0.3228 | - | - | - |
| 2.8593 | 422900 | 0.295 | - | - | - |
| 2.8600 | 423000 | 0.3057 | - | - | - |
| 2.8607 | 423100 | 0.2919 | - | - | - |
| 2.8614 | 423200 | 0.2925 | - | - | - |
| 2.8620 | 423300 | 0.3041 | - | - | - |
| 2.8627 | 423400 | 0.3199 | - | - | - |
| 2.8634 | 423500 | 0.3001 | - | - | - |
| 2.8641 | 423600 | 0.3767 | - | - | - |
| 2.8648 | 423700 | 0.2825 | - | - | - |
| 2.8654 | 423800 | 0.3174 | - | - | - |
| 2.8661 | 423900 | 0.343 | - | - | - |
| 2.8668 | 424000 | 0.3043 | - | - | - |
| 2.8675 | 424100 | 0.2764 | - | - | - |
| 2.8681 | 424200 | 0.3205 | - | - | - |
| 2.8688 | 424300 | 0.2876 | - | - | - |
| 2.8695 | 424400 | 0.3312 | - | - | - |
| 2.8702 | 424500 | 0.2892 | - | - | - |
| 2.8708 | 424600 | 0.3022 | - | - | - |
| 2.8715 | 424700 | 0.2852 | - | - | - |
| 2.8722 | 424800 | 0.2933 | - | - | - |
| 2.8729 | 424900 | 0.3242 | - | - | - |
| 2.8735 | 425000 | 0.314 | 0.5364 | 0.7805 | - |
| 2.8742 | 425100 | 0.2706 | - | - | - |
| 2.8749 | 425200 | 0.2865 | - | - | - |
| 2.8756 | 425300 | 0.3138 | - | - | - |
| 2.8762 | 425400 | 0.3016 | - | - | - |
| 2.8769 | 425500 | 0.2615 | - | - | - |
| 2.8776 | 425600 | 0.3108 | - | - | - |
| 2.8783 | 425700 | 0.3419 | - | - | - |
| 2.8790 | 425800 | 0.2876 | - | - | - |
| 2.8796 | 425900 | 0.3284 | - | - | - |
| 2.8803 | 426000 | 0.2979 | - | - | - |
| 2.8810 | 426100 | 0.3168 | - | - | - |
| 2.8817 | 426200 | 0.3123 | - | - | - |
| 2.8823 | 426300 | 0.3244 | - | - | - |
| 2.8830 | 426400 | 0.2797 | - | - | - |
| 2.8837 | 426500 | 0.2649 | - | - | - |
| 2.8844 | 426600 | 0.2941 | - | - | - |
| 2.8850 | 426700 | 0.2882 | - | - | - |
| 2.8857 | 426800 | 0.2965 | - | - | - |
| 2.8864 | 426900 | 0.3306 | - | - | - |
| 2.8871 | 427000 | 0.3258 | - | - | - |
| 2.8877 | 427100 | 0.3247 | - | - | - |
| 2.8884 | 427200 | 0.2605 | - | - | - |
| 2.8891 | 427300 | 0.2763 | - | - | - |
| 2.8898 | 427400 | 0.3633 | - | - | - |
| 2.8904 | 427500 | 0.3124 | - | - | - |
| 2.8911 | 427600 | 0.3058 | - | - | - |
| 2.8918 | 427700 | 0.3126 | - | - | - |
| 2.8925 | 427800 | 0.2909 | - | - | - |
| 2.8932 | 427900 | 0.3314 | - | - | - |
| 2.8938 | 428000 | 0.2955 | - | - | - |
| 2.8945 | 428100 | 0.3097 | - | - | - |
| 2.8952 | 428200 | 0.3123 | - | - | - |
| 2.8959 | 428300 | 0.3209 | - | - | - |
| 2.8965 | 428400 | 0.3115 | - | - | - |
| 2.8972 | 428500 | 0.2841 | - | - | - |
| 2.8979 | 428600 | 0.3047 | - | - | - |
| 2.8986 | 428700 | 0.2948 | - | - | - |
| 2.8992 | 428800 | 0.3115 | - | - | - |
| 2.8999 | 428900 | 0.2966 | - | - | - |
| 2.9006 | 429000 | 0.298 | - | - | - |
| 2.9013 | 429100 | 0.3417 | - | - | - |
| 2.9019 | 429200 | 0.3151 | - | - | - |
| 2.9026 | 429300 | 0.3171 | - | - | - |
| 2.9033 | 429400 | 0.3234 | - | - | - |
| 2.9040 | 429500 | 0.3282 | - | - | - |
| 2.9046 | 429600 | 0.3123 | - | - | - |
| 2.9053 | 429700 | 0.3168 | - | - | - |
| 2.9060 | 429800 | 0.3265 | - | - | - |
| 2.9067 | 429900 | 0.3601 | - | - | - |
| 2.9074 | 430000 | 0.316 | 0.5341 | 0.7830 | - |
| 2.9080 | 430100 | 0.3256 | - | - | - |
| 2.9087 | 430200 | 0.3405 | - | - | - |
| 2.9094 | 430300 | 0.3408 | - | - | - |
| 2.9101 | 430400 | 0.3313 | - | - | - |
| 2.9107 | 430500 | 0.2975 | - | - | - |
| 2.9114 | 430600 | 0.3396 | - | - | - |
| 2.9121 | 430700 | 0.2966 | - | - | - |
| 2.9128 | 430800 | 0.3354 | - | - | - |
| 2.9134 | 430900 | 0.2806 | - | - | - |
| 2.9141 | 431000 | 0.2948 | - | - | - |
| 2.9148 | 431100 | 0.3184 | - | - | - |
| 2.9155 | 431200 | 0.3456 | - | - | - |
| 2.9161 | 431300 | 0.3159 | - | - | - |
| 2.9168 | 431400 | 0.3139 | - | - | - |
| 2.9175 | 431500 | 0.2922 | - | - | - |
| 2.9182 | 431600 | 0.3367 | - | - | - |
| 2.9188 | 431700 | 0.3493 | - | - | - |
| 2.9195 | 431800 | 0.313 | - | - | - |
| 2.9202 | 431900 | 0.3161 | - | - | - |
| 2.9209 | 432000 | 0.322 | - | - | - |
| 2.9215 | 432100 | 0.2878 | - | - | - |
| 2.9222 | 432200 | 0.2934 | - | - | - |
| 2.9229 | 432300 | 0.3342 | - | - | - |
| 2.9236 | 432400 | 0.277 | - | - | - |
| 2.9243 | 432500 | 0.2605 | - | - | - |
| 2.9249 | 432600 | 0.3078 | - | - | - |
| 2.9256 | 432700 | 0.3273 | - | - | - |
| 2.9263 | 432800 | 0.3207 | - | - | - |
| 2.9270 | 432900 | 0.2812 | - | - | - |
| 2.9276 | 433000 | 0.3378 | - | - | - |
| 2.9283 | 433100 | 0.3272 | - | - | - |
| 2.9290 | 433200 | 0.3119 | - | - | - |
| 2.9297 | 433300 | 0.2942 | - | - | - |
| 2.9303 | 433400 | 0.2741 | - | - | - |
| 2.9310 | 433500 | 0.3115 | - | - | - |
| 2.9317 | 433600 | 0.3019 | - | - | - |
| 2.9324 | 433700 | 0.2902 | - | - | - |
| 2.9330 | 433800 | 0.3253 | - | - | - |
| 2.9337 | 433900 | 0.2985 | - | - | - |
| 2.9344 | 434000 | 0.3078 | - | - | - |
| 2.9351 | 434100 | 0.3854 | - | - | - |
| 2.9357 | 434200 | 0.2974 | - | - | - |
| 2.9364 | 434300 | 0.2922 | - | - | - |
| 2.9371 | 434400 | 0.3166 | - | - | - |
| 2.9378 | 434500 | 0.3247 | - | - | - |
| 2.9385 | 434600 | 0.2662 | - | - | - |
| 2.9391 | 434700 | 0.2796 | - | - | - |
| 2.9398 | 434800 | 0.2981 | - | - | - |
| 2.9405 | 434900 | 0.3049 | - | - | - |
| 2.9412 | 435000 | 0.2975 | 0.5333 | 0.7836 | - |
| 2.9418 | 435100 | 0.295 | - | - | - |
| 2.9425 | 435200 | 0.3076 | - | - | - |
| 2.9432 | 435300 | 0.3302 | - | - | - |
| 2.9439 | 435400 | 0.277 | - | - | - |
| 2.9445 | 435500 | 0.3219 | - | - | - |
| 2.9452 | 435600 | 0.2785 | - | - | - |
| 2.9459 | 435700 | 0.3077 | - | - | - |
| 2.9466 | 435800 | 0.2837 | - | - | - |
| 2.9472 | 435900 | 0.3695 | - | - | - |
| 2.9479 | 436000 | 0.3068 | - | - | - |
| 2.9486 | 436100 | 0.301 | - | - | - |
| 2.9493 | 436200 | 0.316 | - | - | - |
| 2.9499 | 436300 | 0.3299 | - | - | - |
| 2.9506 | 436400 | 0.3464 | - | - | - |
| 2.9513 | 436500 | 0.3192 | - | - | - |
| 2.9520 | 436600 | 0.3137 | - | - | - |
| 2.9527 | 436700 | 0.2981 | - | - | - |
| 2.9533 | 436800 | 0.2997 | - | - | - |
| 2.9540 | 436900 | 0.3171 | - | - | - |
| 2.9547 | 437000 | 0.3397 | - | - | - |
| 2.9554 | 437100 | 0.314 | - | - | - |
| 2.9560 | 437200 | 0.3004 | - | - | - |
| 2.9567 | 437300 | 0.3258 | - | - | - |
| 2.9574 | 437400 | 0.2851 | - | - | - |
| 2.9581 | 437500 | 0.3258 | - | - | - |
| 2.9587 | 437600 | 0.3471 | - | - | - |
| 2.9594 | 437700 | 0.3699 | - | - | - |
| 2.9601 | 437800 | 0.2801 | - | - | - |
| 2.9608 | 437900 | 0.3349 | - | - | - |
| 2.9614 | 438000 | 0.3389 | - | - | - |
| 2.9621 | 438100 | 0.2557 | - | - | - |
| 2.9628 | 438200 | 0.293 | - | - | - |
| 2.9635 | 438300 | 0.3525 | - | - | - |
| 2.9641 | 438400 | 0.3515 | - | - | - |
| 2.9648 | 438500 | 0.3027 | - | - | - |
| 2.9655 | 438600 | 0.337 | - | - | - |
| 2.9662 | 438700 | 0.3426 | - | - | - |
| 2.9668 | 438800 | 0.291 | - | - | - |
| 2.9675 | 438900 | 0.3119 | - | - | - |
| 2.9682 | 439000 | 0.3371 | - | - | - |
| 2.9689 | 439100 | 0.3183 | - | - | - |
| 2.9696 | 439200 | 0.3517 | - | - | - |
| 2.9702 | 439300 | 0.3263 | - | - | - |
| 2.9709 | 439400 | 0.3055 | - | - | - |
| 2.9716 | 439500 | 0.3171 | - | - | - |
| 2.9723 | 439600 | 0.2815 | - | - | - |
| 2.9729 | 439700 | 0.3069 | - | - | - |
| 2.9736 | 439800 | 0.332 | - | - | - |
| 2.9743 | 439900 | 0.3461 | - | - | - |
| 2.9750 | 440000 | 0.2879 | 0.5318 | 0.7851 | - |
| 2.9756 | 440100 | 0.354 | - | - | - |
| 2.9763 | 440200 | 0.3224 | - | - | - |
| 2.9770 | 440300 | 0.3787 | - | - | - |
| 2.9777 | 440400 | 0.3171 | - | - | - |
| 2.9783 | 440500 | 0.3004 | - | - | - |
| 2.9790 | 440600 | 0.2808 | - | - | - |
| 2.9797 | 440700 | 0.2999 | - | - | - |
| 2.9804 | 440800 | 0.3059 | - | - | - |
| 2.9810 | 440900 | 0.3219 | - | - | - |
| 2.9817 | 441000 | 0.3017 | - | - | - |
| 2.9824 | 441100 | 0.3481 | - | - | - |
| 2.9831 | 441200 | 0.3136 | - | - | - |
| 2.9838 | 441300 | 0.3722 | - | - | - |
| 2.9844 | 441400 | 0.309 | - | - | - |
| 2.9851 | 441500 | 0.3126 | - | - | - |
| 2.9858 | 441600 | 0.3474 | - | - | - |
| 2.9865 | 441700 | 0.3167 | - | - | - |
| 2.9871 | 441800 | 0.3302 | - | - | - |
| 2.9878 | 441900 | 0.3047 | - | - | - |
| 2.9885 | 442000 | 0.3353 | - | - | - |
| 2.9892 | 442100 | 0.2927 | - | - | - |
| 2.9898 | 442200 | 0.3905 | - | - | - |
| 2.9905 | 442300 | 0.3256 | - | - | - |
| 2.9912 | 442400 | 0.3546 | - | - | - |
| 2.9919 | 442500 | 0.2989 | - | - | - |
| 2.9925 | 442600 | 0.3113 | - | - | - |
| 2.9932 | 442700 | 0.3127 | - | - | - |
| 2.9939 | 442800 | 0.3393 | - | - | - |
| 2.9946 | 442900 | 0.2916 | - | - | - |
| 2.9952 | 443000 | 0.3403 | - | - | - |
| 2.9959 | 443100 | 0.318 | - | - | - |
| 2.9966 | 443200 | 0.3252 | - | - | - |
| 2.9973 | 443300 | 0.2852 | - | - | - |
| 2.9980 | 443400 | 0.3143 | - | - | - |
| 2.9986 | 443500 | 0.3042 | - | - | - |
| 2.9993 | 443600 | 0.3474 | - | - | - |
| 3.0000 | 443700 | 0.3281 | - | - | - |
| 3.0 | 443703 | - | - | - | 0.7955 |
</details>
### Framework Versions
- Python: 3.11.8
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
# paraphrase-multilingual-MiniLM-L12-hu-v3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** hu
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("karsar/paraphrase-multilingual-MiniLM-L12-hu-v3")
# Run inference
sentences = [
'a sellő szindróma genetikai okai',
'Rfcamat válasza. Bizalom szavazat: 459. Ha sellő-szindrómásod van, akkor vele születtél volna, és inkább hasadt volna a lábad, vagy mindkettőt amputálták volna. A sellőszindróma oka a test alsó részének (lábainak) oxigén- és tápanyaghiánya a keringési rendszer problémája miatt.További információ az alábbi linken.a sellő szindrómát nem kaphatja meg. Ez egy veleszületett állapot, ami azt jelenti, hogy vele kell születned ahhoz, hogy meglegyen. A betegségben szenvedő személy nem sellő, csak arról van szó, hogy a lábai összeforrtak. Számos belső szerv hiányzik vagy deformálódott.',
'1 A sellő-szindróma annak a következménye is lehet, hogy az anya sugárzásnak és más környezeti hatásoknak van kitéve, amelyek a magzat normális fejlődésében részt vevő gének mutációit okozták. 2 Spontán mutációk vagy a magzatban természetesen előforduló mutációk is okozhatták a születési rendellenességet. Kutatásokra van szükség ahhoz, hogy kiderítsük a sellőszindróma genetikai, biológiai vagy környezeti okait. A sellő szindróma kezelése. Ha a két láb csak a bőrön keresztül olvadt össze, és a három fő csont teljesen és megfelelően kialakult, műtétet alkalmaznak a két láb szétválasztására.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `all-triplet-dev` and `all-triplet-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | all-triplet-dev | all-triplet-test |
|:--------------------|:----------------|:-----------------|
| **cosine_accuracy** | **0.7851** | **0.7955** |
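The cosine-accuracy metric above is the fraction of triplets for which the anchor is closer (by cosine similarity) to its positive than to its negative. As a rough pure-Python illustration of that computation — not the actual `TripletEvaluator` implementation, and with made-up toy embeddings — it can be sketched as:

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def triplet_cosine_accuracy(anchors, positives, negatives):
    """Fraction of triplets where the anchor is more similar to the
    positive than to the negative under cosine similarity."""
    hits = sum(
        1
        for a, p, n in zip(anchors, positives, negatives)
        if cos_sim(a, p) > cos_sim(a, n)
    )
    return hits / len(anchors)

# Toy 2-D "embeddings": the first two triplets are ranked correctly,
# the third is not, so the accuracy is 2/3.
anchors   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9], [-1.0, 0.0]]
negatives = [[0.0, 1.0], [1.0, 0.0], [1.0, 0.9]]
print(triplet_cosine_accuracy(anchors, positives, negatives))
```

In the real evaluation the embeddings come from `model.encode(...)` over the dev/test triplets rather than hand-written vectors.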
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,207,229 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.64 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 58.58 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 57.82 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------|:----------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Megfordult, és előhúzta a kardját.</code> | <code>A kard megrajzolták.</code> | <code>A férfi ott hagyta a kardját, ahol volt.</code> |
| <code>Egy férfi, aki egy betonfalnak támaszkodik, karjait felül támasztja, az erkélyre néz.</code> | <code>Egy férfi a falnak támaszkodik.</code> | <code>Egy férfi egy fafalnak támaszkodik.</code> |
| <code>A nő a szabadban van.</code> | <code>Nő egy ruhában sétál át a hídon.</code> | <code>Egy nő a levegőben lévő lábával harcművészeti mozdulatot hajt végre egy edzőteremben, miközben öt csapattársa vagy versenyzője néz rá.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
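Conceptually, MultipleNegativesRankingLoss treats every other positive in the batch as an in-batch negative: each anchor's scaled cosine similarities form a softmax row whose target is its own positive. The sketch below is a simplified pure-Python version under that assumption (it omits the explicit hard negatives that this dataset's third column additionally appends to the candidate set):

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (
        math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    )

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch MultipleNegativesRankingLoss: for each anchor i, every
    positive j != i serves as a negative; cross-entropy over the scaled
    cosine-similarity row with target index i."""
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [scale * cos_sim(a, p) for p in positives]
        log_z = math.log(sum(math.exp(l) for l in logits))
        total += log_z - logits[i]
    return total / len(anchors)

# Matched pairs give a loss near 0; mismatched pairs give a large loss.
aligned = mnr_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = mnr_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
print(aligned, shuffled)
```

The `scale: 20.0` parameter above is the softmax temperature multiplier applied to the cosine similarities before the cross-entropy.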
### Evaluation Dataset
#### json
* Dataset: json
* Size: 1,207,229 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 17.92 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 59.36 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 57.86 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Az emberek nézik, amint egy zenész gitározik.</code> | <code>egy gitáros játszik az embereknek</code> | <code>Az emberek egy autóroncsot néznek.</code> |
| <code>hány csepp van egy ml-ben</code> | <code>Egy szabványos szemcseppentő 0,05 ml-t adagol cseppenként, ami azt jelenti, hogy 1 milliliter gyógyszerben 20 csepp van. Számoljuk ki: egy 5 ml-es üvegben 100, a 10 ml-es üvegben 200 adag van. (A legtöbb szemcsepp receptet 5 vagy 10 ml-es üvegekben adják ki.) A párolgás nem jelent nagy problémát, ha a kupakot minden alkalmazás után vissza kell cserélni. 30 napos hónapra számítva a napi egyszeri cseppek és a napi kétszeri cseppek egy 5 ml-es üvegben könnyen kitartanak egy hónapig. Egy 10 ml-es palack általában nagyobb adagok befogadására alkalmas. Íme, egy utolsó tipp.</code> | <code>Körülbelül 15-20 csepp van egy ml-ben. A folyadék viszkozitása megváltoztatja ezt a választ. Gondolhatja, hogy egy 5 ml-es üvegben 80-100 csepp van.</code> |
| <code>a szövetségi tartalékot milyen jogszabály hozta létre</code> | <code>Az „1913. évi Federal Reserve Act” MEGHATÁROZÁSA. Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba. Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba.</code> | <code>Az 1913-as amerikai törvényhozás, amely létrehozta a jelenlegi Federal Reserve System-et. A Federal Reserve Act a gazdasági stabilitás egy formáját kívánta megteremteni a monetáris politikáért felelős Központi Bank bevezetésével az Egyesült Államokba.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
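The combination of `lr_scheduler_type: linear` and `warmup_ratio: 0.1` implies a learning rate that ramps linearly from 0 over the first 10% of steps and then decays linearly to 0. A minimal sketch of that schedule (an illustration of the standard Transformers scheduler, not its exact code) follows:

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Learning rate at `step`: linear warmup over the first
    warmup_ratio fraction of steps, then linear decay to zero."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return base_lr * step / max(1, warmup)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup))

# With this run's 443,703 total steps, the peak of 5e-5 is reached
# around step 44,370 and the rate returns to 0 at the final step.
print(linear_lr(0, 443703), linear_lr(44370, 443703), linear_lr(443703, 443703))
```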
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
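Most entries above are library defaults; the handful of non-default values (`eval_strategy`, batch sizes, `num_train_epochs`, `warmup_ratio`, `bf16`, `batch_sampler`) are what this run actually changed. A minimal sketch collecting them as keyword arguments — `output_dir` is a placeholder, not part of the original configuration:

```python
# Non-default training arguments from the list above; everything else keeps
# its library default. `output_dir` is hypothetical (not in the original).
training_kwargs = {
    "output_dir": "output",            # placeholder path
    "eval_strategy": "steps",
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "learning_rate": 5e-05,
    "num_train_epochs": 3,
    "warmup_ratio": 0.1,
    "bf16": True,
    "batch_sampler": "no_duplicates",  # sentence-transformers-specific argument
}

# In sentence-transformers v3+, these kwargs would be passed as
# SentenceTransformerTrainingArguments(**training_kwargs) and handed to
# SentenceTransformerTrainer together with the model, datasets, and loss.
```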
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | all-triplet-dev_cosine_accuracy | all-triplet-test_cosine_accuracy |
|:------:|:------:|:-------------:|:---------------:|:-------------------------------:|:--------------------------------:|
| 0 | 0 | - | - | 0.6579 | - |
| 0.0007 | 100 | 1.0 | - | - | - |
| 0.0014 | 200 | 0.9771 | - | - | - |
| 0.0020 | 300 | 1.053 | - | - | - |
| 0.0027 | 400 | 0.887 | - | - | - |
| 0.0034 | 500 | 0.9726 | - | - | - |
| 0.0041 | 600 | 0.9072 | - | - | - |
| 0.0047 | 700 | 1.0523 | - | - | - |
| 0.0054 | 800 | 0.9033 | - | - | - |
| 0.0061 | 900 | 0.9774 | - | - | - |
| 0.0068 | 1000 | 0.8418 | - | - | - |
| 0.0074 | 1100 | 0.9079 | - | - | - |
| 0.0081 | 1200 | 0.7952 | - | - | - |
| 0.0088 | 1300 | 0.9232 | - | - | - |
| 0.0095 | 1400 | 0.8148 | - | - | - |
| 0.0101 | 1500 | 0.9004 | - | - | - |
| 0.0108 | 1600 | 0.8553 | - | - | - |
| 0.0115 | 1700 | 0.8049 | - | - | - |
| 0.0122 | 1800 | 0.7216 | - | - | - |
| 0.0128 | 1900 | 0.7598 | - | - | - |
| 0.0135 | 2000 | 0.802 | - | - | - |
| 0.0142 | 2100 | 0.879 | - | - | - |
| 0.0149 | 2200 | 0.8042 | - | - | - |
| 0.0156 | 2300 | 0.7186 | - | - | - |
| 0.0162 | 2400 | 0.7569 | - | - | - |
| 0.0169 | 2500 | 0.7585 | - | - | - |
| 0.0176 | 2600 | 0.7419 | - | - | - |
| 0.0183 | 2700 | 0.6902 | - | - | - |
| 0.0189 | 2800 | 0.7811 | - | - | - |
| 0.0196 | 2900 | 0.6972 | - | - | - |
| 0.0203 | 3000 | 0.6638 | - | - | - |
| 0.0210 | 3100 | 0.6797 | - | - | - |
| 0.0216 | 3200 | 0.6809 | - | - | - |
| 0.0223 | 3300 | 0.7417 | - | - | - |
| 0.0230 | 3400 | 0.7048 | - | - | - |
| 0.0237 | 3500 | 0.6981 | - | - | - |
| 0.0243 | 3600 | 0.6724 | - | - | - |
| 0.0250 | 3700 | 0.635 | - | - | - |
| 0.0257 | 3800 | 0.6869 | - | - | - |
| 0.0264 | 3900 | 0.6868 | - | - | - |
| 0.0270 | 4000 | 0.658 | - | - | - |
| 0.0277 | 4100 | 0.6692 | - | - | - |
| 0.0284 | 4200 | 0.6254 | - | - | - |
| 0.0291 | 4300 | 0.7114 | - | - | - |
| 0.0297 | 4400 | 0.6143 | - | - | - |
| 0.0304 | 4500 | 0.6775 | - | - | - |
| 0.0311 | 4600 | 0.6419 | - | - | - |
| 0.0318 | 4700 | 0.6887 | - | - | - |
| 0.0325 | 4800 | 0.6529 | - | - | - |
| 0.0331 | 4900 | 0.6365 | - | - | - |
| 0.0338 | 5000 | 0.6158 | 0.6443 | 0.7006 | - |
| 0.0345 | 5100 | 0.6508 | - | - | - |
| 0.0352 | 5200 | 0.6424 | - | - | - |
| 0.0358 | 5300 | 0.6766 | - | - | - |
| 0.0365 | 5400 | 0.6487 | - | - | - |
| 0.0372 | 5500 | 0.6886 | - | - | - |
| 0.0379 | 5600 | 0.6211 | - | - | - |
| 0.0385 | 5700 | 0.6523 | - | - | - |
| 0.0392 | 5800 | 0.6377 | - | - | - |
| 0.0399 | 5900 | 0.6524 | - | - | - |
| 0.0406 | 6000 | 0.6028 | - | - | - |
| 0.0412 | 6100 | 0.6466 | - | - | - |
| 0.0419 | 6200 | 0.6373 | - | - | - |
| 0.0426 | 6300 | 0.6434 | - | - | - |
| 0.0433 | 6400 | 0.6131 | - | - | - |
| 0.0439 | 6500 | 0.6133 | - | - | - |
| 0.0446 | 6600 | 0.6323 | - | - | - |
| 0.0453 | 6700 | 0.6384 | - | - | - |
| 0.0460 | 6800 | 0.6757 | - | - | - |
| 0.0467 | 6900 | 0.6366 | - | - | - |
| 0.0473 | 7000 | 0.6154 | - | - | - |
| 0.0480 | 7100 | 0.6554 | - | - | - |
| 0.0487 | 7200 | 0.6584 | - | - | - |
| 0.0494 | 7300 | 0.6527 | - | - | - |
| 0.0500 | 7400 | 0.5794 | - | - | - |
| 0.0507 | 7500 | 0.629 | - | - | - |
| 0.0514 | 7600 | 0.6272 | - | - | - |
| 0.0521 | 7700 | 0.6614 | - | - | - |
| 0.0527 | 7800 | 0.6511 | - | - | - |
| 0.0534 | 7900 | 0.5902 | - | - | - |
| 0.0541 | 8000 | 0.6243 | - | - | - |
| 0.0548 | 8100 | 0.5976 | - | - | - |
| 0.0554 | 8200 | 0.6198 | - | - | - |
| 0.0561 | 8300 | 0.6478 | - | - | - |
| 0.0568 | 8400 | 0.6167 | - | - | - |
| 0.0575 | 8500 | 0.6635 | - | - | - |
| 0.0581 | 8600 | 0.6189 | - | - | - |
| 0.0588 | 8700 | 0.5938 | - | - | - |
| 0.0595 | 8800 | 0.6059 | - | - | - |
| 0.0602 | 8900 | 0.6043 | - | - | - |
| 0.0609 | 9000 | 0.5994 | - | - | - |
| 0.0615 | 9100 | 0.6122 | - | - | - |
| 0.0622 | 9200 | 0.6553 | - | - | - |
| 0.0629 | 9300 | 0.5798 | - | - | - |
| 0.0636 | 9400 | 0.6315 | - | - | - |
| 0.0642 | 9500 | 0.7163 | - | - | - |
| 0.0649 | 9600 | 0.618 | - | - | - |
| 0.0656 | 9700 | 0.6174 | - | - | - |
| 0.0663 | 9800 | 0.6291 | - | - | - |
| 0.0669 | 9900 | 0.6296 | - | - | - |
| 0.0676 | 10000 | 0.6421 | 0.6147 | 0.7206 | - |
| 0.0683 | 10100 | 0.6046 | - | - | - |
| 0.0690 | 10200 | 0.5878 | - | - | - |
| 0.0696 | 10300 | 0.6091 | - | - | - |
| 0.0703 | 10400 | 0.6736 | - | - | - |
| 0.0710 | 10500 | 0.6205 | - | - | - |
| 0.0717 | 10600 | 0.5922 | - | - | - |
| 0.0723 | 10700 | 0.5989 | - | - | - |
| 0.0730 | 10800 | 0.614 | - | - | - |
| 0.0737 | 10900 | 0.6304 | - | - | - |
| 0.0744 | 11000 | 0.6241 | - | - | - |
| 0.0751 | 11100 | 0.5657 | - | - | - |
| 0.0757 | 11200 | 0.6008 | - | - | - |
| 0.0764 | 11300 | 0.6249 | - | - | - |
| 0.0771 | 11400 | 0.5991 | - | - | - |
| 0.0778 | 11500 | 0.5798 | - | - | - |
| 0.0784 | 11600 | 0.6286 | - | - | - |
| 0.0791 | 11700 | 0.6672 | - | - | - |
| 0.0798 | 11800 | 0.5947 | - | - | - |
| 0.0805 | 11900 | 0.5958 | - | - | - |
| 0.0811 | 12000 | 0.6229 | - | - | - |
| 0.0818 | 12100 | 0.6162 | - | - | - |
| 0.0825 | 12200 | 0.573 | - | - | - |
| 0.0832 | 12300 | 0.5661 | - | - | - |
| 0.0838 | 12400 | 0.594 | - | - | - |
| 0.0845 | 12500 | 0.5654 | - | - | - |
| 0.0852 | 12600 | 0.5925 | - | - | - |
| 0.0859 | 12700 | 0.6019 | - | - | - |
| 0.0865 | 12800 | 0.6 | - | - | - |
| 0.0872 | 12900 | 0.5931 | - | - | - |
| 0.0879 | 13000 | 0.6517 | - | - | - |
| 0.0886 | 13100 | 0.573 | - | - | - |
| 0.0892 | 13200 | 0.6486 | - | - | - |
| 0.0899 | 13300 | 0.6032 | - | - | - |
| 0.0906 | 13400 | 0.5799 | - | - | - |
| 0.0913 | 13500 | 0.585 | - | - | - |
| 0.0920 | 13600 | 0.6025 | - | - | - |
| 0.0926 | 13700 | 0.5873 | - | - | - |
| 0.0933 | 13800 | 0.6339 | - | - | - |
| 0.0940 | 13900 | 0.5779 | - | - | - |
| 0.0947 | 14000 | 0.5974 | - | - | - |
| 0.0953 | 14100 | 0.5706 | - | - | - |
| 0.0960 | 14200 | 0.5906 | - | - | - |
| 0.0967 | 14300 | 0.562 | - | - | - |
| 0.0974 | 14400 | 0.6264 | - | - | - |
| 0.0980 | 14500 | 0.6248 | - | - | - |
| 0.0987 | 14600 | 0.6212 | - | - | - |
| 0.0994 | 14700 | 0.5845 | - | - | - |
| 0.1001 | 14800 | 0.6237 | - | - | - |
| 0.1007 | 14900 | 0.5905 | - | - | - |
| 0.1014 | 15000 | 0.6176 | 0.5981 | 0.7167 | - |
| 0.1021 | 15100 | 0.6059 | - | - | - |
| 0.1028 | 15200 | 0.5882 | - | - | - |
| 0.1034 | 15300 | 0.5692 | - | - | - |
| 0.1041 | 15400 | 0.6028 | - | - | - |
| 0.1048 | 15500 | 0.5876 | - | - | - |
| 0.1055 | 15600 | 0.6507 | - | - | - |
| 0.1062 | 15700 | 0.5612 | - | - | - |
| 0.1068 | 15800 | 0.5882 | - | - | - |
| 0.1075 | 15900 | 0.5646 | - | - | - |
| 0.1082 | 16000 | 0.6212 | - | - | - |
| 0.1089 | 16100 | 0.6108 | - | - | - |
| 0.1095 | 16200 | 0.619 | - | - | - |
| 0.1102 | 16300 | 0.5962 | - | - | - |
| 0.1109 | 16400 | 0.6056 | - | - | - |
| 0.1116 | 16500 | 0.6057 | - | - | - |
| 0.1122 | 16600 | 0.5535 | - | - | - |
| 0.1129 | 16700 | 0.6167 | - | - | - |
| 0.1136 | 16800 | 0.5695 | - | - | - |
| 0.1143 | 16900 | 0.599 | - | - | - |
| 0.1149 | 17000 | 0.6122 | - | - | - |
| 0.1156 | 17100 | 0.5779 | - | - | - |
| 0.1163 | 17200 | 0.5822 | - | - | - |
| 0.1170 | 17300 | 0.6244 | - | - | - |
| 0.1176 | 17400 | 0.6428 | - | - | - |
| 0.1183 | 17500 | 0.6326 | - | - | - |
| 0.1190 | 17600 | 0.6027 | - | - | - |
| 0.1197 | 17700 | 0.5705 | - | - | - |
| 0.1204 | 17800 | 0.5414 | - | - | - |
| 0.1210 | 17900 | 0.5966 | - | - | - |
| 0.1217 | 18000 | 0.65 | - | - | - |
| 0.1224 | 18100 | 0.6097 | - | - | - |
| 0.1231 | 18200 | 0.5988 | - | - | - |
| 0.1237 | 18300 | 0.5901 | - | - | - |
| 0.1244 | 18400 | 0.6146 | - | - | - |
| 0.1251 | 18500 | 0.6408 | - | - | - |
| 0.1258 | 18600 | 0.6034 | - | - | - |
| 0.1264 | 18700 | 0.5878 | - | - | - |
| 0.1271 | 18800 | 0.5934 | - | - | - |
| 0.1278 | 18900 | 0.6162 | - | - | - |
| 0.1285 | 19000 | 0.6255 | - | - | - |
| 0.1291 | 19100 | 0.6546 | - | - | - |
| 0.1298 | 19200 | 0.59 | - | - | - |
| 0.1305 | 19300 | 0.6331 | - | - | - |
| 0.1312 | 19400 | 0.6444 | - | - | - |
| 0.1318 | 19500 | 0.6105 | - | - | - |
| 0.1325 | 19600 | 0.6169 | - | - | - |
| 0.1332 | 19700 | 0.6123 | - | - | - |
| 0.1339 | 19800 | 0.6612 | - | - | - |
| 0.1345 | 19900 | 0.6309 | - | - | - |
| 0.1352 | 20000 | 0.6805 | 0.5901 | 0.7213 | - |
| 0.1359 | 20100 | 0.6073 | - | - | - |
| 0.1366 | 20200 | 0.5956 | - | - | - |
| 0.1373 | 20300 | 0.6229 | - | - | - |
| 0.1379 | 20400 | 0.5919 | - | - | - |
| 0.1386 | 20500 | 0.6112 | - | - | - |
| 0.1393 | 20600 | 0.5877 | - | - | - |
| 0.1400 | 20700 | 0.6279 | - | - | - |
| 0.1406 | 20800 | 0.595 | - | - | - |
| 0.1413 | 20900 | 0.6205 | - | - | - |
| 0.1420 | 21000 | 0.5862 | - | - | - |
| 0.1427 | 21100 | 0.5719 | - | - | - |
| 0.1433 | 21200 | 0.5943 | - | - | - |
| 0.1440 | 21300 | 0.6299 | - | - | - |
| 0.1447 | 21400 | 0.5718 | - | - | - |
| 0.1454 | 21500 | 0.567 | - | - | - |
| 0.1460 | 21600 | 0.5808 | - | - | - |
| 0.1467 | 21700 | 0.5727 | - | - | - |
| 0.1474 | 21800 | 0.5625 | - | - | - |
| 0.1481 | 21900 | 0.6031 | - | - | - |
| 0.1487 | 22000 | 0.6512 | - | - | - |
| 0.1494 | 22100 | 0.5794 | - | - | - |
| 0.1501 | 22200 | 0.6473 | - | - | - |
| 0.1508 | 22300 | 0.6517 | - | - | - |
| 0.1515 | 22400 | 0.5644 | - | - | - |
| 0.1521 | 22500 | 0.587 | - | - | - |
| 0.1528 | 22600 | 0.5915 | - | - | - |
| 0.1535 | 22700 | 0.6034 | - | - | - |
| 0.1542 | 22800 | 0.6403 | - | - | - |
| 0.1548 | 22900 | 0.5921 | - | - | - |
| 0.1555 | 23000 | 0.5784 | - | - | - |
| 0.1562 | 23100 | 0.5978 | - | - | - |
| 0.1569 | 23200 | 0.6665 | - | - | - |
| 0.1575 | 23300 | 0.626 | - | - | - |
| 0.1582 | 23400 | 0.6435 | - | - | - |
| 0.1589 | 23500 | 0.6035 | - | - | - |
| 0.1596 | 23600 | 0.6134 | - | - | - |
| 0.1602 | 23700 | 0.6205 | - | - | - |
| 0.1609 | 23800 | 0.6334 | - | - | - |
| 0.1616 | 23900 | 0.6577 | - | - | - |
| 0.1623 | 24000 | 0.6574 | - | - | - |
| 0.1629 | 24100 | 0.6195 | - | - | - |
| 0.1636 | 24200 | 0.5966 | - | - | - |
| 0.1643 | 24300 | 0.6062 | - | - | - |
| 0.1650 | 24400 | 0.6582 | - | - | - |
| 0.1657 | 24500 | 0.5918 | - | - | - |
| 0.1663 | 24600 | 0.6007 | - | - | - |
| 0.1670 | 24700 | 0.6773 | - | - | - |
| 0.1677 | 24800 | 0.5891 | - | - | - |
| 0.1684 | 24900 | 0.6442 | - | - | - |
| 0.1690 | 25000 | 0.623 | 0.5940 | 0.7284 | - |
| 0.1697 | 25100 | 0.6034 | - | - | - |
| 0.1704 | 25200 | 0.62 | - | - | - |
| 0.1711 | 25300 | 0.5884 | - | - | - |
| 0.1717 | 25400 | 0.5619 | - | - | - |
| 0.1724 | 25500 | 0.6289 | - | - | - |
| 0.1731 | 25600 | 0.5684 | - | - | - |
| 0.1738 | 25700 | 0.613 | - | - | - |
| 0.1744 | 25800 | 0.6573 | - | - | - |
| 0.1751 | 25900 | 0.5645 | - | - | - |
| 0.1758 | 26000 | 0.6113 | - | - | - |
| 0.1765 | 26100 | 0.6504 | - | - | - |
| 0.1771 | 26200 | 0.615 | - | - | - |
| 0.1778 | 26300 | 0.6404 | - | - | - |
| 0.1785 | 26400 | 0.6431 | - | - | - |
| 0.1792 | 26500 | 0.619 | - | - | - |
| 0.1799 | 26600 | 0.6201 | - | - | - |
| 0.1805 | 26700 | 0.5756 | - | - | - |
| 0.1812 | 26800 | 0.5796 | - | - | - |
| 0.1819 | 26900 | 0.6046 | - | - | - |
| 0.1826 | 27000 | 0.6042 | - | - | - |
| 0.1832 | 27100 | 0.6867 | - | - | - |
| 0.1839 | 27200 | 0.6236 | - | - | - |
| 0.1846 | 27300 | 0.5696 | - | - | - |
| 0.1853 | 27400 | 0.6366 | - | - | - |
| 0.1859 | 27500 | 0.6467 | - | - | - |
| 0.1866 | 27600 | 0.6449 | - | - | - |
| 0.1873 | 27700 | 0.6579 | - | - | - |
| 0.1880 | 27800 | 0.6005 | - | - | - |
| 0.1886 | 27900 | 0.5824 | - | - | - |
| 0.1893 | 28000 | 0.6376 | - | - | - |
| 0.1900 | 28100 | 0.6348 | - | - | - |
| 0.1907 | 28200 | 0.5968 | - | - | - |
| 0.1913 | 28300 | 0.6361 | - | - | - |
| 0.1920 | 28400 | 0.5847 | - | - | - |
| 0.1927 | 28500 | 0.6203 | - | - | - |
| 0.1934 | 28600 | 0.6186 | - | - | - |
| 0.1940 | 28700 | 0.6275 | - | - | - |
| 0.1947 | 28800 | 0.5804 | - | - | - |
| 0.1954 | 28900 | 0.5898 | - | - | - |
| 0.1961 | 29000 | 0.6201 | - | - | - |
| 0.1968 | 29100 | 0.591 | - | - | - |
| 0.1974 | 29200 | 0.6571 | - | - | - |
| 0.1981 | 29300 | 0.6228 | - | - | - |
| 0.1988 | 29400 | 0.6722 | - | - | - |
| 0.1995 | 29500 | 0.5665 | - | - | - |
| 0.2001 | 29600 | 0.6216 | - | - | - |
| 0.2008 | 29700 | 0.6258 | - | - | - |
| 0.2015 | 29800 | 0.5789 | - | - | - |
| 0.2022 | 29900 | 0.6193 | - | - | - |
| 0.2028 | 30000 | 0.6435 | 0.6061 | 0.7186 | - |
| 0.2035 | 30100 | 0.6314 | - | - | - |
| 0.2042 | 30200 | 0.5847 | - | - | - |
| 0.2049 | 30300 | 0.6053 | - | - | - |
| 0.2055 | 30400 | 0.602 | - | - | - |
| 0.2062 | 30500 | 0.613 | - | - | - |
| 0.2069 | 30600 | 0.5967 | - | - | - |
| 0.2076 | 30700 | 0.6305 | - | - | - |
| 0.2082 | 30800 | 0.6322 | - | - | - |
| 0.2089 | 30900 | 0.6252 | - | - | - |
| 0.2096 | 31000 | 0.6217 | - | - | - |
| 0.2103 | 31100 | 0.586 | - | - | - |
| 0.2110 | 31200 | 0.6274 | - | - | - |
| 0.2116 | 31300 | 0.5972 | - | - | - |
| 0.2123 | 31400 | 0.6104 | - | - | - |
| 0.2130 | 31500 | 0.5858 | - | - | - |
| 0.2137 | 31600 | 0.6365 | - | - | - |
| 0.2143 | 31700 | 0.596 | - | - | - |
| 0.2150 | 31800 | 0.632 | - | - | - |
| 0.2157 | 31900 | 0.6488 | - | - | - |
| 0.2164 | 32000 | 0.6164 | - | - | - |
| 0.2170 | 32100 | 0.6263 | - | - | - |
| 0.2177 | 32200 | 0.6388 | - | - | - |
| 0.2184 | 32300 | 0.6245 | - | - | - |
| 0.2191 | 32400 | 0.6364 | - | - | - |
| 0.2197 | 32500 | 0.6578 | - | - | - |
| 0.2204 | 32600 | 0.6033 | - | - | - |
| 0.2211 | 32700 | 0.6066 | - | - | - |
| 0.2218 | 32800 | 0.6938 | - | - | - |
| 0.2224 | 32900 | 0.6226 | - | - | - |
| 0.2231 | 33000 | 0.6472 | - | - | - |
| 0.2238 | 33100 | 0.6485 | - | - | - |
| 0.2245 | 33200 | 0.6636 | - | - | - |
| 0.2252 | 33300 | 0.633 | - | - | - |
| 0.2258 | 33400 | 0.5909 | - | - | - |
| 0.2265 | 33500 | 0.6209 | - | - | - |
| 0.2272 | 33600 | 0.6256 | - | - | - |
| 0.2279 | 33700 | 0.6476 | - | - | - |
| 0.2285 | 33800 | 0.6369 | - | - | - |
| 0.2292 | 33900 | 0.6135 | - | - | - |
| 0.2299 | 34000 | 0.6749 | - | - | - |
| 0.2306 | 34100 | 0.6354 | - | - | - |
| 0.2312 | 34200 | 0.625 | - | - | - |
| 0.2319 | 34300 | 0.616 | - | - | - |
| 0.2326 | 34400 | 0.6047 | - | - | - |
| 0.2333 | 34500 | 0.6431 | - | - | - |
| 0.2339 | 34600 | 0.6576 | - | - | - |
| 0.2346 | 34700 | 0.6344 | - | - | - |
| 0.2353 | 34800 | 0.6477 | - | - | - |
| 0.2360 | 34900 | 0.6094 | - | - | - |
| 0.2366 | 35000 | 0.6243 | 0.6088 | 0.7208 | - |
| 0.2373 | 35100 | 0.5981 | - | - | - |
| 0.2380 | 35200 | 0.559 | - | - | - |
| 0.2387 | 35300 | 0.6523 | - | - | - |
| 0.2393 | 35400 | 0.6018 | - | - | - |
| 0.2400 | 35500 | 0.6228 | - | - | - |
| 0.2407 | 35600 | 0.6321 | - | - | - |
| 0.2414 | 35700 | 0.6072 | - | - | - |
| 0.2421 | 35800 | 0.6467 | - | - | - |
| 0.2427 | 35900 | 0.6676 | - | - | - |
| 0.2434 | 36000 | 0.6486 | - | - | - |
| 0.2441 | 36100 | 0.6241 | - | - | - |
| 0.2448 | 36200 | 0.6534 | - | - | - |
| 0.2454 | 36300 | 0.5945 | - | - | - |
| 0.2461 | 36400 | 0.6432 | - | - | - |
| 0.2468 | 36500 | 0.6952 | - | - | - |
| 0.2475 | 36600 | 0.6741 | - | - | - |
| 0.2481 | 36700 | 0.6525 | - | - | - |
| 0.2488 | 36800 | 0.599 | - | - | - |
| 0.2495 | 36900 | 0.643 | - | - | - |
| 0.2502 | 37000 | 0.6254 | - | - | - |
| 0.2508 | 37100 | 0.6511 | - | - | - |
| 0.2515 | 37200 | 0.6694 | - | - | - |
| 0.2522 | 37300 | 0.6213 | - | - | - |
| 0.2529 | 37400 | 0.6465 | - | - | - |
| 0.2535 | 37500 | 0.6623 | - | - | - |
| 0.2542 | 37600 | 0.6205 | - | - | - |
| 0.2549 | 37700 | 0.6552 | - | - | - |
| 0.2556 | 37800 | 0.5855 | - | - | - |
| 0.2563 | 37900 | 0.5539 | - | - | - |
| 0.2569 | 38000 | 0.6411 | - | - | - |
| 0.2576 | 38100 | 0.6509 | - | - | - |
| 0.2583 | 38200 | 0.6843 | - | - | - |
| 0.2590 | 38300 | 0.6742 | - | - | - |
| 0.2596 | 38400 | 0.6214 | - | - | - |
| 0.2603 | 38500 | 0.6486 | - | - | - |
| 0.2610 | 38600 | 0.6209 | - | - | - |
| 0.2617 | 38700 | 0.624 | - | - | - |
| 0.2623 | 38800 | 0.6221 | - | - | - |
| 0.2630 | 38900 | 0.6574 | - | - | - |
| 0.2637 | 39000 | 0.6147 | - | - | - |
| 0.2644 | 39100 | 0.6187 | - | - | - |
| 0.2650 | 39200 | 0.6194 | - | - | - |
| 0.2657 | 39300 | 0.589 | - | - | - |
| 0.2664 | 39400 | 0.6393 | - | - | - |
| 0.2671 | 39500 | 0.6584 | - | - | - |
| 0.2677 | 39600 | 0.6272 | - | - | - |
| 0.2684 | 39700 | 0.63 | - | - | - |
| 0.2691 | 39800 | 0.6646 | - | - | - |
| 0.2698 | 39900 | 0.5913 | - | - | - |
| 0.2705 | 40000 | 0.6878 | 0.6177 | 0.7156 | - |
| 0.2711 | 40100 | 0.6421 | - | - | - |
| 0.2718 | 40200 | 0.6111 | - | - | - |
| 0.2725 | 40300 | 0.6301 | - | - | - |
| 0.2732 | 40400 | 0.6192 | - | - | - |
| 0.2738 | 40500 | 0.6505 | - | - | - |
| 0.2745 | 40600 | 0.6067 | - | - | - |
| 0.2752 | 40700 | 0.6543 | - | - | - |
| 0.2759 | 40800 | 0.6214 | - | - | - |
| 0.2765 | 40900 | 0.6094 | - | - | - |
| 0.2772 | 41000 | 0.5979 | - | - | - |
| 0.2779 | 41100 | 0.6261 | - | - | - |
| 0.2786 | 41200 | 0.6484 | - | - | - |
| 0.2792 | 41300 | 0.6576 | - | - | - |
| 0.2799 | 41400 | 0.5837 | - | - | - |
| 0.2806 | 41500 | 0.6467 | - | - | - |
| 0.2813 | 41600 | 0.6436 | - | - | - |
| 0.2819 | 41700 | 0.6287 | - | - | - |
| 0.2826 | 41800 | 0.7045 | - | - | - |
| 0.2833 | 41900 | 0.6501 | - | - | - |
| 0.2840 | 42000 | 0.6895 | - | - | - |
| 0.2846 | 42100 | 0.6133 | - | - | - |
| 0.2853 | 42200 | 0.6624 | - | - | - |
| 0.2860 | 42300 | 0.6151 | - | - | - |
| 0.2867 | 42400 | 0.6498 | - | - | - |
| 0.2874 | 42500 | 0.6361 | - | - | - |
| 0.2880 | 42600 | 0.6671 | - | - | - |
| 0.2887 | 42700 | 0.6821 | - | - | - |
| 0.2894 | 42800 | 0.6116 | - | - | - |
| 0.2901 | 42900 | 0.6758 | - | - | - |
| 0.2907 | 43000 | 0.6289 | - | - | - |
| 0.2914 | 43100 | 0.5684 | - | - | - |
| 0.2921 | 43200 | 0.6287 | - | - | - |
| 0.2928 | 43300 | 0.6498 | - | - | - |
| 0.2934 | 43400 | 0.6669 | - | - | - |
| 0.2941 | 43500 | 0.6127 | - | - | - |
| 0.2948 | 43600 | 0.6474 | - | - | - |
| 0.2955 | 43700 | 0.6459 | - | - | - |
| 0.2961 | 43800 | 0.6588 | - | - | - |
| 0.2968 | 43900 | 0.6231 | - | - | - |
| 0.2975 | 44000 | 0.6723 | - | - | - |
| 0.2982 | 44100 | 0.5787 | - | - | - |
| 0.2988 | 44200 | 0.6469 | - | - | - |
| 0.2995 | 44300 | 0.6152 | - | - | - |
| 0.3002 | 44400 | 0.6105 | - | - | - |
| 0.3009 | 44500 | 0.6529 | - | - | - |
| 0.3016 | 44600 | 0.6514 | - | - | - |
| 0.3022 | 44700 | 0.603 | - | - | - |
| 0.3029 | 44800 | 0.6516 | - | - | - |
| 0.3036 | 44900 | 0.5861 | - | - | - |
| 0.3043 | 45000 | 0.6236 | 0.6444 | 0.7174 | - |
| 0.3049 | 45100 | 0.6714 | - | - | - |
| 0.3056 | 45200 | 0.6537 | - | - | - |
| 0.3063 | 45300 | 0.6436 | - | - | - |
| 0.3070 | 45400 | 0.6407 | - | - | - |
| 0.3076 | 45500 | 0.6597 | - | - | - |
| 0.3083 | 45600 | 0.6381 | - | - | - |
| 0.3090 | 45700 | 0.6688 | - | - | - |
| 0.3097 | 45800 | 0.6227 | - | - | - |
| 0.3103 | 45900 | 0.6119 | - | - | - |
| 0.3110 | 46000 | 0.6915 | - | - | - |
| 0.3117 | 46100 | 0.6381 | - | - | - |
| 0.3124 | 46200 | 0.6101 | - | - | - |
| 0.3130 | 46300 | 0.6061 | - | - | - |
| 0.3137 | 46400 | 0.6433 | - | - | - |
| 0.3144 | 46500 | 0.6245 | - | - | - |
| 0.3151 | 46600 | 0.6202 | - | - | - |
| 0.3158 | 46700 | 0.6556 | - | - | - |
| 0.3164 | 46800 | 0.6835 | - | - | - |
| 0.3171 | 46900 | 0.6869 | - | - | - |
| 0.3178 | 47000 | 0.5996 | - | - | - |
| 0.3185 | 47100 | 0.6391 | - | - | - |
| 0.3191 | 47200 | 0.6439 | - | - | - |
| 0.3198 | 47300 | 0.6664 | - | - | - |
| 0.3205 | 47400 | 0.6554 | - | - | - |
| 0.3212 | 47500 | 0.6527 | - | - | - |
| 0.3218 | 47600 | 0.6211 | - | - | - |
| 0.3225 | 47700 | 0.6645 | - | - | - |
| 0.3232 | 47800 | 0.66 | - | - | - |
| 0.3239 | 47900 | 0.5725 | - | - | - |
| 0.3245 | 48000 | 0.629 | - | - | - |
| 0.3252 | 48100 | 0.6016 | - | - | - |
| 0.3259 | 48200 | 0.6293 | - | - | - |
| 0.3266 | 48300 | 0.6543 | - | - | - |
| 0.3272 | 48400 | 0.6791 | - | - | - |
| 0.3279 | 48500 | 0.6016 | - | - | - |
| 0.3286 | 48600 | 0.678 | - | - | - |
| 0.3293 | 48700 | 0.6323 | - | - | - |
| 0.3300 | 48800 | 0.658 | - | - | - |
| 0.3306 | 48900 | 0.6325 | - | - | - |
| 0.3313 | 49000 | 0.6482 | - | - | - |
| 0.3320 | 49100 | 0.6245 | - | - | - |
| 0.3327 | 49200 | 0.6676 | - | - | - |
| 0.3333 | 49300 | 0.5797 | - | - | - |
| 0.3340 | 49400 | 0.6468 | - | - | - |
| 0.3347 | 49500 | 0.6416 | - | - | - |
| 0.3354 | 49600 | 0.6916 | - | - | - |
| 0.3360 | 49700 | 0.6063 | - | - | - |
| 0.3367 | 49800 | 0.6038 | - | - | - |
| 0.3374 | 49900 | 0.6232 | - | - | - |
| 0.3381 | 50000 | 0.6846 | 0.6324 | 0.7174 | - |
| 0.3387 | 50100 | 0.6282 | - | - | - |
| 0.3394 | 50200 | 0.6417 | - | - | - |
| 0.3401 | 50300 | 0.6414 | - | - | - |
| 0.3408 | 50400 | 0.6045 | - | - | - |
| 0.3414 | 50500 | 0.6352 | - | - | - |
| 0.3421 | 50600 | 0.6191 | - | - | - |
| 0.3428 | 50700 | 0.6575 | - | - | - |
| 0.3435 | 50800 | 0.6673 | - | - | - |
| 0.3441 | 50900 | 0.6318 | - | - | - |
| 0.3448 | 51000 | 0.6833 | - | - | - |
| 0.3455 | 51100 | 0.6585 | - | - | - |
| 0.3462 | 51200 | 0.6404 | - | - | - |
| 0.3469 | 51300 | 0.6103 | - | - | - |
| 0.3475 | 51400 | 0.6326 | - | - | - |
| 0.3482 | 51500 | 0.6061 | - | - | - |
| 0.3489 | 51600 | 0.6289 | - | - | - |
| 0.3496 | 51700 | 0.6171 | - | - | - |
| 0.3502 | 51800 | 0.6585 | - | - | - |
| 0.3509 | 51900 | 0.6368 | - | - | - |
| 0.3516 | 52000 | 0.6184 | - | - | - |
| 0.3523 | 52100 | 0.6797 | - | - | - |
| 0.3529 | 52200 | 0.6365 | - | - | - |
| 0.3536 | 52300 | 0.6044 | - | - | - |
| 0.3543 | 52400 | 0.6143 | - | - | - |
| 0.3550 | 52500 | 0.6061 | - | - | - |
| 0.3556 | 52600 | 0.599 | - | - | - |
| 0.3563 | 52700 | 0.5971 | - | - | - |
| 0.3570 | 52800 | 0.6478 | - | - | - |
| 0.3577 | 52900 | 0.6541 | - | - | - |
| 0.3583 | 53000 | 0.6451 | - | - | - |
| 0.3590 | 53100 | 0.6416 | - | - | - |
| 0.3597 | 53200 | 0.6254 | - | - | - |
| 0.3604 | 53300 | 0.6096 | - | - | - |
| 0.3611 | 53400 | 0.6307 | - | - | - |
| 0.3617 | 53500 | 0.606 | - | - | - |
| 0.3624 | 53600 | 0.6387 | - | - | - |
| 0.3631 | 53700 | 0.5961 | - | - | - |
| 0.3638 | 53800 | 0.6237 | - | - | - |
| 0.3644 | 53900 | 0.6239 | - | - | - |
| 0.3651 | 54000 | 0.6565 | - | - | - |
| 0.3658 | 54100 | 0.6405 | - | - | - |
| 0.3665 | 54200 | 0.6519 | - | - | - |
| 0.3671 | 54300 | 0.6073 | - | - | - |
| 0.3678 | 54400 | 0.5996 | - | - | - |
| 0.3685 | 54500 | 0.6359 | - | - | - |
| 0.3692 | 54600 | 0.6518 | - | - | - |
| 0.3698 | 54700 | 0.6553 | - | - | - |
| 0.3705 | 54800 | 0.644 | - | - | - |
| 0.3712 | 54900 | 0.6162 | - | - | - |
| 0.3719 | 55000 | 0.6249 | 0.6255 | 0.7278 | - |
| 0.3725 | 55100 | 0.6388 | - | - | - |
| 0.3732 | 55200 | 0.639 | - | - | - |
| 0.3739 | 55300 | 0.617 | - | - | - |
| 0.3746 | 55400 | 0.5962 | - | - | - |
| 0.3753 | 55500 | 0.6682 | - | - | - |
| 0.3759 | 55600 | 0.6443 | - | - | - |
| 0.3766 | 55700 | 0.6814 | - | - | - |
| 0.3773 | 55800 | 0.622 | - | - | - |
| 0.3780 | 55900 | 0.5706 | - | - | - |
| 0.3786 | 56000 | 0.634 | - | - | - |
| 0.3793 | 56100 | 0.716 | - | - | - |
| 0.3800 | 56200 | 0.6451 | - | - | - |
| 0.3807 | 56300 | 0.65 | - | - | - |
| 0.3813 | 56400 | 0.6057 | - | - | - |
| 0.3820 | 56500 | 0.698 | - | - | - |
| 0.3827 | 56600 | 0.623 | - | - | - |
| 0.3834 | 56700 | 0.6455 | - | - | - |
| 0.3840 | 56800 | 0.6551 | - | - | - |
| 0.3847 | 56900 | 0.6256 | - | - | - |
| 0.3854 | 57000 | 0.6746 | - | - | - |
| 0.3861 | 57100 | 0.6176 | - | - | - |
| 0.3867 | 57200 | 0.6617 | - | - | - |
| 0.3874 | 57300 | 0.6398 | - | - | - |
| 0.3881 | 57400 | 0.6081 | - | - | - |
| 0.3888 | 57500 | 0.6398 | - | - | - |
| 0.3894 | 57600 | 0.6344 | - | - | - |
| 0.3901 | 57700 | 0.6568 | - | - | - |
| 0.3908 | 57800 | 0.6455 | - | - | - |
| 0.3915 | 57900 | 0.6425 | - | - | - |
| 0.3922 | 58000 | 0.6042 | - | - | - |
| 0.3928 | 58100 | 0.6076 | - | - | - |
| 0.3935 | 58200 | 0.6339 | - | - | - |
| 0.3942 | 58300 | 0.6217 | - | - | - |
| 0.3949 | 58400 | 0.6651 | - | - | - |
| 0.3955 | 58500 | 0.6035 | - | - | - |
| 0.3962 | 58600 | 0.6103 | - | - | - |
| 0.3969 | 58700 | 0.6335 | - | - | - |
| 0.3976 | 58800 | 0.606 | - | - | - |
| 0.3982 | 58900 | 0.5992 | - | - | - |
| 0.3989 | 59000 | 0.5963 | - | - | - |
| 0.3996 | 59100 | 0.6815 | - | - | - |
| 0.4003 | 59200 | 0.6247 | - | - | - |
| 0.4009 | 59300 | 0.6558 | - | - | - |
| 0.4016 | 59400 | 0.64 | - | - | - |
| 0.4023 | 59500 | 0.6545 | - | - | - |
| 0.4030 | 59600 | 0.648 | - | - | - |
| 0.4036 | 59700 | 0.6931 | - | - | - |
| 0.4043 | 59800 | 0.6162 | - | - | - |
| 0.4050 | 59900 | 0.5646 | - | - | - |
| 0.4057 | 60000 | 0.6161 | 0.6338 | 0.7306 | - |
| 0.4064 | 60100 | 0.6343 | - | - | - |
| 0.4070 | 60200 | 0.6251 | - | - | - |
| 0.4077 | 60300 | 0.6308 | - | - | - |
| 0.4084 | 60400 | 0.645 | - | - | - |
| 0.4091 | 60500 | 0.6569 | - | - | - |
| 0.4097 | 60600 | 0.683 | - | - | - |
| 0.4104 | 60700 | 0.6618 | - | - | - |
| 0.4111 | 60800 | 0.6432 | - | - | - |
| 0.4118 | 60900 | 0.6021 | - | - | - |
| 0.4124 | 61000 | 0.6408 | - | - | - |
| 0.4131 | 61100 | 0.6512 | - | - | - |
| 0.4138 | 61200 | 0.657 | - | - | - |
| 0.4145 | 61300 | 0.6615 | - | - | - |
| 0.4151 | 61400 | 0.6271 | - | - | - |
| 0.4158 | 61500 | 0.6145 | - | - | - |
| 0.4165 | 61600 | 0.656 | - | - | - |
| 0.4172 | 61700 | 0.6566 | - | - | - |
| 0.4178 | 61800 | 0.6403 | - | - | - |
| 0.4185 | 61900 | 0.6262 | - | - | - |
| 0.4192 | 62000 | 0.6281 | - | - | - |
| 0.4199 | 62100 | 0.6687 | - | - | - |
| 0.4206 | 62200 | 0.6099 | - | - | - |
| 0.4212 | 62300 | 0.618 | - | - | - |
| 0.4219 | 62400 | 0.6656 | - | - | - |
| 0.4226 | 62500 | 0.6308 | - | - | - |
| 0.4233 | 62600 | 0.6708 | - | - | - |
| 0.4239 | 62700 | 0.6741 | - | - | - |
| 0.4246 | 62800 | 0.6129 | - | - | - |
| 0.4253 | 62900 | 0.6701 | - | - | - |
| 0.4260 | 63000 | 0.6287 | - | - | - |
| 0.4266 | 63100 | 0.6253 | - | - | - |
| 0.4273 | 63200 | 0.6209 | - | - | - |
| 0.4280 | 63300 | 0.6151 | - | - | - |
| 0.4287 | 63400 | 0.6661 | - | - | - |
| 0.4293 | 63500 | 0.593 | - | - | - |
| 0.4300 | 63600 | 0.6351 | - | - | - |
| 0.4307 | 63700 | 0.571 | - | - | - |
| 0.4314 | 63800 | 0.6677 | - | - | - |
| 0.4320 | 63900 | 0.6424 | - | - | - |
| 0.4327 | 64000 | 0.6167 | - | - | - |
| 0.4334 | 64100 | 0.6306 | - | - | - |
| 0.4341 | 64200 | 0.6459 | - | - | - |
| 0.4348 | 64300 | 0.6319 | - | - | - |
| 0.4354 | 64400 | 0.6046 | - | - | - |
| 0.4361 | 64500 | 0.5864 | - | - | - |
| 0.4368 | 64600 | 0.5976 | - | - | - |
| 0.4375 | 64700 | 0.6703 | - | - | - |
| 0.4381 | 64800 | 0.6285 | - | - | - |
| 0.4388 | 64900 | 0.6157 | - | - | - |
| 0.4395 | 65000 | 0.6242 | 0.6218 | 0.7230 | - |
| 0.4402 | 65100 | 0.6822 | - | - | - |
| 0.4408 | 65200 | 0.6187 | - | - | - |
| 0.4415 | 65300 | 0.6269 | - | - | - |
| 0.4422 | 65400 | 0.662 | - | - | - |
| 0.4429 | 65500 | 0.6735 | - | - | - |
| 0.4435 | 65600 | 0.5918 | - | - | - |
| 0.4442 | 65700 | 0.6078 | - | - | - |
| 0.4449 | 65800 | 0.6403 | - | - | - |
| 0.4456 | 65900 | 0.6206 | - | - | - |
| 0.4462 | 66000 | 0.6588 | - | - | - |
| 0.4469 | 66100 | 0.6088 | - | - | - |
| 0.4476 | 66200 | 0.682 | - | - | - |
| 0.4483 | 66300 | 0.6464 | - | - | - |
| 0.4489 | 66400 | 0.5804 | - | - | - |
| 0.4496 | 66500 | 0.619 | - | - | - |
| 0.4503 | 66600 | 0.5553 | - | - | - |
| 0.4510 | 66700 | 0.6467 | - | - | - |
| 0.4517 | 66800 | 0.6051 | - | - | - |
| 0.4523 | 66900 | 0.6018 | - | - | - |
| 0.4530 | 67000 | 0.6542 | - | - | - |
| 0.4537 | 67100 | 0.6279 | - | - | - |
| 0.4544 | 67200 | 0.6058 | - | - | - |
| 0.4550 | 67300 | 0.6401 | - | - | - |
| 0.4557 | 67400 | 0.6472 | - | - | - |
| 0.4564 | 67500 | 0.6139 | - | - | - |
| 0.4571 | 67600 | 0.6609 | - | - | - |
| 0.4577 | 67700 | 0.6618 | - | - | - |
| 0.4584 | 67800 | 0.6947 | - | - | - |
| 0.4591 | 67900 | 0.6402 | - | - | - |
| 0.4598 | 68000 | 0.626 | - | - | - |
| 0.4604 | 68100 | 0.5746 | - | - | - |
| 0.4611 | 68200 | 0.6357 | - | - | - |
| 0.4618 | 68300 | 0.5956 | - | - | - |
| 0.4625 | 68400 | 0.6628 | - | - | - |
| 0.4631 | 68500 | 0.6289 | - | - | - |
| 0.4638 | 68600 | 0.5994 | - | - | - |
| 0.4645 | 68700 | 0.6198 | - | - | - |
| 0.4652 | 68800 | 0.6084 | - | - | - |
| 0.4659 | 68900 | 0.5719 | - | - | - |
| 0.4665 | 69000 | 0.6377 | - | - | - |
| 0.4672 | 69100 | 0.6459 | - | - | - |
| 0.4679 | 69200 | 0.5992 | - | - | - |
| 0.4686 | 69300 | 0.6472 | - | - | - |
| 0.4692 | 69400 | 0.6353 | - | - | - |
| 0.4699 | 69500 | 0.6298 | - | - | - |
| 0.4706 | 69600 | 0.6451 | - | - | - |
| 0.4713 | 69700 | 0.612 | - | - | - |
| 0.4719 | 69800 | 0.6064 | - | - | - |
| 0.4726 | 69900 | 0.5837 | - | - | - |
| 0.4733 | 70000 | 0.6238 | 0.6179 | 0.7189 | - |
| 0.4740 | 70100 | 0.6257 | - | - | - |
| 0.4746 | 70200 | 0.6304 | - | - | - |
| 0.4753 | 70300 | 0.6209 | - | - | - |
| 0.4760 | 70400 | 0.621 | - | - | - |
| 0.4767 | 70500 | 0.6084 | - | - | - |
| 0.4773 | 70600 | 0.6252 | - | - | - |
| 0.4780 | 70700 | 0.5949 | - | - | - |
| 0.4787 | 70800 | 0.6235 | - | - | - |
| 0.4794 | 70900 | 0.6242 | - | - | - |
| 0.4801 | 71000 | 0.6453 | - | - | - |
| 0.4807 | 71100 | 0.6447 | - | - | - |
| 0.4814 | 71200 | 0.6388 | - | - | - |
| 0.4821 | 71300 | 0.6132 | - | - | - |
| 0.4828 | 71400 | 0.616 | - | - | - |
| 0.4834 | 71500 | 0.5966 | - | - | - |
| 0.4841 | 71600 | 0.6732 | - | - | - |
| 0.4848 | 71700 | 0.6082 | - | - | - |
| 0.4855 | 71800 | 0.611 | - | - | - |
| 0.4861 | 71900 | 0.6304 | - | - | - |
| 0.4868 | 72000 | 0.6341 | - | - | - |
| 0.4875 | 72100 | 0.6134 | - | - | - |
| 0.4882 | 72200 | 0.5944 | - | - | - |
| 0.4888 | 72300 | 0.6303 | - | - | - |
| 0.4895 | 72400 | 0.594 | - | - | - |
| 0.4902 | 72500 | 0.6315 | - | - | - |
| 0.4909 | 72600 | 0.5712 | - | - | - |
| 0.4915 | 72700 | 0.5829 | - | - | - |
| 0.4922 | 72800 | 0.6161 | - | - | - |
| 0.4929 | 72900 | 0.5878 | - | - | - |
| 0.4936 | 73000 | 0.6294 | - | - | - |
| 0.4942 | 73100 | 0.6111 | - | - | - |
| 0.4949 | 73200 | 0.5692 | - | - | - |
| 0.4956 | 73300 | 0.5736 | - | - | - |
| 0.4963 | 73400 | 0.6255 | - | - | - |
| 0.4970 | 73500 | 0.6148 | - | - | - |
| 0.4976 | 73600 | 0.5573 | - | - | - |
| 0.4983 | 73700 | 0.5809 | - | - | - |
| 0.4990 | 73800 | 0.6168 | - | - | - |
| 0.4997 | 73900 | 0.6424 | - | - | - |
| 0.5003 | 74000 | 0.6409 | - | - | - |
| 0.5010 | 74100 | 0.5661 | - | - | - |
| 0.5017 | 74200 | 0.6337 | - | - | - |
| 0.5024 | 74300 | 0.551 | - | - | - |
| 0.5030 | 74400 | 0.6262 | - | - | - |
| 0.5037 | 74500 | 0.6337 | - | - | - |
| 0.5044 | 74600 | 0.633 | - | - | - |
| 0.5051 | 74700 | 0.5337 | - | - | - |
| 0.5057 | 74800 | 0.5854 | - | - | - |
| 0.5064 | 74900 | 0.6169 | - | - | - |
| 0.5071 | 75000 | 0.6359 | 0.6160 | 0.7241 | - |
| 0.5078 | 75100 | 0.6374 | - | - | - |
| 0.5084 | 75200 | 0.6061 | - | - | - |
| 0.5091 | 75300 | 0.6369 | - | - | - |
| 0.5098 | 75400 | 0.6648 | - | - | - |
| 0.5105 | 75500 | 0.5873 | - | - | - |
| 0.5112 | 75600 | 0.5949 | - | - | - |
| 0.5118 | 75700 | 0.6224 | - | - | - |
| 0.5125 | 75800 | 0.6376 | - | - | - |
| 0.5132 | 75900 | 0.5902 | - | - | - |
| 0.5139 | 76000 | 0.6408 | - | - | - |
| 0.5145 | 76100 | 0.6021 | - | - | - |
| 0.5152 | 76200 | 0.5985 | - | - | - |
| 0.5159 | 76300 | 0.6502 | - | - | - |
| 0.5166 | 76400 | 0.5686 | - | - | - |
| 0.5172 | 76500 | 0.6252 | - | - | - |
| 0.5179 | 76600 | 0.6192 | - | - | - |
| 0.5186 | 76700 | 0.6058 | - | - | - |
| 0.5193 | 76800 | 0.6305 | - | - | - |
| 0.5199 | 76900 | 0.6343 | - | - | - |
| 0.5206 | 77000 | 0.5561 | - | - | - |
| 0.5213 | 77100 | 0.6145 | - | - | - |
| 0.5220 | 77200 | 0.6081 | - | - | - |
| 0.5226 | 77300 | 0.6396 | - | - | - |
| 0.5233 | 77400 | 0.5994 | - | - | - |
| 0.5240 | 77500 | 0.6493 | - | - | - |
| 0.5247 | 77600 | 0.6207 | - | - | - |
| 0.5254 | 77700 | 0.6138 | - | - | - |
| 0.5260 | 77800 | 0.713 | - | - | - |
| 0.5267 | 77900 | 0.5914 | - | - | - |
| 0.5274 | 78000 | 0.6569 | - | - | - |
| 0.5281 | 78100 | 0.6586 | - | - | - |
| 0.5287 | 78200 | 0.6452 | - | - | - |
| 0.5294 | 78300 | 0.5984 | - | - | - |
| 0.5301 | 78400 | 0.6117 | - | - | - |
| 0.5308 | 78500 | 0.6054 | - | - | - |
| 0.5314 | 78600 | 0.6085 | - | - | - |
| 0.5321 | 78700 | 0.6346 | - | - | - |
| 0.5328 | 78800 | 0.5873 | - | - | - |
| 0.5335 | 78900 | 0.6506 | - | - | - |
| 0.5341 | 79000 | 0.65 | - | - | - |
| 0.5348 | 79100 | 0.6223 | - | - | - |
| 0.5355 | 79200 | 0.6262 | - | - | - |
| 0.5362 | 79300 | 0.5406 | - | - | - |
| 0.5368 | 79400 | 0.5873 | - | - | - |
| 0.5375 | 79500 | 0.613 | - | - | - |
| 0.5382 | 79600 | 0.571 | - | - | - |
| 0.5389 | 79700 | 0.5856 | - | - | - |
| 0.5396 | 79800 | 0.5672 | - | - | - |
| 0.5402 | 79900 | 0.6027 | - | - | - |
| 0.5409 | 80000 | 0.6018 | 0.6046 | 0.7282 | - |
| 0.5416 | 80100 | 0.5906 | - | - | - |
| 0.5423 | 80200 | 0.5824 | - | - | - |
| 0.5429 | 80300 | 0.5971 | - | - | - |
| 0.5436 | 80400 | 0.6683 | - | - | - |
| 0.5443 | 80500 | 0.6331 | - | - | - |
| 0.5450 | 80600 | 0.6008 | - | - | - |
| 0.5456 | 80700 | 0.6628 | - | - | - |
| 0.5463 | 80800 | 0.5973 | - | - | - |
| 0.5470 | 80900 | 0.6765 | - | - | - |
| 0.5477 | 81000 | 0.6603 | - | - | - |
| 0.5483 | 81100 | 0.5987 | - | - | - |
| 0.5490 | 81200 | 0.5915 | - | - | - |
| 0.5497 | 81300 | 0.596 | - | - | - |
| 0.5504 | 81400 | 0.6053 | - | - | - |
| 0.5510 | 81500 | 0.6292 | - | - | - |
| 0.5517 | 81600 | 0.5678 | - | - | - |
| 0.5524 | 81700 | 0.6322 | - | - | - |
| 0.5531 | 81800 | 0.6004 | - | - | - |
| 0.5537 | 81900 | 0.6016 | - | - | - |
| 0.5544 | 82000 | 0.5989 | - | - | - |
| 0.5551 | 82100 | 0.6167 | - | - | - |
| 0.5558 | 82200 | 0.6094 | - | - | - |
| 0.5565 | 82300 | 0.6168 | - | - | - |
| 0.5571 | 82400 | 0.6085 | - | - | - |
| 0.5578 | 82500 | 0.6279 | - | - | - |
| 0.5585 | 82600 | 0.6032 | - | - | - |
| 0.5592 | 82700 | 0.5894 | - | - | - |
| 0.5598 | 82800 | 0.5738 | - | - | - |
| 0.5605 | 82900 | 0.675 | - | - | - |
| 0.5612 | 83000 | 0.5675 | - | - | - |
| 0.5619 | 83100 | 0.607 | - | - | - |
| 0.5625 | 83200 | 0.6119 | - | - | - |
| 0.5632 | 83300 | 0.6012 | - | - | - |
| 0.5639 | 83400 | 0.6348 | - | - | - |
| 0.5646 | 83500 | 0.5713 | - | - | - |
| 0.5652 | 83600 | 0.6091 | - | - | - |
| 0.5659 | 83700 | 0.5939 | - | - | - |
| 0.5666 | 83800 | 0.597 | - | - | - |
| 0.5673 | 83900 | 0.5814 | - | - | - |
| 0.5679 | 84000 | 0.656 | - | - | - |
| 0.5686 | 84100 | 0.5942 | - | - | - |
| 0.5693 | 84200 | 0.6431 | - | - | - |
| 0.5700 | 84300 | 0.5965 | - | - | - |
| 0.5707 | 84400 | 0.5977 | - | - | - |
| 0.5713 | 84500 | 0.6291 | - | - | - |
| 0.5720 | 84600 | 0.6457 | - | - | - |
| 0.5727 | 84700 | 0.637 | - | - | - |
| 0.5734 | 84800 | 0.5861 | - | - | - |
| 0.5740 | 84900 | 0.6334 | - | - | - |
| 0.5747 | 85000 | 0.6436 | 0.6067 | 0.7284 | - |
| 0.5754 | 85100 | 0.5756 | - | - | - |
| 0.5761 | 85200 | 0.6278 | - | - | - |
| 0.5767 | 85300 | 0.6198 | - | - | - |
| 0.5774 | 85400 | 0.5665 | - | - | - |
| 0.5781 | 85500 | 0.5766 | - | - | - |
| 0.5788 | 85600 | 0.6098 | - | - | - |
| 0.5794 | 85700 | 0.6054 | - | - | - |
| 0.5801 | 85800 | 0.6664 | - | - | - |
| 0.5808 | 85900 | 0.6086 | - | - | - |
| 0.5815 | 86000 | 0.6282 | - | - | - |
| 0.5821 | 86100 | 0.6393 | - | - | - |
| 0.5828 | 86200 | 0.5927 | - | - | - |
| 0.5835 | 86300 | 0.5718 | - | - | - |
| 0.5842 | 86400 | 0.6525 | - | - | - |
| 0.5849 | 86500 | 0.6253 | - | - | - |
| 0.5855 | 86600 | 0.6013 | - | - | - |
| 0.5862 | 86700 | 0.5895 | - | - | - |
| 0.5869 | 86800 | 0.6554 | - | - | - |
| 0.5876 | 86900 | 0.5854 | - | - | - |
| 0.5882 | 87000 | 0.5957 | - | - | - |
| 0.5889 | 87100 | 0.5893 | - | - | - |
| 0.5896 | 87200 | 0.5999 | - | - | - |
| 0.5903 | 87300 | 0.6045 | - | - | - |
| 0.5909 | 87400 | 0.5802 | - | - | - |
| 0.5916 | 87500 | 0.6172 | - | - | - |
| 0.5923 | 87600 | 0.5916 | - | - | - |
| 0.5930 | 87700 | 0.6331 | - | - | - |
| 0.5936 | 87800 | 0.6369 | - | - | - |
| 0.5943 | 87900 | 0.57 | - | - | - |
| 0.5950 | 88000 | 0.6162 | - | - | - |
| 0.5957 | 88100 | 0.5874 | - | - | - |
| 0.5963 | 88200 | 0.5545 | - | - | - |
| 0.5970 | 88300 | 0.6194 | - | - | - |
| 0.5977 | 88400 | 0.5856 | - | - | - |
| 0.5984 | 88500 | 0.6175 | - | - | - |
| 0.5990 | 88600 | 0.6045 | - | - | - |
| 0.5997 | 88700 | 0.6025 | - | - | - |
| 0.6004 | 88800 | 0.5826 | - | - | - |
| 0.6011 | 88900 | 0.6601 | - | - | - |
| 0.6018 | 89000 | 0.5775 | - | - | - |
| 0.6024 | 89100 | 0.6147 | - | - | - |
| 0.6031 | 89200 | 0.6425 | - | - | - |
| 0.6038 | 89300 | 0.6249 | - | - | - |
| 0.6045 | 89400 | 0.6077 | - | - | - |
| 0.6051 | 89500 | 0.6052 | - | - | - |
| 0.6058 | 89600 | 0.5881 | - | - | - |
| 0.6065 | 89700 | 0.6441 | - | - | - |
| 0.6072 | 89800 | 0.5686 | - | - | - |
| 0.6078 | 89900 | 0.6208 | - | - | - |
| 0.6085 | 90000 | 0.6262 | 0.5962 | 0.7290 | - |
| 0.6092 | 90100 | 0.5858 | - | - | - |
| 0.6099 | 90200 | 0.5632 | - | - | - |
| 0.6105 | 90300 | 0.6381 | - | - | - |
| 0.6112 | 90400 | 0.5926 | - | - | - |
| 0.6119 | 90500 | 0.6037 | - | - | - |
| 0.6126 | 90600 | 0.5921 | - | - | - |
| 0.6132 | 90700 | 0.6042 | - | - | - |
| 0.6139 | 90800 | 0.5751 | - | - | - |
| 0.6146 | 90900 | 0.6915 | - | - | - |
| 0.6153 | 91000 | 0.6356 | - | - | - |
| 0.6160 | 91100 | 0.5527 | - | - | - |
| 0.6166 | 91200 | 0.6945 | - | - | - |
| 0.6173 | 91300 | 0.5816 | - | - | - |
| 0.6180 | 91400 | 0.5905 | - | - | - |
| 0.6187 | 91500 | 0.5727 | - | - | - |
| 0.6193 | 91600 | 0.6347 | - | - | - |
| 0.6200 | 91700 | 0.6359 | - | - | - |
| 0.6207 | 91800 | 0.6003 | - | - | - |
| 0.6214 | 91900 | 0.578 | - | - | - |
| 0.6220 | 92000 | 0.5535 | - | - | - |
| 0.6227 | 92100 | 0.5671 | - | - | - |
| 0.6234 | 92200 | 0.5629 | - | - | - |
| 0.6241 | 92300 | 0.571 | - | - | - |
| 0.6247 | 92400 | 0.5791 | - | - | - |
| 0.6254 | 92500 | 0.6182 | - | - | - |
| 0.6261 | 92600 | 0.6103 | - | - | - |
| 0.6268 | 92700 | 0.5707 | - | - | - |
| 0.6274 | 92800 | 0.5786 | - | - | - |
| 0.6281 | 92900 | 0.554 | - | - | - |
| 0.6288 | 93000 | 0.5775 | - | - | - |
| 0.6295 | 93100 | 0.6026 | - | - | - |
| 0.6302 | 93200 | 0.5743 | - | - | - |
| 0.6308 | 93300 | 0.6418 | - | - | - |
| 0.6315 | 93400 | 0.5867 | - | - | - |
| 0.6322 | 93500 | 0.594 | - | - | - |
| 0.6329 | 93600 | 0.5203 | - | - | - |
| 0.6335 | 93700 | 0.5931 | - | - | - |
| 0.6342 | 93800 | 0.5703 | - | - | - |
| 0.6349 | 93900 | 0.5665 | - | - | - |
| 0.6356 | 94000 | 0.6185 | - | - | - |
| 0.6362 | 94100 | 0.6033 | - | - | - |
| 0.6369 | 94200 | 0.6003 | - | - | - |
| 0.6376 | 94300 | 0.61 | - | - | - |
| 0.6383 | 94400 | 0.6101 | - | - | - |
| 0.6389 | 94500 | 0.6051 | - | - | - |
| 0.6396 | 94600 | 0.5788 | - | - | - |
| 0.6403 | 94700 | 0.6017 | - | - | - |
| 0.6410 | 94800 | 0.6018 | - | - | - |
| 0.6416 | 94900 | 0.5726 | - | - | - |
| 0.6423 | 95000 | 0.594 | 0.5891 | 0.7249 | - |
| 0.6430 | 95100 | 0.5978 | - | - | - |
| 0.6437 | 95200 | 0.6216 | - | - | - |
| 0.6443 | 95300 | 0.6323 | - | - | - |
| 0.6450 | 95400 | 0.5357 | - | - | - |
| 0.6457 | 95500 | 0.5839 | - | - | - |
| 0.6464 | 95600 | 0.6459 | - | - | - |
| 0.6471 | 95700 | 0.5624 | - | - | - |
| 0.6477 | 95800 | 0.533 | - | - | - |
| 0.6484 | 95900 | 0.6307 | - | - | - |
| 0.6491 | 96000 | 0.616 | - | - | - |
| 0.6498 | 96100 | 0.6065 | - | - | - |
| 0.6504 | 96200 | 0.585 | - | - | - |
| 0.6511 | 96300 | 0.6208 | - | - | - |
| 0.6518 | 96400 | 0.6138 | - | - | - |
| 0.6525 | 96500 | 0.6185 | - | - | - |
| 0.6531 | 96600 | 0.6244 | - | - | - |
| 0.6538 | 96700 | 0.6085 | - | - | - |
| 0.6545 | 96800 | 0.6526 | - | - | - |
| 0.6552 | 96900 | 0.5471 | - | - | - |
| 0.6558 | 97000 | 0.6102 | - | - | - |
| 0.6565 | 97100 | 0.5853 | - | - | - |
| 0.6572 | 97200 | 0.6138 | - | - | - |
| 0.6579 | 97300 | 0.6025 | - | - | - |
| 0.6585 | 97400 | 0.6209 | - | - | - |
| 0.6592 | 97500 | 0.5849 | - | - | - |
| 0.6599 | 97600 | 0.5783 | - | - | - |
| 0.6606 | 97700 | 0.6042 | - | - | - |
| 0.6613 | 97800 | 0.5641 | - | - | - |
| 0.6619 | 97900 | 0.6084 | - | - | - |
| 0.6626 | 98000 | 0.5553 | - | - | - |
| 0.6633 | 98100 | 0.5948 | - | - | - |
| 0.6640 | 98200 | 0.5449 | - | - | - |
| 0.6646 | 98300 | 0.5889 | - | - | - |
| 0.6653 | 98400 | 0.6199 | - | - | - |
| 0.6660 | 98500 | 0.5621 | - | - | - |
| 0.6667 | 98600 | 0.5906 | - | - | - |
| 0.6673 | 98700 | 0.6085 | - | - | - |
| 0.6680 | 98800 | 0.5882 | - | - | - |
| 0.6687 | 98900 | 0.5827 | - | - | - |
| 0.6694 | 99000 | 0.5894 | - | - | - |
| 0.6700 | 99100 | 0.5856 | - | - | - |
| 0.6707 | 99200 | 0.5882 | - | - | - |
| 0.6714 | 99300 | 0.6242 | - | - | - |
| 0.6721 | 99400 | 0.5972 | - | - | - |
| 0.6727 | 99500 | 0.6286 | - | - | - |
| 0.6734 | 99600 | 0.6136 | - | - | - |
| 0.6741 | 99700 | 0.5609 | - | - | - |
| 0.6748 | 99800 | 0.5942 | - | - | - |
| 0.6755 | 99900 | 0.5529 | - | - | - |
| 0.6761 | 100000 | 0.6497 | 0.5823 | 0.7371 | - |
| 0.6768 | 100100 | 0.6292 | - | - | - |
| 0.6775 | 100200 | 0.5993 | - | - | - |
| 0.6782 | 100300 | 0.5609 | - | - | - |
| 0.6788 | 100400 | 0.578 | - | - | - |
| 0.6795 | 100500 | 0.634 | - | - | - |
| 0.6802 | 100600 | 0.6538 | - | - | - |
| 0.6809 | 100700 | 0.6005 | - | - | - |
| 0.6815 | 100800 | 0.6065 | - | - | - |
| 0.6822 | 100900 | 0.5853 | - | - | - |
| 0.6829 | 101000 | 0.6024 | - | - | - |
| 0.6836 | 101100 | 0.587 | - | - | - |
| 0.6842 | 101200 | 0.6135 | - | - | - |
| 0.6849 | 101300 | 0.6277 | - | - | - |
| 0.6856 | 101400 | 0.6031 | - | - | - |
| 0.6863 | 101500 | 0.6097 | - | - | - |
| 0.6869 | 101600 | 0.5853 | - | - | - |
| 0.6876 | 101700 | 0.5557 | - | - | - |
| 0.6883 | 101800 | 0.6153 | - | - | - |
| 0.6890 | 101900 | 0.6571 | - | - | - |
| 0.6897 | 102000 | 0.5962 | - | - | - |
| 0.6903 | 102100 | 0.6161 | - | - | - |
| 0.6910 | 102200 | 0.5817 | - | - | - |
| 0.6917 | 102300 | 0.617 | - | - | - |
| 0.6924 | 102400 | 0.5364 | - | - | - |
| 0.6930 | 102500 | 0.58 | - | - | - |
| 0.6937 | 102600 | 0.6076 | - | - | - |
| 0.6944 | 102700 | 0.5525 | - | - | - |
| 0.6951 | 102800 | 0.6226 | - | - | - |
| 0.6957 | 102900 | 0.6156 | - | - | - |
| 0.6964 | 103000 | 0.5889 | - | - | - |
| 0.6971 | 103100 | 0.5624 | - | - | - |
| 0.6978 | 103200 | 0.6526 | - | - | - |
| 0.6984 | 103300 | 0.5648 | - | - | - |
| 0.6991 | 103400 | 0.5939 | - | - | - |
| 0.6998 | 103500 | 0.5857 | - | - | - |
| 0.7005 | 103600 | 0.6231 | - | - | - |
| 0.7011 | 103700 | 0.5959 | - | - | - |
| 0.7018 | 103800 | 0.641 | - | - | - |
| 0.7025 | 103900 | 0.6118 | - | - | - |
| 0.7032 | 104000 | 0.6578 | - | - | - |
| 0.7038 | 104100 | 0.5524 | - | - | - |
| 0.7045 | 104200 | 0.5967 | - | - | - |
| 0.7052 | 104300 | 0.586 | - | - | - |
| 0.7059 | 104400 | 0.5776 | - | - | - |
| 0.7066 | 104500 | 0.5944 | - | - | - |
| 0.7072 | 104600 | 0.5675 | - | - | - |
| 0.7079 | 104700 | 0.5548 | - | - | - |
| 0.7086 | 104800 | 0.6153 | - | - | - |
| 0.7093 | 104900 | 0.5992 | - | - | - |
| 0.7099 | 105000 | 0.5789 | 0.5853 | 0.7318 | - |
| 0.7106 | 105100 | 0.5879 | - | - | - |
| 0.7113 | 105200 | 0.5815 | - | - | - |
| 0.7120 | 105300 | 0.5388 | - | - | - |
| 0.7126 | 105400 | 0.6104 | - | - | - |
| 0.7133 | 105500 | 0.586 | - | - | - |
| 0.7140 | 105600 | 0.5547 | - | - | - |
| 0.7147 | 105700 | 0.5529 | - | - | - |
| 0.7153 | 105800 | 0.5917 | - | - | - |
| 0.7160 | 105900 | 0.5689 | - | - | - |
| 0.7167 | 106000 | 0.6083 | - | - | - |
| 0.7174 | 106100 | 0.626 | - | - | - |
| 0.7180 | 106200 | 0.6076 | - | - | - |
| 0.7187 | 106300 | 0.5706 | - | - | - |
| 0.7194 | 106400 | 0.5976 | - | - | - |
| 0.7201 | 106500 | 0.5964 | - | - | - |
| 0.7208 | 106600 | 0.5841 | - | - | - |
| 0.7214 | 106700 | 0.5973 | - | - | - |
| 0.7221 | 106800 | 0.5978 | - | - | - |
| 0.7228 | 106900 | 0.5965 | - | - | - |
| 0.7235 | 107000 | 0.5934 | - | - | - |
| 0.7241 | 107100 | 0.5361 | - | - | - |
| 0.7248 | 107200 | 0.6005 | - | - | - |
| 0.7255 | 107300 | 0.5367 | - | - | - |
| 0.7262 | 107400 | 0.5863 | - | - | - |
| 0.7268 | 107500 | 0.5799 | - | - | - |
| 0.7275 | 107600 | 0.6288 | - | - | - |
| 0.7282 | 107700 | 0.5655 | - | - | - |
| 0.7289 | 107800 | 0.6095 | - | - | - |
| 0.7295 | 107900 | 0.5643 | - | - | - |
| 0.7302 | 108000 | 0.5704 | - | - | - |
| 0.7309 | 108100 | 0.5481 | - | - | - |
| 0.7316 | 108200 | 0.588 | - | - | - |
| 0.7322 | 108300 | 0.6065 | - | - | - |
| 0.7329 | 108400 | 0.5752 | - | - | - |
| 0.7336 | 108500 | 0.6316 | - | - | - |
| 0.7343 | 108600 | 0.5849 | - | - | - |
| 0.7350 | 108700 | 0.5968 | - | - | - |
| 0.7356 | 108800 | 0.6056 | - | - | - |
| 0.7363 | 108900 | 0.5976 | - | - | - |
| 0.7370 | 109000 | 0.6275 | - | - | - |
| 0.7377 | 109100 | 0.5933 | - | - | - |
| 0.7383 | 109200 | 0.5939 | - | - | - |
| 0.7390 | 109300 | 0.6135 | - | - | - |
| 0.7397 | 109400 | 0.5431 | - | - | - |
| 0.7404 | 109500 | 0.6265 | - | - | - |
| 0.7410 | 109600 | 0.6279 | - | - | - |
| 0.7417 | 109700 | 0.5668 | - | - | - |
| 0.7424 | 109800 | 0.5964 | - | - | - |
| 0.7431 | 109900 | 0.56 | - | - | - |
| 0.7437 | 110000 | 0.6061 | 0.5877 | 0.7244 | - |
| 0.7444 | 110100 | 0.6355 | - | - | - |
| 0.7451 | 110200 | 0.5443 | - | - | - |
| 0.7458 | 110300 | 0.6115 | - | - | - |
| 0.7464 | 110400 | 0.5828 | - | - | - |
| 0.7471 | 110500 | 0.598 | - | - | - |
| 0.7478 | 110600 | 0.572 | - | - | - |
| 0.7485 | 110700 | 0.611 | - | - | - |
| 0.7491 | 110800 | 0.5725 | - | - | - |
| 0.7498 | 110900 | 0.5722 | - | - | - |
| 0.7505 | 111000 | 0.5491 | - | - | - |
| 0.7512 | 111100 | 0.5647 | - | - | - |
| 0.7519 | 111200 | 0.6111 | - | - | - |
| 0.7525 | 111300 | 0.5597 | - | - | - |
| 0.7532 | 111400 | 0.5547 | - | - | - |
| 0.7539 | 111500 | 0.5672 | - | - | - |
| 0.7546 | 111600 | 0.5972 | - | - | - |
| 0.7552 | 111700 | 0.6053 | - | - | - |
| 0.7559 | 111800 | 0.5259 | - | - | - |
| 0.7566 | 111900 | 0.541 | - | - | - |
| 0.7573 | 112000 | 0.5516 | - | - | - |
| 0.7579 | 112100 | 0.5579 | - | - | - |
| 0.7586 | 112200 | 0.5843 | - | - | - |
| 0.7593 | 112300 | 0.6113 | - | - | - |
| 0.7600 | 112400 | 0.597 | - | - | - |
| 0.7606 | 112500 | 0.5951 | - | - | - |
| 0.7613 | 112600 | 0.5642 | - | - | - |
| 0.7620 | 112700 | 0.5787 | - | - | - |
| 0.7627 | 112800 | 0.6042 | - | - | - |
| 0.7633 | 112900 | 0.5876 | - | - | - |
| 0.7640 | 113000 | 0.6343 | - | - | - |
| 0.7647 | 113100 | 0.5725 | - | - | - |
| 0.7654 | 113200 | 0.5674 | - | - | - |
| 0.7661 | 113300 | 0.5957 | - | - | - |
| 0.7667 | 113400 | 0.6699 | - | - | - |
| 0.7674 | 113500 | 0.5619 | - | - | - |
| 0.7681 | 113600 | 0.5769 | - | - | - |
| 0.7688 | 113700 | 0.6329 | - | - | - |
| 0.7694 | 113800 | 0.6609 | - | - | - |
| 0.7701 | 113900 | 0.5893 | - | - | - |
| 0.7708 | 114000 | 0.5679 | - | - | - |
| 0.7715 | 114100 | 0.6012 | - | - | - |
| 0.7721 | 114200 | 0.5386 | - | - | - |
| 0.7728 | 114300 | 0.6282 | - | - | - |
| 0.7735 | 114400 | 0.5384 | - | - | - |
| 0.7742 | 114500 | 0.6082 | - | - | - |
| 0.7748 | 114600 | 0.5728 | - | - | - |
| 0.7755 | 114700 | 0.6041 | - | - | - |
| 0.7762 | 114800 | 0.5628 | - | - | - |
| 0.7769 | 114900 | 0.5847 | - | - | - |
| 0.7775 | 115000 | 0.5735 | 0.5785 | 0.7370 | - |
| 0.7782 | 115100 | 0.586 | - | - | - |
| 0.7789 | 115200 | 0.5692 | - | - | - |
| 0.7796 | 115300 | 0.6119 | - | - | - |
| 0.7803 | 115400 | 0.6128 | - | - | - |
| 0.7809 | 115500 | 0.6094 | - | - | - |
| 0.7816 | 115600 | 0.5753 | - | - | - |
| 0.7823 | 115700 | 0.5547 | - | - | - |
| 0.7830 | 115800 | 0.6574 | - | - | - |
| 0.7836 | 115900 | 0.5588 | - | - | - |
| 0.7843 | 116000 | 0.5797 | - | - | - |
| 0.7850 | 116100 | 0.5945 | - | - | - |
| 0.7857 | 116200 | 0.6008 | - | - | - |
| 0.7863 | 116300 | 0.6642 | - | - | - |
| 0.7870 | 116400 | 0.6693 | - | - | - |
| 0.7877 | 116500 | 0.5889 | - | - | - |
| 0.7884 | 116600 | 0.5822 | - | - | - |
| 0.7890 | 116700 | 0.6038 | - | - | - |
| 0.7897 | 116800 | 0.5356 | - | - | - |
| 0.7904 | 116900 | 0.5539 | - | - | - |
| 0.7911 | 117000 | 0.585 | - | - | - |
| 0.7917 | 117100 | 0.5612 | - | - | - |
| 0.7924 | 117200 | 0.5776 | - | - | - |
| 0.7931 | 117300 | 0.5997 | - | - | - |
| 0.7938 | 117400 | 0.5788 | - | - | - |
| 0.7945 | 117500 | 0.5468 | - | - | - |
| 0.7951 | 117600 | 0.6095 | - | - | - |
| 0.7958 | 117700 | 0.5922 | - | - | - |
| 0.7965 | 117800 | 0.5787 | - | - | - |
| 0.7972 | 117900 | 0.514 | - | - | - |
| 0.7978 | 118000 | 0.5866 | - | - | - |
| 0.7985 | 118100 | 0.5878 | - | - | - |
| 0.7992 | 118200 | 0.6085 | - | - | - |
| 0.7999 | 118300 | 0.608 | - | - | - |
| 0.8005 | 118400 | 0.6073 | - | - | - |
| 0.8012 | 118500 | 0.6014 | - | - | - |
| 0.8019 | 118600 | 0.6112 | - | - | - |
| 0.8026 | 118700 | 0.6029 | - | - | - |
| 0.8032 | 118800 | 0.6066 | - | - | - |
| 0.8039 | 118900 | 0.5594 | - | - | - |
| 0.8046 | 119000 | 0.5844 | - | - | - |
| 0.8053 | 119100 | 0.5943 | - | - | - |
| 0.8059 | 119200 | 0.5646 | - | - | - |
| 0.8066 | 119300 | 0.6438 | - | - | - |
| 0.8073 | 119400 | 0.5454 | - | - | - |
| 0.8080 | 119500 | 0.5899 | - | - | - |
| 0.8086 | 119600 | 0.5652 | - | - | - |
| 0.8093 | 119700 | 0.578 | - | - | - |
| 0.8100 | 119800 | 0.613 | - | - | - |
| 0.8107 | 119900 | 0.5346 | - | - | - |
| 0.8114 | 120000 | 0.6038 | 0.5812 | 0.7398 | - |
| 0.8120 | 120100 | 0.5886 | - | - | - |
| 0.8127 | 120200 | 0.5301 | - | - | - |
| 0.8134 | 120300 | 0.6578 | - | - | - |
| 0.8141 | 120400 | 0.6005 | - | - | - |
| 0.8147 | 120500 | 0.549 | - | - | - |
| 0.8154 | 120600 | 0.6004 | - | - | - |
| 0.8161 | 120700 | 0.5843 | - | - | - |
| 0.8168 | 120800 | 0.6028 | - | - | - |
| 0.8174 | 120900 | 0.6072 | - | - | - |
| 0.8181 | 121000 | 0.5894 | - | - | - |
| 0.8188 | 121100 | 0.5876 | - | - | - |
| 0.8195 | 121200 | 0.6424 | - | - | - |
| 0.8201 | 121300 | 0.575 | - | - | - |
| 0.8208 | 121400 | 0.5865 | - | - | - |
| 0.8215 | 121500 | 0.5518 | - | - | - |
| 0.8222 | 121600 | 0.6161 | - | - | - |
| 0.8228 | 121700 | 0.5586 | - | - | - |
| 0.8235 | 121800 | 0.5647 | - | - | - |
| 0.8242 | 121900 | 0.5604 | - | - | - |
| 0.8249 | 122000 | 0.5442 | - | - | - |
| 0.8256 | 122100 | 0.5922 | - | - | - |
| 0.8262 | 122200 | 0.5978 | - | - | - |
| 0.8269 | 122300 | 0.5598 | - | - | - |
| 0.8276 | 122400 | 0.6207 | - | - | - |
| 0.8283 | 122500 | 0.6166 | - | - | - |
| 0.8289 | 122600 | 0.5559 | - | - | - |
| 0.8296 | 122700 | 0.5559 | - | - | - |
| 0.8303 | 122800 | 0.5789 | - | - | - |
| 0.8310 | 122900 | 0.5594 | - | - | - |
| 0.8316 | 123000 | 0.6149 | - | - | - |
| 0.8323 | 123100 | 0.5921 | - | - | - |
| 0.8330 | 123200 | 0.6191 | - | - | - |
| 0.8337 | 123300 | 0.5552 | - | - | - |
| 0.8343 | 123400 | 0.5511 | - | - | - |
| 0.8350 | 123500 | 0.5625 | - | - | - |
| 0.8357 | 123600 | 0.6132 | - | - | - |
| 0.8364 | 123700 | 0.611 | - | - | - |
| 0.8370 | 123800 | 0.5488 | - | - | - |
| 0.8377 | 123900 | 0.5942 | - | - | - |
| 0.8384 | 124000 | 0.653 | - | - | - |
| 0.8391 | 124100 | 0.595 | - | - | - |
| 0.8398 | 124200 | 0.5888 | - | - | - |
| 0.8404 | 124300 | 0.638 | - | - | - |
| 0.8411 | 124400 | 0.6043 | - | - | - |
| 0.8418 | 124500 | 0.6013 | - | - | - |
| 0.8425 | 124600 | 0.5708 | - | - | - |
| 0.8431 | 124700 | 0.5368 | - | - | - |
| 0.8438 | 124800 | 0.6107 | - | - | - |
| 0.8445 | 124900 | 0.542 | - | - | - |
| 0.8452 | 125000 | 0.5732 | 0.5803 | 0.7451 | - |
| 0.8458 | 125100 | 0.5881 | - | - | - |
| 0.8465 | 125200 | 0.5454 | - | - | - |
| 0.8472 | 125300 | 0.6306 | - | - | - |
| 0.8479 | 125400 | 0.543 | - | - | - |
| 0.8485 | 125500 | 0.571 | - | - | - |
| 0.8492 | 125600 | 0.5825 | - | - | - |
| 0.8499 | 125700 | 0.5916 | - | - | - |
| 0.8506 | 125800 | 0.5481 | - | - | - |
| 0.8512 | 125900 | 0.5795 | - | - | - |
| 0.8519 | 126000 | 0.5811 | - | - | - |
| 0.8526 | 126100 | 0.5849 | - | - | - |
| 0.8533 | 126200 | 0.5474 | - | - | - |
| 0.8539 | 126300 | 0.5779 | - | - | - |
| 0.8546 | 126400 | 0.5853 | - | - | - |
| 0.8553 | 126500 | 0.575 | - | - | - |
| 0.8560 | 126600 | 0.5548 | - | - | - |
| 0.8567 | 126700 | 0.5429 | - | - | - |
| 0.8573 | 126800 | 0.5918 | - | - | - |
| 0.8580 | 126900 | 0.61 | - | - | - |
| 0.8587 | 127000 | 0.5896 | - | - | - |
| 0.8594 | 127100 | 0.5677 | - | - | - |
| 0.8600 | 127200 | 0.5705 | - | - | - |
| 0.8607 | 127300 | 0.5504 | - | - | - |
| 0.8614 | 127400 | 0.5399 | - | - | - |
| 0.8621 | 127500 | 0.5381 | - | - | - |
| 0.8627 | 127600 | 0.5228 | - | - | - |
| 0.8634 | 127700 | 0.602 | - | - | - |
| 0.8641 | 127800 | 0.6279 | - | - | - |
| 0.8648 | 127900 | 0.5489 | - | - | - |
| 0.8654 | 128000 | 0.5514 | - | - | - |
| 0.8661 | 128100 | 0.6084 | - | - | - |
| 0.8668 | 128200 | 0.5623 | - | - | - |
| 0.8675 | 128300 | 0.5566 | - | - | - |
| 0.8681 | 128400 | 0.5585 | - | - | - |
| 0.8688 | 128500 | 0.572 | - | - | - |
| 0.8695 | 128600 | 0.5958 | - | - | - |
| 0.8702 | 128700 | 0.5855 | - | - | - |
| 0.8709 | 128800 | 0.5529 | - | - | - |
| 0.8715 | 128900 | 0.5542 | - | - | - |
| 0.8722 | 129000 | 0.5765 | - | - | - |
| 0.8729 | 129100 | 0.6091 | - | - | - |
| 0.8736 | 129200 | 0.5828 | - | - | - |
| 0.8742 | 129300 | 0.5803 | - | - | - |
| 0.8749 | 129400 | 0.5688 | - | - | - |
| 0.8756 | 129500 | 0.593 | - | - | - |
| 0.8763 | 129600 | 0.5479 | - | - | - |
| 0.8769 | 129700 | 0.5336 | - | - | - |
| 0.8776 | 129800 | 0.5636 | - | - | - |
| 0.8783 | 129900 | 0.6156 | - | - | - |
| 0.8790 | 130000 | 0.5526 | 0.5621 | 0.7421 | - |
| 0.8796 | 130100 | 0.5444 | - | - | - |
| 0.8803 | 130200 | 0.5919 | - | - | - |
| 0.8810 | 130300 | 0.5816 | - | - | - |
| 0.8817 | 130400 | 0.5514 | - | - | - |
| 0.8823 | 130500 | 0.5948 | - | - | - |
| 0.8830 | 130600 | 0.6063 | - | - | - |
| 0.8837 | 130700 | 0.5105 | - | - | - |
| 0.8844 | 130800 | 0.5637 | - | - | - |
| 0.8851 | 130900 | 0.5382 | - | - | - |
| 0.8857 | 131000 | 0.5775 | - | - | - |
| 0.8864 | 131100 | 0.5647 | - | - | - |
| 0.8871 | 131200 | 0.5846 | - | - | - |
| 0.8878 | 131300 | 0.6211 | - | - | - |
| 0.8884 | 131400 | 0.5572 | - | - | - |
| 0.8891 | 131500 | 0.548 | - | - | - |
| 0.8898 | 131600 | 0.599 | - | - | - |
| 0.8905 | 131700 | 0.5746 | - | - | - |
| 0.8911 | 131800 | 0.5644 | - | - | - |
| 0.8918 | 131900 | 0.5848 | - | - | - |
| 0.8925 | 132000 | 0.5476 | - | - | - |
| 0.8932 | 132100 | 0.6046 | - | - | - |
| 0.8938 | 132200 | 0.5839 | - | - | - |
| 0.8945 | 132300 | 0.5945 | - | - | - |
| 0.8952 | 132400 | 0.5793 | - | - | - |
| 0.8959 | 132500 | 0.5561 | - | - | - |
| 0.8965 | 132600 | 0.591 | - | - | - |
| 0.8972 | 132700 | 0.5937 | - | - | - |
| 0.8979 | 132800 | 0.5432 | - | - | - |
| 0.8986 | 132900 | 0.5309 | - | - | - |
| 0.8993 | 133000 | 0.5357 | - | - | - |
| 0.8999 | 133100 | 0.5701 | - | - | - |
| 0.9006 | 133200 | 0.5971 | - | - | - |
| 0.9013 | 133300 | 0.5637 | - | - | - |
| 0.9020 | 133400 | 0.5646 | - | - | - |
| 0.9026 | 133500 | 0.5807 | - | - | - |
| 0.9033 | 133600 | 0.5386 | - | - | - |
| 0.9040 | 133700 | 0.5734 | - | - | - |
| 0.9047 | 133800 | 0.5247 | - | - | - |
| 0.9053 | 133900 | 0.5573 | - | - | - |
| 0.9060 | 134000 | 0.6363 | - | - | - |
| 0.9067 | 134100 | 0.6039 | - | - | - |
| 0.9074 | 134200 | 0.5799 | - | - | - |
| 0.9080 | 134300 | 0.589 | - | - | - |
| 0.9087 | 134400 | 0.6278 | - | - | - |
| 0.9094 | 134500 | 0.6219 | - | - | - |
| 0.9101 | 134600 | 0.5737 | - | - | - |
| 0.9107 | 134700 | 0.5468 | - | - | - |
| 0.9114 | 134800 | 0.5729 | - | - | - |
| 0.9121 | 134900 | 0.5563 | - | - | - |
| 0.9128 | 135000 | 0.5877 | 0.5689 | 0.7374 | - |
| 0.9134 | 135100 | 0.5632 | - | - | - |
| 0.9141 | 135200 | 0.5643 | - | - | - |
| 0.9148 | 135300 | 0.569 | - | - | - |
| 0.9155 | 135400 | 0.5753 | - | - | - |
| 0.9162 | 135500 | 0.5946 | - | - | - |
| 0.9168 | 135600 | 0.6021 | - | - | - |
| 0.9175 | 135700 | 0.5284 | - | - | - |
| 0.9182 | 135800 | 0.5633 | - | - | - |
| 0.9189 | 135900 | 0.5953 | - | - | - |
| 0.9195 | 136000 | 0.5964 | - | - | - |
| 0.9202 | 136100 | 0.5766 | - | - | - |
| 0.9209 | 136200 | 0.5626 | - | - | - |
| 0.9216 | 136300 | 0.5356 | - | - | - |
| 0.9222 | 136400 | 0.5728 | - | - | - |
| 0.9229 | 136500 | 0.6072 | - | - | - |
| 0.9236 | 136600 | 0.5217 | - | - | - |
| 0.9243 | 136700 | 0.5333 | - | - | - |
| 0.9249 | 136800 | 0.5603 | - | - | - |
| 0.9256 | 136900 | 0.5838 | - | - | - |
| 0.9263 | 137000 | 0.605 | - | - | - |
| 0.9270 | 137100 | 0.5549 | - | - | - |
| 0.9276 | 137200 | 0.5821 | - | - | - |
| 0.9283 | 137300 | 0.6145 | - | - | - |
| 0.9290 | 137400 | 0.5537 | - | - | - |
| 0.9297 | 137500 | 0.5394 | - | - | - |
| 0.9304 | 137600 | 0.5269 | - | - | - |
| 0.9310 | 137700 | 0.5888 | - | - | - |
| 0.9317 | 137800 | 0.5546 | - | - | - |
| 0.9324 | 137900 | 0.5634 | - | - | - |
| 0.9331 | 138000 | 0.5666 | - | - | - |
| 0.9337 | 138100 | 0.5502 | - | - | - |
| 0.9344 | 138200 | 0.5901 | - | - | - |
| 0.9351 | 138300 | 0.6067 | - | - | - |
| 0.9358 | 138400 | 0.5646 | - | - | - |
| 0.9364 | 138500 | 0.5516 | - | - | - |
| 0.9371 | 138600 | 0.5607 | - | - | - |
| 0.9378 | 138700 | 0.5544 | - | - | - |
| 0.9385 | 138800 | 0.5488 | - | - | - |
| 0.9391 | 138900 | 0.5658 | - | - | - |
| 0.9398 | 139000 | 0.5843 | - | - | - |
| 0.9405 | 139100 | 0.5226 | - | - | - |
| 0.9412 | 139200 | 0.5316 | - | - | - |
| 0.9418 | 139300 | 0.5717 | - | - | - |
| 0.9425 | 139400 | 0.5237 | - | - | - |
| 0.9432 | 139500 | 0.5836 | - | - | - |
| 0.9439 | 139600 | 0.5545 | - | - | - |
| 0.9446 | 139700 | 0.6058 | - | - | - |
| 0.9452 | 139800 | 0.5276 | - | - | - |
| 0.9459 | 139900 | 0.5628 | - | - | - |
| 0.9466 | 140000 | 0.5496 | 0.5703 | 0.7408 | - |
| 0.9473 | 140100 | 0.6136 | - | - | - |
| 0.9479 | 140200 | 0.6013 | - | - | - |
| 0.9486 | 140300 | 0.5359 | - | - | - |
| 0.9493 | 140400 | 0.5664 | - | - | - |
| 0.9500 | 140500 | 0.592 | - | - | - |
| 0.9506 | 140600 | 0.5637 | - | - | - |
| 0.9513 | 140700 | 0.5751 | - | - | - |
| 0.9520 | 140800 | 0.5819 | - | - | - |
| 0.9527 | 140900 | 0.5459 | - | - | - |
| 0.9533 | 141000 | 0.591 | - | - | - |
| 0.9540 | 141100 | 0.5685 | - | - | - |
| 0.9547 | 141200 | 0.5809 | - | - | - |
| 0.9554 | 141300 | 0.5362 | - | - | - |
| 0.9560 | 141400 | 0.5502 | - | - | - |
| 0.9567 | 141500 | 0.5653 | - | - | - |
| 0.9574 | 141600 | 0.557 | - | - | - |
| 0.9581 | 141700 | 0.5787 | - | - | - |
| 0.9587 | 141800 | 0.6126 | - | - | - |
| 0.9594 | 141900 | 0.5843 | - | - | - |
| 0.9601 | 142000 | 0.5397 | - | - | - |
| 0.9608 | 142100 | 0.5965 | - | - | - |
| 0.9615 | 142200 | 0.5748 | - | - | - |
| 0.9621 | 142300 | 0.5413 | - | - | - |
| 0.9628 | 142400 | 0.5295 | - | - | - |
| 0.9635 | 142500 | 0.6381 | - | - | - |
| 0.9642 | 142600 | 0.6071 | - | - | - |
| 0.9648 | 142700 | 0.5318 | - | - | - |
| 0.9655 | 142800 | 0.5855 | - | - | - |
| 0.9662 | 142900 | 0.6057 | - | - | - |
| 0.9669 | 143000 | 0.5785 | - | - | - |
| 0.9675 | 143100 | 0.5503 | - | - | - |
| 0.9682 | 143200 | 0.6102 | - | - | - |
| 0.9689 | 143300 | 0.5569 | - | - | - |
| 0.9696 | 143400 | 0.6124 | - | - | - |
| 0.9702 | 143500 | 0.5796 | - | - | - |
| 0.9709 | 143600 | 0.5253 | - | - | - |
| 0.9716 | 143700 | 0.5768 | - | - | - |
| 0.9723 | 143800 | 0.5543 | - | - | - |
| 0.9729 | 143900 | 0.5237 | - | - | - |
| 0.9736 | 144000 | 0.5858 | - | - | - |
| 0.9743 | 144100 | 0.5876 | - | - | - |
| 0.9750 | 144200 | 0.5428 | - | - | - |
| 0.9757 | 144300 | 0.5742 | - | - | - |
| 0.9763 | 144400 | 0.5611 | - | - | - |
| 0.9770 | 144500 | 0.6098 | - | - | - |
| 0.9777 | 144600 | 0.5868 | - | - | - |
| 0.9784 | 144700 | 0.5605 | - | - | - |
| 0.9790 | 144800 | 0.5429 | - | - | - |
| 0.9797 | 144900 | 0.5629 | - | - | - |
| 0.9804 | 145000 | 0.5973 | 0.5597 | 0.7456 | - |
| 0.9811 | 145100 | 0.5709 | - | - | - |
| 0.9817 | 145200 | 0.5527 | - | - | - |
| 0.9824 | 145300 | 0.5568 | - | - | - |
| 0.9831 | 145400 | 0.579 | - | - | - |
| 0.9838 | 145500 | 0.5927 | - | - | - |
| 0.9844 | 145600 | 0.55 | - | - | - |
| 0.9851 | 145700 | 0.5637 | - | - | - |
| 0.9858 | 145800 | 0.57 | - | - | - |
| 0.9865 | 145900 | 0.5708 | - | - | - |
| 0.9871 | 146000 | 0.5338 | - | - | - |
| 0.9878 | 146100 | 0.5808 | - | - | - |
| 0.9885 | 146200 | 0.5727 | - | - | - |
| 0.9892 | 146300 | 0.521 | - | - | - |
| 0.9899 | 146400 | 0.6102 | - | - | - |
| 0.9905 | 146500 | 0.5758 | - | - | - |
| 0.9912 | 146600 | 0.6229 | - | - | - |
| 0.9919 | 146700 | 0.5775 | - | - | - |
| 0.9926 | 146800 | 0.5339 | - | - | - |
| 0.9932 | 146900 | 0.5915 | - | - | - |
| 0.9939 | 147000 | 0.5699 | - | - | - |
| 0.9946 | 147100 | 0.5218 | - | - | - |
| 0.9953 | 147200 | 0.6229 | - | - | - |
| 0.9959 | 147300 | 0.5422 | - | - | - |
| 0.9966 | 147400 | 0.5498 | - | - | - |
| 0.9973 | 147500 | 0.5423 | - | - | - |
| 0.9980 | 147600 | 0.581 | - | - | - |
| 0.9986 | 147700 | 0.5645 | - | - | - |
| 0.9993 | 147800 | 0.5689 | - | - | - |
| 1.0000 | 147900 | 0.6141 | - | - | - |
| 1.0007 | 148000 | 0.5931 | - | - | - |
| 1.0013 | 148100 | 0.5535 | - | - | - |
| 1.0020 | 148200 | 0.5627 | - | - | - |
| 1.0027 | 148300 | 0.5359 | - | - | - |
| 1.0034 | 148400 | 0.5292 | - | - | - |
| 1.0041 | 148500 | 0.5492 | - | - | - |
| 1.0047 | 148600 | 0.6333 | - | - | - |
| 1.0054 | 148700 | 0.5251 | - | - | - |
| 1.0061 | 148800 | 0.6007 | - | - | - |
| 1.0068 | 148900 | 0.519 | - | - | - |
| 1.0074 | 149000 | 0.5598 | - | - | - |
| 1.0081 | 149100 | 0.5092 | - | - | - |
| 1.0088 | 149200 | 0.5574 | - | - | - |
| 1.0095 | 149300 | 0.5367 | - | - | - |
| 1.0101 | 149400 | 0.5998 | - | - | - |
| 1.0108 | 149500 | 0.5309 | - | - | - |
| 1.0115 | 149600 | 0.5655 | - | - | - |
| 1.0122 | 149700 | 0.5077 | - | - | - |
| 1.0128 | 149800 | 0.5394 | - | - | - |
| 1.0135 | 149900 | 0.5588 | - | - | - |
| 1.0142 | 150000 | 0.5825 | 0.5571 | 0.7405 | - |
| 1.0149 | 150100 | 0.5625 | - | - | - |
| 1.0155 | 150200 | 0.5948 | - | - | - |
| 1.0162 | 150300 | 0.5803 | - | - | - |
| 1.0169 | 150400 | 0.5913 | - | - | - |
| 1.0176 | 150500 | 0.5738 | - | - | - |
| 1.0182 | 150600 | 0.5224 | - | - | - |
| 1.0189 | 150700 | 0.5533 | - | - | - |
| 1.0196 | 150800 | 0.6178 | - | - | - |
| 1.0203 | 150900 | 0.5339 | - | - | - |
| 1.0210 | 151000 | 0.5251 | - | - | - |
| 1.0216 | 151100 | 0.591 | - | - | - |
| 1.0223 | 151200 | 0.5894 | - | - | - |
| 1.0230 | 151300 | 0.5544 | - | - | - |
| 1.0237 | 151400 | 0.5625 | - | - | - |
| 1.0243 | 151500 | 0.529 | - | - | - |
| 1.0250 | 151600 | 0.5158 | - | - | - |
| 1.0257 | 151700 | 0.5695 | - | - | - |
| 1.0264 | 151800 | 0.5773 | - | - | - |
| 1.0270 | 151900 | 0.532 | - | - | - |
| 1.0277 | 152000 | 0.5236 | - | - | - |
| 1.0284 | 152100 | 0.5429 | - | - | - |
| 1.0291 | 152200 | 0.5774 | - | - | - |
| 1.0297 | 152300 | 0.5734 | - | - | - |
| 1.0304 | 152400 | 0.5366 | - | - | - |
| 1.0311 | 152500 | 0.5817 | - | - | - |
| 1.0318 | 152600 | 0.6242 | - | - | - |
| 1.0324 | 152700 | 0.5737 | - | - | - |
| 1.0331 | 152800 | 0.5304 | - | - | - |
| 1.0338 | 152900 | 0.5344 | - | - | - |
| 1.0345 | 153000 | 0.5551 | - | - | - |
| 1.0352 | 153100 | 0.5626 | - | - | - |
| 1.0358 | 153200 | 0.5995 | - | - | - |
| 1.0365 | 153300 | 0.5674 | - | - | - |
| 1.0372 | 153400 | 0.6215 | - | - | - |
| 1.0379 | 153500 | 0.5527 | - | - | - |
| 1.0385 | 153600 | 0.5343 | - | - | - |
| 1.0392 | 153700 | 0.5977 | - | - | - |
| 1.0399 | 153800 | 0.5779 | - | - | - |
| 1.0406 | 153900 | 0.5175 | - | - | - |
| 1.0412 | 154000 | 0.6385 | - | - | - |
| 1.0419 | 154100 | 0.5362 | - | - | - |
| 1.0426 | 154200 | 0.5775 | - | - | - |
| 1.0433 | 154300 | 0.5637 | - | - | - |
| 1.0439 | 154400 | 0.5464 | - | - | - |
| 1.0446 | 154500 | 0.5803 | - | - | - |
| 1.0453 | 154600 | 0.5343 | - | - | - |
| 1.0460 | 154700 | 0.5492 | - | - | - |
| 1.0466 | 154800 | 0.5484 | - | - | - |
| 1.0473 | 154900 | 0.5358 | - | - | - |
| 1.0480 | 155000 | 0.5792 | 0.5546 | 0.7406 | - |
| 1.0487 | 155100 | 0.5966 | - | - | - |
| 1.0494 | 155200 | 0.579 | - | - | - |
| 1.0500 | 155300 | 0.5505 | - | - | - |
| 1.0507 | 155400 | 0.5519 | - | - | - |
| 1.0514 | 155500 | 0.5893 | - | - | - |
| 1.0521 | 155600 | 0.5946 | - | - | - |
| 1.0527 | 155700 | 0.5467 | - | - | - |
| 1.0534 | 155800 | 0.5249 | - | - | - |
| 1.0541 | 155900 | 0.5478 | - | - | - |
| 1.0548 | 156000 | 0.5596 | - | - | - |
| 1.0554 | 156100 | 0.518 | - | - | - |
| 1.0561 | 156200 | 0.5749 | - | - | - |
| 1.0568 | 156300 | 0.5189 | - | - | - |
| 1.0575 | 156400 | 0.5862 | - | - | - |
| 1.0581 | 156500 | 0.5523 | - | - | - |
| 1.0588 | 156600 | 0.519 | - | - | - |
| 1.0595 | 156700 | 0.5482 | - | - | - |
| 1.0602 | 156800 | 0.557 | - | - | - |
| 1.0608 | 156900 | 0.537 | - | - | - |
| 1.0615 | 157000 | 0.5545 | - | - | - |
| 1.0622 | 157100 | 0.5855 | - | - | - |
| 1.0629 | 157200 | 0.5448 | - | - | - |
| 1.0635 | 157300 | 0.5505 | - | - | - |
| 1.0642 | 157400 | 0.6443 | - | - | - |
| 1.0649 | 157500 | 0.5395 | - | - | - |
| 1.0656 | 157600 | 0.5876 | - | - | - |
| 1.0663 | 157700 | 0.5593 | - | - | - |
| 1.0669 | 157800 | 0.589 | - | - | - |
| 1.0676 | 157900 | 0.5527 | - | - | - |
| 1.0683 | 158000 | 0.5871 | - | - | - |
| 1.0690 | 158100 | 0.5496 | - | - | - |
| 1.0696 | 158200 | 0.5345 | - | - | - |
| 1.0703 | 158300 | 0.5721 | - | - | - |
| 1.0710 | 158400 | 0.533 | - | - | - |
| 1.0717 | 158500 | 0.5228 | - | - | - |
| 1.0723 | 158600 | 0.5522 | - | - | - |
| 1.0730 | 158700 | 0.536 | - | - | - |
| 1.0737 | 158800 | 0.5981 | - | - | - |
| 1.0744 | 158900 | 0.5388 | - | - | - |
| 1.0750 | 159000 | 0.537 | - | - | - |
| 1.0757 | 159100 | 0.5234 | - | - | - |
| 1.0764 | 159200 | 0.6104 | - | - | - |
| 1.0771 | 159300 | 0.4955 | - | - | - |
| 1.0777 | 159400 | 0.5346 | - | - | - |
| 1.0784 | 159500 | 0.5705 | - | - | - |
| 1.0791 | 159600 | 0.592 | - | - | - |
| 1.0798 | 159700 | 0.5422 | - | - | - |
| 1.0805 | 159800 | 0.5283 | - | - | - |
| 1.0811 | 159900 | 0.5883 | - | - | - |
| 1.0818 | 160000 | 0.5581 | 0.5527 | 0.7450 | - |
| 1.0825 | 160100 | 0.5364 | - | - | - |
| 1.0832 | 160200 | 0.486 | - | - | - |
| 1.0838 | 160300 | 0.5753 | - | - | - |
| 1.0845 | 160400 | 0.5096 | - | - | - |
| 1.0852 | 160500 | 0.5367 | - | - | - |
| 1.0859 | 160600 | 0.5158 | - | - | - |
| 1.0865 | 160700 | 0.5538 | - | - | - |
| 1.0872 | 160800 | 0.5477 | - | - | - |
| 1.0879 | 160900 | 0.5883 | - | - | - |
| 1.0886 | 161000 | 0.556 | - | - | - |
| 1.0892 | 161100 | 0.5753 | - | - | - |
| 1.0899 | 161200 | 0.5756 | - | - | - |
| 1.0906 | 161300 | 0.554 | - | - | - |
| 1.0913 | 161400 | 0.5293 | - | - | - |
| 1.0919 | 161500 | 0.5302 | - | - | - |
| 1.0926 | 161600 | 0.5525 | - | - | - |
| 1.0933 | 161700 | 0.5768 | - | - | - |
| 1.0940 | 161800 | 0.5067 | - | - | - |
| 1.0947 | 161900 | 0.5414 | - | - | - |
| 1.0953 | 162000 | 0.5191 | - | - | - |
| 1.0960 | 162100 | 0.5063 | - | - | - |
| 1.0967 | 162200 | 0.5149 | - | - | - |
| 1.0974 | 162300 | 0.5338 | - | - | - |
| 1.0980 | 162400 | 0.5768 | - | - | - |
| 1.0987 | 162500 | 0.5729 | - | - | - |
| 1.0994 | 162600 | 0.5536 | - | - | - |
| 1.1001 | 162700 | 0.5441 | - | - | - |
| 1.1007 | 162800 | 0.5603 | - | - | - |
| 1.1014 | 162900 | 0.5472 | - | - | - |
| 1.1021 | 163000 | 0.5338 | - | - | - |
| 1.1028 | 163100 | 0.4892 | - | - | - |
| 1.1034 | 163200 | 0.4997 | - | - | - |
| 1.1041 | 163300 | 0.5506 | - | - | - |
| 1.1048 | 163400 | 0.5021 | - | - | - |
| 1.1055 | 163500 | 0.5376 | - | - | - |
| 1.1061 | 163600 | 0.5228 | - | - | - |
| 1.1068 | 163700 | 0.5086 | - | - | - |
| 1.1075 | 163800 | 0.5312 | - | - | - |
| 1.1082 | 163900 | 0.5269 | - | - | - |
| 1.1088 | 164000 | 0.5312 | - | - | - |
| 1.1095 | 164100 | 0.5945 | - | - | - |
| 1.1102 | 164200 | 0.5226 | - | - | - |
| 1.1109 | 164300 | 0.542 | - | - | - |
| 1.1116 | 164400 | 0.5335 | - | - | - |
| 1.1122 | 164500 | 0.5272 | - | - | - |
| 1.1129 | 164600 | 0.5338 | - | - | - |
| 1.1136 | 164700 | 0.5255 | - | - | - |
| 1.1143 | 164800 | 0.5214 | - | - | - |
| 1.1149 | 164900 | 0.5167 | - | - | - |
| 1.1156 | 165000 | 0.5329 | 0.5586 | 0.7433 | - |
| 1.1163 | 165100 | 0.5169 | - | - | - |
| 1.1170 | 165200 | 0.539 | - | - | - |
| 1.1176 | 165300 | 0.6029 | - | - | - |
| 1.1183 | 165400 | 0.5752 | - | - | - |
| 1.1190 | 165500 | 0.5282 | - | - | - |
| 1.1197 | 165600 | 0.5613 | - | - | - |
| 1.1203 | 165700 | 0.5063 | - | - | - |
| 1.1210 | 165800 | 0.548 | - | - | - |
| 1.1217 | 165900 | 0.6063 | - | - | - |
| 1.1224 | 166000 | 0.5259 | - | - | - |
| 1.1230 | 166100 | 0.5241 | - | - | - |
| 1.1237 | 166200 | 0.5196 | - | - | - |
| 1.1244 | 166300 | 0.5279 | - | - | - |
| 1.1251 | 166400 | 0.5688 | - | - | - |
| 1.1258 | 166500 | 0.5726 | - | - | - |
| 1.1264 | 166600 | 0.5274 | - | - | - |
| 1.1271 | 166700 | 0.5148 | - | - | - |
| 1.1278 | 166800 | 0.5341 | - | - | - |
| 1.1285 | 166900 | 0.5716 | - | - | - |
| 1.1291 | 167000 | 0.5626 | - | - | - |
| 1.1298 | 167100 | 0.511 | - | - | - |
| 1.1305 | 167200 | 0.5732 | - | - | - |
| 1.1312 | 167300 | 0.5757 | - | - | - |
| 1.1318 | 167400 | 0.5414 | - | - | - |
| 1.1325 | 167500 | 0.5578 | - | - | - |
| 1.1332 | 167600 | 0.549 | - | - | - |
| 1.1339 | 167700 | 0.5614 | - | - | - |
| 1.1345 | 167800 | 0.56 | - | - | - |
| 1.1352 | 167900 | 0.5886 | - | - | - |
| 1.1359 | 168000 | 0.5377 | - | - | - |
| 1.1366 | 168100 | 0.5485 | - | - | - |
| 1.1372 | 168200 | 0.5551 | - | - | - |
| 1.1379 | 168300 | 0.5328 | - | - | - |
| 1.1386 | 168400 | 0.5026 | - | - | - |
| 1.1393 | 168500 | 0.5077 | - | - | - |
| 1.1400 | 168600 | 0.531 | - | - | - |
| 1.1406 | 168700 | 0.5434 | - | - | - |
| 1.1413 | 168800 | 0.5432 | - | - | - |
| 1.1420 | 168900 | 0.529 | - | - | - |
| 1.1427 | 169000 | 0.5093 | - | - | - |
| 1.1433 | 169100 | 0.5607 | - | - | - |
| 1.1440 | 169200 | 0.5733 | - | - | - |
| 1.1447 | 169300 | 0.5188 | - | - | - |
| 1.1454 | 169400 | 0.5043 | - | - | - |
| 1.1460 | 169500 | 0.5414 | - | - | - |
| 1.1467 | 169600 | 0.5555 | - | - | - |
| 1.1474 | 169700 | 0.4951 | - | - | - |
| 1.1481 | 169800 | 0.556 | - | - | - |
| 1.1487 | 169900 | 0.5992 | - | - | - |
| 1.1494 | 170000 | 0.4878 | 0.5431 | 0.7544 | - |
| 1.1501 | 170100 | 0.5739 | - | - | - |
| 1.1508 | 170200 | 0.5282 | - | - | - |
| 1.1514 | 170300 | 0.5183 | - | - | - |
| 1.1521 | 170400 | 0.523 | - | - | - |
| 1.1528 | 170500 | 0.5328 | - | - | - |
| 1.1535 | 170600 | 0.544 | - | - | - |
| 1.1542 | 170700 | 0.5604 | - | - | - |
| 1.1548 | 170800 | 0.5117 | - | - | - |
| 1.1555 | 170900 | 0.5076 | - | - | - |
| 1.1562 | 171000 | 0.5517 | - | - | - |
| 1.1569 | 171100 | 0.561 | - | - | - |
| 1.1575 | 171200 | 0.5558 | - | - | - |
| 1.1582 | 171300 | 0.5815 | - | - | - |
| 1.1589 | 171400 | 0.5324 | - | - | - |
| 1.1596 | 171500 | 0.5203 | - | - | - |
| 1.1602 | 171600 | 0.5398 | - | - | - |
| 1.1609 | 171700 | 0.5357 | - | - | - |
| 1.1616 | 171800 | 0.5715 | - | - | - |
| 1.1623 | 171900 | 0.5615 | - | - | - |
| 1.1629 | 172000 | 0.512 | - | - | - |
| 1.1636 | 172100 | 0.5073 | - | - | - |
| 1.1643 | 172200 | 0.5361 | - | - | - |
| 1.1650 | 172300 | 0.5462 | - | - | - |
| 1.1656 | 172400 | 0.5133 | - | - | - |
| 1.1663 | 172500 | 0.5151 | - | - | - |
| 1.1670 | 172600 | 0.5656 | - | - | - |
| 1.1677 | 172700 | 0.5256 | - | - | - |
| 1.1683 | 172800 | 0.5367 | - | - | - |
| 1.1690 | 172900 | 0.5146 | - | - | - |
| 1.1697 | 173000 | 0.5255 | - | - | - |
| 1.1704 | 173100 | 0.5159 | - | - | - |
| 1.1711 | 173200 | 0.5155 | - | - | - |
| 1.1717 | 173300 | 0.5079 | - | - | - |
| 1.1724 | 173400 | 0.5244 | - | - | - |
| 1.1731 | 173500 | 0.5401 | - | - | - |
| 1.1738 | 173600 | 0.5169 | - | - | - |
| 1.1744 | 173700 | 0.559 | - | - | - |
| 1.1751 | 173800 | 0.5211 | - | - | - |
| 1.1758 | 173900 | 0.5577 | - | - | - |
| 1.1765 | 174000 | 0.5511 | - | - | - |
| 1.1771 | 174100 | 0.4914 | - | - | - |
| 1.1778 | 174200 | 0.5643 | - | - | - |
| 1.1785 | 174300 | 0.5803 | - | - | - |
| 1.1792 | 174400 | 0.5278 | - | - | - |
| 1.1798 | 174500 | 0.5454 | - | - | - |
| 1.1805 | 174600 | 0.5288 | - | - | - |
| 1.1812 | 174700 | 0.504 | - | - | - |
| 1.1819 | 174800 | 0.5206 | - | - | - |
| 1.1825 | 174900 | 0.5291 | - | - | - |
| 1.1832 | 175000 | 0.5916 | 0.5452 | 0.7461 | - |
| 1.1839 | 175100 | 0.5214 | - | - | - |
| 1.1846 | 175200 | 0.4779 | - | - | - |
| 1.1853 | 175300 | 0.5714 | - | - | - |
| 1.1859 | 175400 | 0.5312 | - | - | - |
| 1.1866 | 175500 | 0.5032 | - | - | - |
| 1.1873 | 175600 | 0.5123 | - | - | - |
| 1.1880 | 175700 | 0.5104 | - | - | - |
| 1.1886 | 175800 | 0.4907 | - | - | - |
| 1.1893 | 175900 | 0.5474 | - | - | - |
| 1.1900 | 176000 | 0.5295 | - | - | - |
| 1.1907 | 176100 | 0.4825 | - | - | - |
| 1.1913 | 176200 | 0.5667 | - | - | - |
| 1.1920 | 176300 | 0.4914 | - | - | - |
| 1.1927 | 176400 | 0.5405 | - | - | - |
| 1.1934 | 176500 | 0.5322 | - | - | - |
| 1.1940 | 176600 | 0.4958 | - | - | - |
| 1.1947 | 176700 | 0.477 | - | - | - |
| 1.1954 | 176800 | 0.4622 | - | - | - |
| 1.1961 | 176900 | 0.5154 | - | - | - |
| 1.1967 | 177000 | 0.487 | - | - | - |
| 1.1974 | 177100 | 0.5569 | - | - | - |
| 1.1981 | 177200 | 0.535 | - | - | - |
| 1.1988 | 177300 | 0.5247 | - | - | - |
| 1.1995 | 177400 | 0.4922 | - | - | - |
| 1.2001 | 177500 | 0.5122 | - | - | - |
| 1.2008 | 177600 | 0.5189 | - | - | - |
| 1.2015 | 177700 | 0.4848 | - | - | - |
| 1.2022 | 177800 | 0.4975 | - | - | - |
| 1.2028 | 177900 | 0.5344 | - | - | - |
| 1.2035 | 178000 | 0.5301 | - | - | - |
| 1.2042 | 178100 | 0.5166 | - | - | - |
| 1.2049 | 178200 | 0.4858 | - | - | - |
| 1.2055 | 178300 | 0.5154 | - | - | - |
| 1.2062 | 178400 | 0.5423 | - | - | - |
| 1.2069 | 178500 | 0.481 | - | - | - |
| 1.2076 | 178600 | 0.5136 | - | - | - |
| 1.2082 | 178700 | 0.5079 | - | - | - |
| 1.2089 | 178800 | 0.5349 | - | - | - |
| 1.2096 | 178900 | 0.5221 | - | - | - |
| 1.2103 | 179000 | 0.4971 | - | - | - |
| 1.2109 | 179100 | 0.5115 | - | - | - |
| 1.2116 | 179200 | 0.5045 | - | - | - |
| 1.2123 | 179300 | 0.5347 | - | - | - |
| 1.2130 | 179400 | 0.5109 | - | - | - |
| 1.2136 | 179500 | 0.5631 | - | - | - |
| 1.2143 | 179600 | 0.5074 | - | - | - |
| 1.2150 | 179700 | 0.534 | - | - | - |
| 1.2157 | 179800 | 0.4971 | - | - | - |
| 1.2164 | 179900 | 0.4885 | - | - | - |
| 1.2170 | 180000 | 0.5197 | 0.5378 | 0.7541 | - |
| 1.2177 | 180100 | 0.5427 | - | - | - |
| 1.2184 | 180200 | 0.5506 | - | - | - |
| 1.2191 | 180300 | 0.5021 | - | - | - |
| 1.2197 | 180400 | 0.5473 | - | - | - |
| 1.2204 | 180500 | 0.5208 | - | - | - |
| 1.2211 | 180600 | 0.488 | - | - | - |
| 1.2218 | 180700 | 0.5462 | - | - | - |
| 1.2224 | 180800 | 0.5287 | - | - | - |
| 1.2231 | 180900 | 0.521 | - | - | - |
| 1.2238 | 181000 | 0.5336 | - | - | - |
| 1.2245 | 181100 | 0.5672 | - | - | - |
| 1.2251 | 181200 | 0.497 | - | - | - |
| 1.2258 | 181300 | 0.5271 | - | - | - |
| 1.2265 | 181400 | 0.5087 | - | - | - |
| 1.2272 | 181500 | 0.5035 | - | - | - |
| 1.2278 | 181600 | 0.4994 | - | - | - |
| 1.2285 | 181700 | 0.5211 | - | - | - |
| 1.2292 | 181800 | 0.5013 | - | - | - |
| 1.2299 | 181900 | 0.544 | - | - | - |
| 1.2306 | 182000 | 0.5325 | - | - | - |
| 1.2312 | 182100 | 0.5327 | - | - | - |
| 1.2319 | 182200 | 0.4875 | - | - | - |
| 1.2326 | 182300 | 0.5253 | - | - | - |
| 1.2333 | 182400 | 0.5389 | - | - | - |
| 1.2339 | 182500 | 0.5043 | - | - | - |
| 1.2346 | 182600 | 0.5292 | - | - | - |
| 1.2353 | 182700 | 0.5523 | - | - | - |
| 1.2360 | 182800 | 0.4971 | - | - | - |
| 1.2366 | 182900 | 0.5154 | - | - | - |
| 1.2373 | 183000 | 0.4666 | - | - | - |
| 1.2380 | 183100 | 0.4855 | - | - | - |
| 1.2387 | 183200 | 0.5284 | - | - | - |
| 1.2393 | 183300 | 0.5296 | - | - | - |
| 1.2400 | 183400 | 0.4876 | - | - | - |
| 1.2407 | 183500 | 0.5054 | - | - | - |
| 1.2414 | 183600 | 0.5402 | - | - | - |
| 1.2420 | 183700 | 0.5051 | - | - | - |
| 1.2427 | 183800 | 0.5287 | - | - | - |
| 1.2434 | 183900 | 0.5191 | - | - | - |
| 1.2441 | 184000 | 0.5042 | - | - | - |
| 1.2448 | 184100 | 0.5091 | - | - | - |
| 1.2454 | 184200 | 0.4801 | - | - | - |
| 1.2461 | 184300 | 0.4972 | - | - | - |
| 1.2468 | 184400 | 0.5532 | - | - | - |
| 1.2475 | 184500 | 0.5381 | - | - | - |
| 1.2481 | 184600 | 0.5417 | - | - | - |
| 1.2488 | 184700 | 0.4954 | - | - | - |
| 1.2495 | 184800 | 0.5088 | - | - | - |
| 1.2502 | 184900 | 0.4964 | - | - | - |
| 1.2508 | 185000 | 0.5161 | 0.5448 | 0.7559 | - |
| 1.2515 | 185100 | 0.5391 | - | - | - |
| 1.2522 | 185200 | 0.483 | - | - | - |
| 1.2529 | 185300 | 0.5064 | - | - | - |
| 1.2535 | 185400 | 0.5486 | - | - | - |
| 1.2542 | 185500 | 0.4959 | - | - | - |
| 1.2549 | 185600 | 0.5394 | - | - | - |
| 1.2556 | 185700 | 0.4586 | - | - | - |
| 1.2562 | 185800 | 0.4634 | - | - | - |
| 1.2569 | 185900 | 0.5228 | - | - | - |
| 1.2576 | 186000 | 0.5378 | - | - | - |
| 1.2583 | 186100 | 0.5836 | - | - | - |
| 1.2590 | 186200 | 0.5087 | - | - | - |
| 1.2596 | 186300 | 0.4947 | - | - | - |
| 1.2603 | 186400 | 0.4844 | - | - | - |
| 1.2610 | 186500 | 0.5182 | - | - | - |
| 1.2617 | 186600 | 0.4888 | - | - | - |
| 1.2623 | 186700 | 0.4508 | - | - | - |
| 1.2630 | 186800 | 0.5666 | - | - | - |
| 1.2637 | 186900 | 0.4936 | - | - | - |
| 1.2644 | 187000 | 0.5228 | - | - | - |
| 1.2650 | 187100 | 0.4783 | - | - | - |
| 1.2657 | 187200 | 0.4913 | - | - | - |
| 1.2664 | 187300 | 0.4682 | - | - | - |
| 1.2671 | 187400 | 0.509 | - | - | - |
| 1.2677 | 187500 | 0.4874 | - | - | - |
| 1.2684 | 187600 | 0.5208 | - | - | - |
| 1.2691 | 187700 | 0.5469 | - | - | - |
| 1.2698 | 187800 | 0.4704 | - | - | - |
| 1.2704 | 187900 | 0.5463 | - | - | - |
| 1.2711 | 188000 | 0.495 | - | - | - |
| 1.2718 | 188100 | 0.5149 | - | - | - |
| 1.2725 | 188200 | 0.5084 | - | - | - |
| 1.2731 | 188300 | 0.4425 | - | - | - |
| 1.2738 | 188400 | 0.5116 | - | - | - |
| 1.2745 | 188500 | 0.5056 | - | - | - |
| 1.2752 | 188600 | 0.4759 | - | - | - |
| 1.2759 | 188700 | 0.4927 | - | - | - |
| 1.2765 | 188800 | 0.5099 | - | - | - |
| 1.2772 | 188900 | 0.494 | - | - | - |
| 1.2779 | 189000 | 0.5103 | - | - | - |
| 1.2786 | 189100 | 0.5301 | - | - | - |
| 1.2792 | 189200 | 0.5205 | - | - | - |
| 1.2799 | 189300 | 0.4583 | - | - | - |
| 1.2806 | 189400 | 0.5008 | - | - | - |
| 1.2813 | 189500 | 0.4943 | - | - | - |
| 1.2819 | 189600 | 0.4938 | - | - | - |
| 1.2826 | 189700 | 0.5782 | - | - | - |
| 1.2833 | 189800 | 0.5149 | - | - | - |
| 1.2840 | 189900 | 0.5482 | - | - | - |
| 1.2846 | 190000 | 0.4619 | 0.5428 | 0.7525 | - |
| 1.2853 | 190100 | 0.4846 | - | - | - |
| 1.2860 | 190200 | 0.469 | - | - | - |
| 1.2867 | 190300 | 0.4997 | - | - | - |
| 1.2873 | 190400 | 0.4967 | - | - | - |
| 1.2880 | 190500 | 0.4953 | - | - | - |
| 1.2887 | 190600 | 0.5419 | - | - | - |
| 1.2894 | 190700 | 0.4935 | - | - | - |
| 1.2901 | 190800 | 0.5141 | - | - | - |
| 1.2907 | 190900 | 0.4803 | - | - | - |
| 1.2914 | 191000 | 0.458 | - | - | - |
| 1.2921 | 191100 | 0.4836 | - | - | - |
| 1.2928 | 191200 | 0.4859 | - | - | - |
| 1.2934 | 191300 | 0.485 | - | - | - |
| 1.2941 | 191400 | 0.4762 | - | - | - |
| 1.2948 | 191500 | 0.5488 | - | - | - |
| 1.2955 | 191600 | 0.4921 | - | - | - |
| 1.2961 | 191700 | 0.5127 | - | - | - |
| 1.2968 | 191800 | 0.4515 | - | - | - |
| 1.2975 | 191900 | 0.5212 | - | - | - |
| 1.2982 | 192000 | 0.4545 | - | - | - |
| 1.2988 | 192100 | 0.4977 | - | - | - |
| 1.2995 | 192200 | 0.5078 | - | - | - |
| 1.3002 | 192300 | 0.4938 | - | - | - |
| 1.3009 | 192400 | 0.5292 | - | - | - |
| 1.3015 | 192500 | 0.503 | - | - | - |
| 1.3022 | 192600 | 0.4928 | - | - | - |
| 1.3029 | 192700 | 0.5225 | - | - | - |
| 1.3036 | 192800 | 0.4352 | - | - | - |
| 1.3043 | 192900 | 0.4906 | - | - | - |
| 1.3049 | 193000 | 0.4871 | - | - | - |
| 1.3056 | 193100 | 0.5293 | - | - | - |
| 1.3063 | 193200 | 0.5319 | - | - | - |
| 1.3070 | 193300 | 0.5273 | - | - | - |
| 1.3076 | 193400 | 0.4965 | - | - | - |
| 1.3083 | 193500 | 0.485 | - | - | - |
| 1.3090 | 193600 | 0.5279 | - | - | - |
| 1.3097 | 193700 | 0.4996 | - | - | - |
| 1.3103 | 193800 | 0.4763 | - | - | - |
| 1.3110 | 193900 | 0.5496 | - | - | - |
| 1.3117 | 194000 | 0.5104 | - | - | - |
| 1.3124 | 194100 | 0.4664 | - | - | - |
| 1.3130 | 194200 | 0.4913 | - | - | - |
| 1.3137 | 194300 | 0.4837 | - | - | - |
| 1.3144 | 194400 | 0.5023 | - | - | - |
| 1.3151 | 194500 | 0.4961 | - | - | - |
| 1.3157 | 194600 | 0.5201 | - | - | - |
| 1.3164 | 194700 | 0.5071 | - | - | - |
| 1.3171 | 194800 | 0.5162 | - | - | - |
| 1.3178 | 194900 | 0.4915 | - | - | - |
| 1.3184 | 195000 | 0.4853 | 0.5496 | 0.7555 | - |
| 1.3191 | 195100 | 0.5355 | - | - | - |
| 1.3198 | 195200 | 0.4819 | - | - | - |
| 1.3205 | 195300 | 0.5133 | - | - | - |
| 1.3212 | 195400 | 0.5023 | - | - | - |
| 1.3218 | 195500 | 0.4849 | - | - | - |
| 1.3225 | 195600 | 0.5129 | - | - | - |
| 1.3232 | 195700 | 0.5341 | - | - | - |
| 1.3239 | 195800 | 0.4105 | - | - | - |
| 1.3245 | 195900 | 0.4616 | - | - | - |
| 1.3252 | 196000 | 0.4865 | - | - | - |
| 1.3259 | 196100 | 0.5203 | - | - | - |
| 1.3266 | 196200 | 0.5589 | - | - | - |
| 1.3272 | 196300 | 0.5056 | - | - | - |
| 1.3279 | 196400 | 0.441 | - | - | - |
| 1.3286 | 196500 | 0.5481 | - | - | - |
| 1.3293 | 196600 | 0.4934 | - | - | - |
| 1.3299 | 196700 | 0.4713 | - | - | - |
| 1.3306 | 196800 | 0.4586 | - | - | - |
| 1.3313 | 196900 | 0.5314 | - | - | - |
| 1.3320 | 197000 | 0.4745 | - | - | - |
| 1.3326 | 197100 | 0.4676 | - | - | - |
| 1.3333 | 197200 | 0.449 | - | - | - |
| 1.3340 | 197300 | 0.5112 | - | - | - |
| 1.3347 | 197400 | 0.4876 | - | - | - |
| 1.3354 | 197500 | 0.5133 | - | - | - |
| 1.3360 | 197600 | 0.4924 | - | - | - |
| 1.3367 | 197700 | 0.4644 | - | - | - |
| 1.3374 | 197800 | 0.4455 | - | - | - |
| 1.3381 | 197900 | 0.516 | - | - | - |
| 1.3387 | 198000 | 0.4805 | - | - | - |
| 1.3394 | 198100 | 0.5274 | - | - | - |
| 1.3401 | 198200 | 0.4636 | - | - | - |
| 1.3408 | 198300 | 0.4358 | - | - | - |
| 1.3414 | 198400 | 0.4963 | - | - | - |
| 1.3421 | 198500 | 0.4758 | - | - | - |
| 1.3428 | 198600 | 0.4961 | - | - | - |
| 1.3435 | 198700 | 0.5095 | - | - | - |
| 1.3441 | 198800 | 0.4829 | - | - | - |
| 1.3448 | 198900 | 0.5339 | - | - | - |
| 1.3455 | 199000 | 0.4835 | - | - | - |
| 1.3462 | 199100 | 0.5258 | - | - | - |
| 1.3468 | 199200 | 0.4726 | - | - | - |
| 1.3475 | 199300 | 0.4804 | - | - | - |
| 1.3482 | 199400 | 0.4636 | - | - | - |
| 1.3489 | 199500 | 0.4817 | - | - | - |
| 1.3496 | 199600 | 0.482 | - | - | - |
| 1.3502 | 199700 | 0.504 | - | - | - |
| 1.3509 | 199800 | 0.5124 | - | - | - |
| 1.3516 | 199900 | 0.443 | - | - | - |
| 1.3523 | 200000 | 0.5348 | 0.5423 | 0.7563 | - |
| 1.3529 | 200100 | 0.5052 | - | - | - |
| 1.3536 | 200200 | 0.4553 | - | - | - |
| 1.3543 | 200300 | 0.4715 | - | - | - |
| 1.3550 | 200400 | 0.4629 | - | - | - |
| 1.3556 | 200500 | 0.4649 | - | - | - |
| 1.3563 | 200600 | 0.4974 | - | - | - |
| 1.3570 | 200700 | 0.5105 | - | - | - |
| 1.3577 | 200800 | 0.4986 | - | - | - |
| 1.3583 | 200900 | 0.4647 | - | - | - |
| 1.3590 | 201000 | 0.4805 | - | - | - |
| 1.3597 | 201100 | 0.5027 | - | - | - |
| 1.3604 | 201200 | 0.5004 | - | - | - |
| 1.3610 | 201300 | 0.4637 | - | - | - |
| 1.3617 | 201400 | 0.4693 | - | - | - |
| 1.3624 | 201500 | 0.4459 | - | - | - |
| 1.3631 | 201600 | 0.4746 | - | - | - |
| 1.3638 | 201700 | 0.4807 | - | - | - |
| 1.3644 | 201800 | 0.4755 | - | - | - |
| 1.3651 | 201900 | 0.4861 | - | - | - |
| 1.3658 | 202000 | 0.4499 | - | - | - |
| 1.3665 | 202100 | 0.4852 | - | - | - |
| 1.3671 | 202200 | 0.4745 | - | - | - |
| 1.3678 | 202300 | 0.489 | - | - | - |
| 1.3685 | 202400 | 0.4706 | - | - | - |
| 1.3692 | 202500 | 0.4798 | - | - | - |
| 1.3698 | 202600 | 0.4882 | - | - | - |
| 1.3705 | 202700 | 0.4737 | - | - | - |
| 1.3712 | 202800 | 0.4624 | - | - | - |
| 1.3719 | 202900 | 0.4784 | - | - | - |
| 1.3725 | 203000 | 0.4952 | - | - | - |
| 1.3732 | 203100 | 0.5017 | - | - | - |
| 1.3739 | 203200 | 0.5015 | - | - | - |
| 1.3746 | 203300 | 0.4416 | - | - | - |
| 1.3752 | 203400 | 0.5097 | - | - | - |
| 1.3759 | 203500 | 0.4815 | - | - | - |
| 1.3766 | 203600 | 0.4924 | - | - | - |
| 1.3773 | 203700 | 0.4628 | - | - | - |
| 1.3779 | 203800 | 0.4751 | - | - | - |
| 1.3786 | 203900 | 0.4679 | - | - | - |
| 1.3793 | 204000 | 0.5467 | - | - | - |
| 1.3800 | 204100 | 0.4983 | - | - | - |
| 1.3807 | 204200 | 0.5047 | - | - | - |
| 1.3813 | 204300 | 0.4685 | - | - | - |
| 1.3820 | 204400 | 0.5224 | - | - | - |
| 1.3827 | 204500 | 0.465 | - | - | - |
| 1.3834 | 204600 | 0.4876 | - | - | - |
| 1.3840 | 204700 | 0.504 | - | - | - |
| 1.3847 | 204800 | 0.4624 | - | - | - |
| 1.3854 | 204900 | 0.5205 | - | - | - |
| 1.3861 | 205000 | 0.4526 | 0.5400 | 0.7595 | - |
| 1.3867 | 205100 | 0.5068 | - | - | - |
| 1.3874 | 205200 | 0.4379 | - | - | - |
| 1.3881 | 205300 | 0.4858 | - | - | - |
| 1.3888 | 205400 | 0.4933 | - | - | - |
| 1.3894 | 205500 | 0.4885 | - | - | - |
| 1.3901 | 205600 | 0.5256 | - | - | - |
| 1.3908 | 205700 | 0.4909 | - | - | - |
| 1.3915 | 205800 | 0.4595 | - | - | - |
| 1.3921 | 205900 | 0.4579 | - | - | - |
| 1.3928 | 206000 | 0.4509 | - | - | - |
| 1.3935 | 206100 | 0.5018 | - | - | - |
| 1.3942 | 206200 | 0.4901 | - | - | - |
| 1.3949 | 206300 | 0.4789 | - | - | - |
| 1.3955 | 206400 | 0.4711 | - | - | - |
| 1.3962 | 206500 | 0.4726 | - | - | - |
| 1.3969 | 206600 | 0.5106 | - | - | - |
| 1.3976 | 206700 | 0.4658 | - | - | - |
| 1.3982 | 206800 | 0.4608 | - | - | - |
| 1.3989 | 206900 | 0.462 | - | - | - |
| 1.3996 | 207000 | 0.5146 | - | - | - |
| 1.4003 | 207100 | 0.5001 | - | - | - |
| 1.4009 | 207200 | 0.5157 | - | - | - |
| 1.4016 | 207300 | 0.4832 | - | - | - |
| 1.4023 | 207400 | 0.5159 | - | - | - |
| 1.4030 | 207500 | 0.5186 | - | - | - |
| 1.4036 | 207600 | 0.5075 | - | - | - |
| 1.4043 | 207700 | 0.4713 | - | - | - |
| 1.4050 | 207800 | 0.4252 | - | - | - |
| 1.4057 | 207900 | 0.4327 | - | - | - |
| 1.4063 | 208000 | 0.4651 | - | - | - |
| 1.4070 | 208100 | 0.5014 | - | - | - |
| 1.4077 | 208200 | 0.4894 | - | - | - |
| 1.4084 | 208300 | 0.5509 | - | - | - |
| 1.4091 | 208400 | 0.4821 | - | - | - |
| 1.4097 | 208500 | 0.5021 | - | - | - |
| 1.4104 | 208600 | 0.5262 | - | - | - |
| 1.4111 | 208700 | 0.4583 | - | - | - |
| 1.4118 | 208800 | 0.4524 | - | - | - |
| 1.4124 | 208900 | 0.4506 | - | - | - |
| 1.4131 | 209000 | 0.5256 | - | - | - |
| 1.4138 | 209100 | 0.5151 | - | - | - |
| 1.4145 | 209200 | 0.5081 | - | - | - |
| 1.4151 | 209300 | 0.4742 | - | - | - |
| 1.4158 | 209400 | 0.4816 | - | - | - |
| 1.4165 | 209500 | 0.4853 | - | - | - |
| 1.4172 | 209600 | 0.4775 | - | - | - |
| 1.4178 | 209700 | 0.4868 | - | - | - |
| 1.4185 | 209800 | 0.4626 | - | - | - |
| 1.4192 | 209900 | 0.5078 | - | - | - |
| 1.4199 | 210000 | 0.4994 | 0.5371 | 0.7597 | - |
| 1.4205 | 210100 | 0.471 | - | - | - |
| 1.4212 | 210200 | 0.5009 | - | - | - |
| 1.4219 | 210300 | 0.5125 | - | - | - |
| 1.4226 | 210400 | 0.492 | - | - | - |
| 1.4232 | 210500 | 0.5281 | - | - | - |
| 1.4239 | 210600 | 0.5255 | - | - | - |
| 1.4246 | 210700 | 0.4393 | - | - | - |
| 1.4253 | 210800 | 0.5011 | - | - | - |
| 1.4260 | 210900 | 0.5004 | - | - | - |
| 1.4266 | 211000 | 0.4843 | - | - | - |
| 1.4273 | 211100 | 0.4866 | - | - | - |
| 1.4280 | 211200 | 0.4586 | - | - | - |
| 1.4287 | 211300 | 0.5276 | - | - | - |
| 1.4293 | 211400 | 0.4544 | - | - | - |
| 1.4300 | 211500 | 0.4936 | - | - | - |
| 1.4307 | 211600 | 0.4498 | - | - | - |
| 1.4314 | 211700 | 0.4759 | - | - | - |
| 1.4320 | 211800 | 0.4735 | - | - | - |
| 1.4327 | 211900 | 0.4537 | - | - | - |
| 1.4334 | 212000 | 0.5012 | - | - | - |
| 1.4341 | 212100 | 0.5325 | - | - | - |
| 1.4347 | 212200 | 0.4797 | - | - | - |
| 1.4354 | 212300 | 0.4597 | - | - | - |
| 1.4361 | 212400 | 0.4514 | - | - | - |
| 1.4368 | 212500 | 0.451 | - | - | - |
| 1.4374 | 212600 | 0.5148 | - | - | - |
| 1.4381 | 212700 | 0.484 | - | - | - |
| 1.4388 | 212800 | 0.4761 | - | - | - |
| 1.4395 | 212900 | 0.4608 | - | - | - |
| 1.4402 | 213000 | 0.5341 | - | - | - |
| 1.4408 | 213100 | 0.4899 | - | - | - |
| 1.4415 | 213200 | 0.4814 | - | - | - |
| 1.4422 | 213300 | 0.5104 | - | - | - |
| 1.4429 | 213400 | 0.502 | - | - | - |
| 1.4435 | 213500 | 0.4639 | - | - | - |
| 1.4442 | 213600 | 0.4742 | - | - | - |
| 1.4449 | 213700 | 0.4737 | - | - | - |
| 1.4456 | 213800 | 0.4743 | - | - | - |
| 1.4462 | 213900 | 0.4613 | - | - | - |
| 1.4469 | 214000 | 0.5021 | - | - | - |
| 1.4476 | 214100 | 0.5386 | - | - | - |
| 1.4483 | 214200 | 0.4992 | - | - | - |
| 1.4489 | 214300 | 0.4302 | - | - | - |
| 1.4496 | 214400 | 0.4601 | - | - | - |
| 1.4503 | 214500 | 0.4061 | - | - | - |
| 1.4510 | 214600 | 0.4878 | - | - | - |
| 1.4516 | 214700 | 0.4531 | - | - | - |
| 1.4523 | 214800 | 0.4754 | - | - | - |
| 1.4530 | 214900 | 0.4831 | - | - | - |
| 1.4537 | 215000 | 0.4628 | 0.5442 | 0.7620 | - |
| 1.4544 | 215100 | 0.4794 | - | - | - |
| 1.4550 | 215200 | 0.4889 | - | - | - |
| 1.4557 | 215300 | 0.499 | - | - | - |
| 1.4564 | 215400 | 0.4593 | - | - | - |
| 1.4571 | 215500 | 0.5281 | - | - | - |
| 1.4577 | 215600 | 0.4935 | - | - | - |
| 1.4584 | 215700 | 0.5279 | - | - | - |
| 1.4591 | 215800 | 0.4744 | - | - | - |
| 1.4598 | 215900 | 0.4979 | - | - | - |
| 1.4604 | 216000 | 0.4307 | - | - | - |
| 1.4611 | 216100 | 0.4676 | - | - | - |
| 1.4618 | 216200 | 0.4652 | - | - | - |
| 1.4625 | 216300 | 0.484 | - | - | - |
| 1.4631 | 216400 | 0.465 | - | - | - |
| 1.4638 | 216500 | 0.4558 | - | - | - |
| 1.4645 | 216600 | 0.4717 | - | - | - |
| 1.4652 | 216700 | 0.487 | - | - | - |
| 1.4658 | 216800 | 0.4458 | - | - | - |
| 1.4665 | 216900 | 0.5153 | - | - | - |
| 1.4672 | 217000 | 0.5046 | - | - | - |
| 1.4679 | 217100 | 0.4624 | - | - | - |
| 1.4685 | 217200 | 0.5073 | - | - | - |
| 1.4692 | 217300 | 0.4872 | - | - | - |
| 1.4699 | 217400 | 0.4799 | - | - | - |
| 1.4706 | 217500 | 0.518 | - | - | - |
| 1.4713 | 217600 | 0.4481 | - | - | - |
| 1.4719 | 217700 | 0.4859 | - | - | - |
| 1.4726 | 217800 | 0.4285 | - | - | - |
| 1.4733 | 217900 | 0.4793 | - | - | - |
| 1.4740 | 218000 | 0.4855 | - | - | - |
| 1.4746 | 218100 | 0.4878 | - | - | - |
| 1.4753 | 218200 | 0.4743 | - | - | - |
| 1.4760 | 218300 | 0.453 | - | - | - |
| 1.4767 | 218400 | 0.4627 | - | - | - |
| 1.4773 | 218500 | 0.4689 | - | - | - |
| 1.4780 | 218600 | 0.4655 | - | - | - |
| 1.4787 | 218700 | 0.4672 | - | - | - |
| 1.4794 | 218800 | 0.4433 | - | - | - |
| 1.4800 | 218900 | 0.5168 | - | - | - |
| 1.4807 | 219000 | 0.4854 | - | - | - |
| 1.4814 | 219100 | 0.4613 | - | - | - |
| 1.4821 | 219200 | 0.4697 | - | - | - |
| 1.4827 | 219300 | 0.4898 | - | - | - |
| 1.4834 | 219400 | 0.4462 | - | - | - |
| 1.4841 | 219500 | 0.5175 | - | - | - |
| 1.4848 | 219600 | 0.4957 | - | - | - |
| 1.4855 | 219700 | 0.5098 | - | - | - |
| 1.4861 | 219800 | 0.497 | - | - | - |
| 1.4868 | 219900 | 0.5067 | - | - | - |
| 1.4875 | 220000 | 0.4488 | 0.5371 | 0.7595 | - |
| 1.4882 | 220100 | 0.4687 | - | - | - |
| 1.4888 | 220200 | 0.4715 | - | - | - |
| 1.4895 | 220300 | 0.4244 | - | - | - |
| 1.4902 | 220400 | 0.4696 | - | - | - |
| 1.4909 | 220500 | 0.4517 | - | - | - |
| 1.4915 | 220600 | 0.4317 | - | - | - |
| 1.4922 | 220700 | 0.462 | - | - | - |
| 1.4929 | 220800 | 0.436 | - | - | - |
| 1.4936 | 220900 | 0.4933 | - | - | - |
| 1.4942 | 221000 | 0.4744 | - | - | - |
| 1.4949 | 221100 | 0.4591 | - | - | - |
| 1.4956 | 221200 | 0.4717 | - | - | - |
| 1.4963 | 221300 | 0.4851 | - | - | - |
| 1.4969 | 221400 | 0.482 | - | - | - |
| 1.4976 | 221500 | 0.4362 | - | - | - |
| 1.4983 | 221600 | 0.4574 | - | - | - |
| 1.4990 | 221700 | 0.4783 | - | - | - |
| 1.4997 | 221800 | 0.5475 | - | - | - |
| 1.5003 | 221900 | 0.4602 | - | - | - |
| 1.5010 | 222000 | 0.4271 | - | - | - |
| 1.5017 | 222100 | 0.5019 | - | - | - |
| 1.5024 | 222200 | 0.4193 | - | - | - |
| 1.5030 | 222300 | 0.4977 | - | - | - |
| 1.5037 | 222400 | 0.5011 | - | - | - |
| 1.5044 | 222500 | 0.4828 | - | - | - |
| 1.5051 | 222600 | 0.4222 | - | - | - |
| 1.5057 | 222700 | 0.457 | - | - | - |
| 1.5064 | 222800 | 0.4745 | - | - | - |
| 1.5071 | 222900 | 0.5158 | - | - | - |
| 1.5078 | 223000 | 0.478 | - | - | - |
| 1.5084 | 223100 | 0.4607 | - | - | - |
| 1.5091 | 223200 | 0.4588 | - | - | - |
| 1.5098 | 223300 | 0.5097 | - | - | - |
| 1.5105 | 223400 | 0.4626 | - | - | - |
| 1.5111 | 223500 | 0.4521 | - | - | - |
| 1.5118 | 223600 | 0.493 | - | - | - |
| 1.5125 | 223700 | 0.481 | - | - | - |
| 1.5132 | 223800 | 0.4463 | - | - | - |
| 1.5139 | 223900 | 0.4982 | - | - | - |
| 1.5145 | 224000 | 0.4744 | - | - | - |
| 1.5152 | 224100 | 0.454 | - | - | - |
| 1.5159 | 224200 | 0.5134 | - | - | - |
| 1.5166 | 224300 | 0.4807 | - | - | - |
| 1.5172 | 224400 | 0.4653 | - | - | - |
| 1.5179 | 224500 | 0.4877 | - | - | - |
| 1.5186 | 224600 | 0.4791 | - | - | - |
| 1.5193 | 224700 | 0.4691 | - | - | - |
| 1.5199 | 224800 | 0.4734 | - | - | - |
| 1.5206 | 224900 | 0.4327 | - | - | - |
| 1.5213 | 225000 | 0.4711 | 0.5446 | 0.7608 | - |
| 1.5220 | 225100 | 0.4883 | - | - | - |
| 1.5226 | 225200 | 0.5147 | - | - | - |
| 1.5233 | 225300 | 0.464 | - | - | - |
| 1.5240 | 225400 | 0.5124 | - | - | - |
| 1.5247 | 225500 | 0.4876 | - | - | - |
| 1.5253 | 225600 | 0.4611 | - | - | - |
| 1.5260 | 225700 | 0.5207 | - | - | - |
| 1.5267 | 225800 | 0.4821 | - | - | - |
| 1.5274 | 225900 | 0.5009 | - | - | - |
| 1.5280 | 226000 | 0.5359 | - | - | - |
| 1.5287 | 226100 | 0.4622 | - | - | - |
| 1.5294 | 226200 | 0.4747 | - | - | - |
| 1.5301 | 226300 | 0.4974 | - | - | - |
| 1.5308 | 226400 | 0.4563 | - | - | - |
| 1.5314 | 226500 | 0.455 | - | - | - |
| 1.5321 | 226600 | 0.4635 | - | - | - |
| 1.5328 | 226700 | 0.4782 | - | - | - |
| 1.5335 | 226800 | 0.4855 | - | - | - |
| 1.5341 | 226900 | 0.4821 | - | - | - |
| 1.5348 | 227000 | 0.4684 | - | - | - |
| 1.5355 | 227100 | 0.468 | - | - | - |
| 1.5362 | 227200 | 0.4191 | - | - | - |
| 1.5368 | 227300 | 0.4692 | - | - | - |
| 1.5375 | 227400 | 0.4572 | - | - | - |
| 1.5382 | 227500 | 0.4261 | - | - | - |
| 1.5389 | 227600 | 0.4533 | - | - | - |
| 1.5395 | 227700 | 0.4412 | - | - | - |
| 1.5402 | 227800 | 0.4864 | - | - | - |
| 1.5409 | 227900 | 0.4668 | - | - | - |
| 1.5416 | 228000 | 0.4577 | - | - | - |
| 1.5422 | 228100 | 0.4566 | - | - | - |
| 1.5429 | 228200 | 0.5041 | - | - | - |
| 1.5436 | 228300 | 0.484 | - | - | - |
| 1.5443 | 228400 | 0.4984 | - | - | - |
| 1.5450 | 228500 | 0.4611 | - | - | - |
| 1.5456 | 228600 | 0.5161 | - | - | - |
| 1.5463 | 228700 | 0.4372 | - | - | - |
| 1.5470 | 228800 | 0.5088 | - | - | - |
| 1.5477 | 228900 | 0.4875 | - | - | - |
| 1.5483 | 229000 | 0.4717 | - | - | - |
| 1.5490 | 229100 | 0.4599 | - | - | - |
| 1.5497 | 229200 | 0.4386 | - | - | - |
| 1.5504 | 229300 | 0.4823 | - | - | - |
| 1.5510 | 229400 | 0.5137 | - | - | - |
| 1.5517 | 229500 | 0.4678 | - | - | - |
| 1.5524 | 229600 | 0.4561 | - | - | - |
| 1.5531 | 229700 | 0.4982 | - | - | - |
| 1.5537 | 229800 | 0.4558 | - | - | - |
| 1.5544 | 229900 | 0.4697 | - | - | - |
| 1.5551 | 230000 | 0.4741 | 0.5472 | 0.7568 | - |
| 1.5558 | 230100 | 0.4427 | - | - | - |
| 1.5564 | 230200 | 0.4494 | - | - | - |
| 1.5571 | 230300 | 0.489 | - | - | - |
| 1.5578 | 230400 | 0.4755 | - | - | - |
| 1.5585 | 230500 | 0.4565 | - | - | - |
| 1.5592 | 230600 | 0.4558 | - | - | - |
| 1.5598 | 230700 | 0.4554 | - | - | - |
| 1.5605 | 230800 | 0.5236 | - | - | - |
| 1.5612 | 230900 | 0.4614 | - | - | - |
| 1.5619 | 231000 | 0.484 | - | - | - |
| 1.5625 | 231100 | 0.4665 | - | - | - |
| 1.5632 | 231200 | 0.46 | - | - | - |
| 1.5639 | 231300 | 0.4767 | - | - | - |
| 1.5646 | 231400 | 0.4649 | - | - | - |
| 1.5652 | 231500 | 0.4697 | - | - | - |
| 1.5659 | 231600 | 0.4748 | - | - | - |
| 1.5666 | 231700 | 0.4465 | - | - | - |
| 1.5673 | 231800 | 0.4756 | - | - | - |
| 1.5679 | 231900 | 0.4834 | - | - | - |
| 1.5686 | 232000 | 0.4511 | - | - | - |
| 1.5693 | 232100 | 0.4922 | - | - | - |
| 1.5700 | 232200 | 0.4461 | - | - | - |
| 1.5706 | 232300 | 0.4671 | - | - | - |
| 1.5713 | 232400 | 0.4859 | - | - | - |
| 1.5720 | 232500 | 0.4887 | - | - | - |
| 1.5727 | 232600 | 0.5057 | - | - | - |
| 1.5733 | 232700 | 0.4681 | - | - | - |
| 1.5740 | 232800 | 0.4713 | - | - | - |
| 1.5747 | 232900 | 0.5302 | - | - | - |
| 1.5754 | 233000 | 0.4689 | - | - | - |
| 1.5761 | 233100 | 0.4461 | - | - | - |
| 1.5767 | 233200 | 0.4639 | - | - | - |
| 1.5774 | 233300 | 0.4345 | - | - | - |
| 1.5781 | 233400 | 0.4367 | - | - | - |
| 1.5788 | 233500 | 0.4802 | - | - | - |
| 1.5794 | 233600 | 0.4759 | - | - | - |
| 1.5801 | 233700 | 0.4986 | - | - | - |
| 1.5808 | 233800 | 0.4337 | - | - | - |
| 1.5815 | 233900 | 0.4664 | - | - | - |
| 1.5821 | 234000 | 0.5146 | - | - | - |
| 1.5828 | 234100 | 0.4519 | - | - | - |
| 1.5835 | 234200 | 0.4903 | - | - | - |
| 1.5842 | 234300 | 0.5063 | - | - | - |
| 1.5848 | 234400 | 0.4625 | - | - | - |
| 1.5855 | 234500 | 0.4804 | - | - | - |
| 1.5862 | 234600 | 0.43 | - | - | - |
| 1.5869 | 234700 | 0.4816 | - | - | - |
| 1.5875 | 234800 | 0.4564 | - | - | - |
| 1.5882 | 234900 | 0.4492 | - | - | - |
| 1.5889 | 235000 | 0.4807 | 0.5384 | 0.7569 | - |
| 1.5896 | 235100 | 0.4699 | - | - | - |
| 1.5903 | 235200 | 0.4669 | - | - | - |
| 1.5909 | 235300 | 0.4638 | - | - | - |
| 1.5916 | 235400 | 0.4475 | - | - | - |
| 1.5923 | 235500 | 0.4492 | - | - | - |
| 1.5930 | 235600 | 0.4694 | - | - | - |
| 1.5936 | 235700 | 0.5007 | - | - | - |
| 1.5943 | 235800 | 0.4228 | - | - | - |
| 1.5950 | 235900 | 0.5 | - | - | - |
| 1.5957 | 236000 | 0.4549 | - | - | - |
| 1.5963 | 236100 | 0.4356 | - | - | - |
| 1.5970 | 236200 | 0.4668 | - | - | - |
| 1.5977 | 236300 | 0.4428 | - | - | - |
| 1.5984 | 236400 | 0.5008 | - | - | - |
| 1.5990 | 236500 | 0.4634 | - | - | - |
| 1.5997 | 236600 | 0.4653 | - | - | - |
| 1.6004 | 236700 | 0.4364 | - | - | - |
| 1.6011 | 236800 | 0.4774 | - | - | - |
| 1.6017 | 236900 | 0.4435 | - | - | - |
| 1.6024 | 237000 | 0.4613 | - | - | - |
| 1.6031 | 237100 | 0.4872 | - | - | - |
| 1.6038 | 237200 | 0.4796 | - | - | - |
| 1.6045 | 237300 | 0.4521 | - | - | - |
| 1.6051 | 237400 | 0.4693 | - | - | - |
| 1.6058 | 237500 | 0.4384 | - | - | - |
| 1.6065 | 237600 | 0.5008 | - | - | - |
| 1.6072 | 237700 | 0.4385 | - | - | - |
| 1.6078 | 237800 | 0.4605 | - | - | - |
| 1.6085 | 237900 | 0.456 | - | - | - |
| 1.6092 | 238000 | 0.4636 | - | - | - |
| 1.6099 | 238100 | 0.4212 | - | - | - |
| 1.6105 | 238200 | 0.4826 | - | - | - |
| 1.6112 | 238300 | 0.4699 | - | - | - |
| 1.6119 | 238400 | 0.4605 | - | - | - |
| 1.6126 | 238500 | 0.4578 | - | - | - |
| 1.6132 | 238600 | 0.4583 | - | - | - |
| 1.6139 | 238700 | 0.4355 | - | - | - |
| 1.6146 | 238800 | 0.4949 | - | - | - |
| 1.6153 | 238900 | 0.4982 | - | - | - |
| 1.6159 | 239000 | 0.435 | - | - | - |
| 1.6166 | 239100 | 0.5358 | - | - | - |
| 1.6173 | 239200 | 0.4552 | - | - | - |
| 1.6180 | 239300 | 0.457 | - | - | - |
| 1.6187 | 239400 | 0.447 | - | - | - |
| 1.6193 | 239500 | 0.4706 | - | - | - |
| 1.6200 | 239600 | 0.4624 | - | - | - |
| 1.6207 | 239700 | 0.4517 | - | - | - |
| 1.6214 | 239800 | 0.4426 | - | - | - |
| 1.6220 | 239900 | 0.4019 | - | - | - |
| 1.6227 | 240000 | 0.4413 | 0.5373 | 0.7591 | - |
| 1.6234 | 240100 | 0.4081 | - | - | - |
| 1.6241 | 240200 | 0.4797 | - | - | - |
| 1.6247 | 240300 | 0.4245 | - | - | - |
| 1.6254 | 240400 | 0.4675 | - | - | - |
| 1.6261 | 240500 | 0.4965 | - | - | - |
| 1.6268 | 240600 | 0.4275 | - | - | - |
| 1.6274 | 240700 | 0.4458 | - | - | - |
| 1.6281 | 240800 | 0.4376 | - | - | - |
| 1.6288 | 240900 | 0.4543 | - | - | - |
| 1.6295 | 241000 | 0.4436 | - | - | - |
| 1.6301 | 241100 | 0.4572 | - | - | - |
| 1.6308 | 241200 | 0.475 | - | - | - |
| 1.6315 | 241300 | 0.446 | - | - | - |
| 1.6322 | 241400 | 0.4339 | - | - | - |
| 1.6328 | 241500 | 0.4201 | - | - | - |
| 1.6335 | 241600 | 0.4543 | - | - | - |
| 1.6342 | 241700 | 0.4225 | - | - | - |
| 1.6349 | 241800 | 0.4275 | - | - | - |
| 1.6356 | 241900 | 0.4651 | - | - | - |
| 1.6362 | 242000 | 0.498 | - | - | - |
| 1.6369 | 242100 | 0.4633 | - | - | - |
| 1.6376 | 242200 | 0.455 | - | - | - |
| 1.6383 | 242300 | 0.4585 | - | - | - |
| 1.6389 | 242400 | 0.4545 | - | - | - |
| 1.6396 | 242500 | 0.4258 | - | - | - |
| 1.6403 | 242600 | 0.5008 | - | - | - |
| 1.6410 | 242700 | 0.4757 | - | - | - |
| 1.6416 | 242800 | 0.4246 | - | - | - |
| 1.6423 | 242900 | 0.4288 | - | - | - |
| 1.6430 | 243000 | 0.4058 | - | - | - |
| 1.6437 | 243100 | 0.4794 | - | - | - |
| 1.6443 | 243200 | 0.4699 | - | - | - |
| 1.6450 | 243300 | 0.3919 | - | - | - |
| 1.6457 | 243400 | 0.4771 | - | - | - |
| 1.6464 | 243500 | 0.4785 | - | - | - |
| 1.6470 | 243600 | 0.4538 | - | - | - |
| 1.6477 | 243700 | 0.4474 | - | - | - |
| 1.6484 | 243800 | 0.468 | - | - | - |
| 1.6491 | 243900 | 0.4782 | - | - | - |
| 1.6498 | 244000 | 0.4909 | - | - | - |
| 1.6504 | 244100 | 0.4588 | - | - | - |
| 1.6511 | 244200 | 0.4601 | - | - | - |
| 1.6518 | 244300 | 0.4636 | - | - | - |
| 1.6525 | 244400 | 0.4555 | - | - | - |
| 1.6531 | 244500 | 0.4752 | - | - | - |
| 1.6538 | 244600 | 0.4428 | - | - | - |
| 1.6545 | 244700 | 0.5098 | - | - | - |
| 1.6552 | 244800 | 0.4214 | - | - | - |
| 1.6558 | 244900 | 0.4709 | - | - | - |
| 1.6565 | 245000 | 0.4452 | 0.5253 | 0.7637 | - |
| 1.6572 | 245100 | 0.4678 | - | - | - |
| 1.6579 | 245200 | 0.4759 | - | - | - |
| 1.6585 | 245300 | 0.4877 | - | - | - |
| 1.6592 | 245400 | 0.4263 | - | - | - |
| 1.6599 | 245500 | 0.4286 | - | - | - |
| 1.6606 | 245600 | 0.4847 | - | - | - |
| 1.6612 | 245700 | 0.4414 | - | - | - |
| 1.6619 | 245800 | 0.4771 | - | - | - |
| 1.6626 | 245900 | 0.4356 | - | - | - |
| 1.6633 | 246000 | 0.4591 | - | - | - |
| 1.6640 | 246100 | 0.4132 | - | - | - |
| 1.6646 | 246200 | 0.4585 | - | - | - |
| 1.6653 | 246300 | 0.484 | - | - | - |
| 1.6660 | 246400 | 0.4346 | - | - | - |
| 1.6667 | 246500 | 0.4384 | - | - | - |
| 1.6673 | 246600 | 0.4829 | - | - | - |
| 1.6680 | 246700 | 0.4508 | - | - | - |
| 1.6687 | 246800 | 0.4368 | - | - | - |
| 1.6694 | 246900 | 0.4608 | - | - | - |
| 1.6700 | 247000 | 0.4528 | - | - | - |
| 1.6707 | 247100 | 0.449 | - | - | - |
| 1.6714 | 247200 | 0.4572 | - | - | - |
| 1.6721 | 247300 | 0.4757 | - | - | - |
| 1.6727 | 247400 | 0.4626 | - | - | - |
| 1.6734 | 247500 | 0.4839 | - | - | - |
| 1.6741 | 247600 | 0.465 | - | - | - |
| 1.6748 | 247700 | 0.4427 | - | - | - |
| 1.6754 | 247800 | 0.4216 | - | - | - |
| 1.6761 | 247900 | 0.5065 | - | - | - |
| 1.6768 | 248000 | 0.4899 | - | - | - |
| 1.6775 | 248100 | 0.4554 | - | - | - |
| 1.6781 | 248200 | 0.4244 | - | - | - |
| 1.6788 | 248300 | 0.4889 | - | - | - |
| 1.6795 | 248400 | 0.5147 | - | - | - |
| 1.6802 | 248500 | 0.4877 | - | - | - |
| 1.6809 | 248600 | 0.4626 | - | - | - |
| 1.6815 | 248700 | 0.4391 | - | - | - |
| 1.6822 | 248800 | 0.4556 | - | - | - |
| 1.6829 | 248900 | 0.4703 | - | - | - |
| 1.6836 | 249000 | 0.4428 | - | - | - |
| 1.6842 | 249100 | 0.4623 | - | - | - |
| 1.6849 | 249200 | 0.4512 | - | - | - |
| 1.6856 | 249300 | 0.4828 | - | - | - |
| 1.6863 | 249400 | 0.4712 | - | - | - |
| 1.6869 | 249500 | 0.4331 | - | - | - |
| 1.6876 | 249600 | 0.4554 | - | - | - |
| 1.6883 | 249700 | 0.501 | - | - | - |
| 1.6890 | 249800 | 0.5304 | - | - | - |
| 1.6896 | 249900 | 0.4416 | - | - | - |
| 1.6903 | 250000 | 0.4661 | 0.5317 | 0.7661 | - |
| 1.6910 | 250100 | 0.4625 | - | - | - |
| 1.6917 | 250200 | 0.4846 | - | - | - |
| 1.6923 | 250300 | 0.4077 | - | - | - |
| 1.6930 | 250400 | 0.44 | - | - | - |
| 1.6937 | 250500 | 0.4667 | - | - | - |
| 1.6944 | 250600 | 0.4376 | - | - | - |
| 1.6951 | 250700 | 0.4977 | - | - | - |
| 1.6957 | 250800 | 0.4818 | - | - | - |
| 1.6964 | 250900 | 0.466 | - | - | - |
| 1.6971 | 251000 | 0.4095 | - | - | - |
| 1.6978 | 251100 | 0.458 | - | - | - |
| 1.6984 | 251200 | 0.4152 | - | - | - |
| 1.6991 | 251300 | 0.4536 | - | - | - |
| 1.6998 | 251400 | 0.4464 | - | - | - |
| 1.7005 | 251500 | 0.4732 | - | - | - |
| 1.7011 | 251600 | 0.4769 | - | - | - |
| 1.7018 | 251700 | 0.4576 | - | - | - |
| 1.7025 | 251800 | 0.4625 | - | - | - |
| 1.7032 | 251900 | 0.4901 | - | - | - |
| 1.7038 | 252000 | 0.405 | - | - | - |
| 1.7045 | 252100 | 0.4638 | - | - | - |
| 1.7052 | 252200 | 0.4445 | - | - | - |
| 1.7059 | 252300 | 0.432 | - | - | - |
| 1.7065 | 252400 | 0.4725 | - | - | - |
| 1.7072 | 252500 | 0.4271 | - | - | - |
| 1.7079 | 252600 | 0.4432 | - | - | - |
| 1.7086 | 252700 | 0.4594 | - | - | - |
| 1.7093 | 252800 | 0.4684 | - | - | - |
| 1.7099 | 252900 | 0.4413 | - | - | - |
| 1.7106 | 253000 | 0.4387 | - | - | - |
| 1.7113 | 253100 | 0.4531 | - | - | - |
| 1.7120 | 253200 | 0.4175 | - | - | - |
| 1.7126 | 253300 | 0.4827 | - | - | - |
| 1.7133 | 253400 | 0.4693 | - | - | - |
| 1.7140 | 253500 | 0.3994 | - | - | - |
| 1.7147 | 253600 | 0.4315 | - | - | - |
| 1.7153 | 253700 | 0.4678 | - | - | - |
| 1.7160 | 253800 | 0.4232 | - | - | - |
| 1.7167 | 253900 | 0.4582 | - | - | - |
| 1.7174 | 254000 | 0.4659 | - | - | - |
| 1.7180 | 254100 | 0.471 | - | - | - |
| 1.7187 | 254200 | 0.4212 | - | - | - |
| 1.7194 | 254300 | 0.5232 | - | - | - |
| 1.7201 | 254400 | 0.4563 | - | - | - |
| 1.7207 | 254500 | 0.4624 | - | - | - |
| 1.7214 | 254600 | 0.4454 | - | - | - |
| 1.7221 | 254700 | 0.4658 | - | - | - |
| 1.7228 | 254800 | 0.4783 | - | - | - |
| 1.7235 | 254900 | 0.4557 | - | - | - |
| 1.7241 | 255000 | 0.4349 | 0.5338 | 0.7631 | - |
| 1.7248 | 255100 | 0.4425 | - | - | - |
| 1.7255 | 255200 | 0.4169 | - | - | - |
| 1.7262 | 255300 | 0.4647 | - | - | - |
| 1.7268 | 255400 | 0.4266 | - | - | - |
| 1.7275 | 255500 | 0.4864 | - | - | - |
| 1.7282 | 255600 | 0.4499 | - | - | - |
| 1.7289 | 255700 | 0.4617 | - | - | - |
| 1.7295 | 255800 | 0.4296 | - | - | - |
| 1.7302 | 255900 | 0.4446 | - | - | - |
| 1.7309 | 256000 | 0.4519 | - | - | - |
| 1.7316 | 256100 | 0.4387 | - | - | - |
| 1.7322 | 256200 | 0.4492 | - | - | - |
| 1.7329 | 256300 | 0.4692 | - | - | - |
| 1.7336 | 256400 | 0.4881 | - | - | - |
| 1.7343 | 256500 | 0.4518 | - | - | - |
| 1.7349 | 256600 | 0.499 | - | - | - |
| 1.7356 | 256700 | 0.4207 | - | - | - |
| 1.7363 | 256800 | 0.4467 | - | - | - |
| 1.7370 | 256900 | 0.493 | - | - | - |
| 1.7376 | 257000 | 0.4235 | - | - | - |
| 1.7383 | 257100 | 0.4495 | - | - | - |
| 1.7390 | 257200 | 0.4806 | - | - | - |
| 1.7397 | 257300 | 0.4228 | - | - | - |
| 1.7404 | 257400 | 0.4826 | - | - | - |
| 1.7410 | 257500 | 0.4556 | - | - | - |
| 1.7417 | 257600 | 0.4426 | - | - | - |
| 1.7424 | 257700 | 0.4341 | - | - | - |
| 1.7431 | 257800 | 0.4359 | - | - | - |
| 1.7437 | 257900 | 0.454 | - | - | - |
| 1.7444 | 258000 | 0.4675 | - | - | - |
| 1.7451 | 258100 | 0.4077 | - | - | - |
| 1.7458 | 258200 | 0.4628 | - | - | - |
| 1.7464 | 258300 | 0.4641 | - | - | - |
| 1.7471 | 258400 | 0.4553 | - | - | - |
| 1.7478 | 258500 | 0.4568 | - | - | - |
| 1.7485 | 258600 | 0.4537 | - | - | - |
| 1.7491 | 258700 | 0.4504 | - | - | - |
| 1.7498 | 258800 | 0.4367 | - | - | - |
| 1.7505 | 258900 | 0.4413 | - | - | - |
| 1.7512 | 259000 | 0.43 | - | - | - |
| 1.7518 | 259100 | 0.4355 | - | - | - |
| 1.7525 | 259200 | 0.422 | - | - | - |
| 1.7532 | 259300 | 0.4069 | - | - | - |
| 1.7539 | 259400 | 0.402 | - | - | - |
| 1.7546 | 259500 | 0.4491 | - | - | - |
| 1.7552 | 259600 | 0.4964 | - | - | - |
| 1.7559 | 259700 | 0.4047 | - | - | - |
| 1.7566 | 259800 | 0.3931 | - | - | - |
| 1.7573 | 259900 | 0.4079 | - | - | - |
| 1.7579 | 260000 | 0.4314 | 0.5351 | 0.7618 | - |
| 1.7586 | 260100 | 0.4477 | - | - | - |
| 1.7593 | 260200 | 0.4434 | - | - | - |
| 1.7600 | 260300 | 0.4618 | - | - | - |
| 1.7606 | 260400 | 0.4529 | - | - | - |
| 1.7613 | 260500 | 0.4321 | - | - | - |
| 1.7620 | 260600 | 0.4381 | - | - | - |
| 1.7627 | 260700 | 0.4704 | - | - | - |
| 1.7633 | 260800 | 0.4405 | - | - | - |
| 1.7640 | 260900 | 0.476 | - | - | - |
| 1.7647 | 261000 | 0.4275 | - | - | - |
| 1.7654 | 261100 | 0.4359 | - | - | - |
| 1.7660 | 261200 | 0.4428 | - | - | - |
| 1.7667 | 261300 | 0.4994 | - | - | - |
| 1.7674 | 261400 | 0.4338 | - | - | - |
| 1.7681 | 261500 | 0.4182 | - | - | - |
| 1.7688 | 261600 | 0.474 | - | - | - |
| 1.7694 | 261700 | 0.4998 | - | - | - |
| 1.7701 | 261800 | 0.4428 | - | - | - |
| 1.7708 | 261900 | 0.4493 | - | - | - |
| 1.7715 | 262000 | 0.4438 | - | - | - |
| 1.7721 | 262100 | 0.4262 | - | - | - |
| 1.7728 | 262200 | 0.4951 | - | - | - |
| 1.7735 | 262300 | 0.4052 | - | - | - |
| 1.7742 | 262400 | 0.4559 | - | - | - |
| 1.7748 | 262500 | 0.4356 | - | - | - |
| 1.7755 | 262600 | 0.4665 | - | - | - |
| 1.7762 | 262700 | 0.4272 | - | - | - |
| 1.7769 | 262800 | 0.4536 | - | - | - |
| 1.7775 | 262900 | 0.451 | - | - | - |
| 1.7782 | 263000 | 0.4425 | - | - | - |
| 1.7789 | 263100 | 0.4601 | - | - | - |
| 1.7796 | 263200 | 0.477 | - | - | - |
| 1.7802 | 263300 | 0.4763 | - | - | - |
| 1.7809 | 263400 | 0.4309 | - | - | - |
| 1.7816 | 263500 | 0.4302 | - | - | - |
| 1.7823 | 263600 | 0.409 | - | - | - |
| 1.7829 | 263700 | 0.4719 | - | - | - |
| 1.7836 | 263800 | 0.3989 | - | - | - |
| 1.7843 | 263900 | 0.4616 | - | - | - |
| 1.7850 | 264000 | 0.4738 | - | - | - |
| 1.7857 | 264100 | 0.467 | - | - | - |
| 1.7863 | 264200 | 0.4863 | - | - | - |
| 1.7870 | 264300 | 0.5005 | - | - | - |
| 1.7877 | 264400 | 0.4274 | - | - | - |
| 1.7884 | 264500 | 0.4274 | - | - | - |
| 1.7890 | 264600 | 0.4403 | - | - | - |
| 1.7897 | 264700 | 0.3987 | - | - | - |
| 1.7904 | 264800 | 0.4381 | - | - | - |
| 1.7911 | 264900 | 0.4345 | - | - | - |
| 1.7917 | 265000 | 0.4098 | 0.5240 | 0.7629 | - |
| 1.7924 | 265100 | 0.4502 | - | - | - |
| 1.7931 | 265200 | 0.4727 | - | - | - |
| 1.7938 | 265300 | 0.4093 | - | - | - |
| 1.7944 | 265400 | 0.4555 | - | - | - |
| 1.7951 | 265500 | 0.47 | - | - | - |
| 1.7958 | 265600 | 0.4633 | - | - | - |
| 1.7965 | 265700 | 0.4531 | - | - | - |
| 1.7971 | 265800 | 0.4135 | - | - | - |
| 1.7978 | 265900 | 0.4698 | - | - | - |
| 1.7985 | 266000 | 0.4512 | - | - | - |
| 1.7992 | 266100 | 0.4259 | - | - | - |
| 1.7999 | 266200 | 0.4375 | - | - | - |
| 1.8005 | 266300 | 0.5042 | - | - | - |
| 1.8012 | 266400 | 0.4725 | - | - | - |
| 1.8019 | 266500 | 0.4517 | - | - | - |
| 1.8026 | 266600 | 0.4508 | - | - | - |
| 1.8032 | 266700 | 0.4553 | - | - | - |
| 1.8039 | 266800 | 0.4305 | - | - | - |
| 1.8046 | 266900 | 0.4599 | - | - | - |
| 1.8053 | 267000 | 0.4408 | - | - | - |
| 1.8059 | 267100 | 0.4377 | - | - | - |
| 1.8066 | 267200 | 0.5151 | - | - | - |
| 1.8073 | 267300 | 0.4088 | - | - | - |
| 1.8080 | 267400 | 0.4464 | - | - | - |
| 1.8086 | 267500 | 0.4165 | - | - | - |
| 1.8093 | 267600 | 0.4189 | - | - | - |
| 1.8100 | 267700 | 0.4611 | - | - | - |
| 1.8107 | 267800 | 0.4116 | - | - | - |
| 1.8113 | 267900 | 0.4228 | - | - | - |
| 1.8120 | 268000 | 0.4124 | - | - | - |
| 1.8127 | 268100 | 0.4254 | - | - | - |
| 1.8134 | 268200 | 0.5178 | - | - | - |
| 1.8141 | 268300 | 0.4767 | - | - | - |
| 1.8147 | 268400 | 0.4132 | - | - | - |
| 1.8154 | 268500 | 0.4613 | - | - | - |
| 1.8161 | 268600 | 0.4421 | - | - | - |
| 1.8168 | 268700 | 0.4615 | - | - | - |
| 1.8174 | 268800 | 0.4731 | - | - | - |
| 1.8181 | 268900 | 0.4604 | - | - | - |
| 1.8188 | 269000 | 0.455 | - | - | - |
| 1.8195 | 269100 | 0.4539 | - | - | - |
| 1.8201 | 269200 | 0.423 | - | - | - |
| 1.8208 | 269300 | 0.4408 | - | - | - |
| 1.8215 | 269400 | 0.4341 | - | - | - |
| 1.8222 | 269500 | 0.4578 | - | - | - |
| 1.8228 | 269600 | 0.4232 | - | - | - |
| 1.8235 | 269700 | 0.4091 | - | - | - |
| 1.8242 | 269800 | 0.4371 | - | - | - |
| 1.8249 | 269900 | 0.3723 | - | - | - |
| 1.8255 | 270000 | 0.4409 | 0.5281 | 0.7677 | - |
| 1.8262 | 270100 | 0.4741 | - | - | - |
| 1.8269 | 270200 | 0.412 | - | - | - |
| 1.8276 | 270300 | 0.4721 | - | - | - |
| 1.8282 | 270400 | 0.4463 | - | - | - |
| 1.8289 | 270500 | 0.4056 | - | - | - |
| 1.8296 | 270600 | 0.4471 | - | - | - |
| 1.8303 | 270700 | 0.4514 | - | - | - |
| 1.8310 | 270800 | 0.4326 | - | - | - |
| 1.8316 | 270900 | 0.4773 | - | - | - |
| 1.8323 | 271000 | 0.4699 | - | - | - |
| 1.8330 | 271100 | 0.4608 | - | - | - |
| 1.8337 | 271200 | 0.4251 | - | - | - |
| 1.8343 | 271300 | 0.4064 | - | - | - |
| 1.8350 | 271400 | 0.4326 | - | - | - |
| 1.8357 | 271500 | 0.4474 | - | - | - |
| 1.8364 | 271600 | 0.4519 | - | - | - |
| 1.8370 | 271700 | 0.425 | - | - | - |
| 1.8377 | 271800 | 0.4424 | - | - | - |
| 1.8384 | 271900 | 0.4984 | - | - | - |
| 1.8391 | 272000 | 0.4578 | - | - | - |
| 1.8397 | 272100 | 0.4309 | - | - | - |
| 1.8404 | 272200 | 0.4433 | - | - | - |
| 1.8411 | 272300 | 0.4621 | - | - | - |
| 1.8418 | 272400 | 0.4785 | - | - | - |
| 1.8424 | 272500 | 0.43 | - | - | - |
| 1.8431 | 272600 | 0.4519 | - | - | - |
| 1.8438 | 272700 | 0.4306 | - | - | - |
| 1.8445 | 272800 | 0.4259 | - | - | - |
| 1.8452 | 272900 | 0.4359 | - | - | - |
| 1.8458 | 273000 | 0.4489 | - | - | - |
| 1.8465 | 273100 | 0.4255 | - | - | - |
| 1.8472 | 273200 | 0.4681 | - | - | - |
| 1.8479 | 273300 | 0.4031 | - | - | - |
| 1.8485 | 273400 | 0.4154 | - | - | - |
| 1.8492 | 273500 | 0.444 | - | - | - |
| 1.8499 | 273600 | 0.467 | - | - | - |
| 1.8506 | 273700 | 0.4442 | - | - | - |
| 1.8512 | 273800 | 0.4408 | - | - | - |
| 1.8519 | 273900 | 0.459 | - | - | - |
| 1.8526 | 274000 | 0.429 | - | - | - |
| 1.8533 | 274100 | 0.4476 | - | - | - |
| 1.8539 | 274200 | 0.4554 | - | - | - |
| 1.8546 | 274300 | 0.427 | - | - | - |
| 1.8553 | 274400 | 0.4367 | - | - | - |
| 1.8560 | 274500 | 0.4396 | - | - | - |
| 1.8566 | 274600 | 0.3952 | - | - | - |
| 1.8573 | 274700 | 0.444 | - | - | - |
| 1.8580 | 274800 | 0.4539 | - | - | - |
| 1.8587 | 274900 | 0.4407 | - | - | - |
| 1.8594 | 275000 | 0.4248 | 0.5281 | 0.7640 | - |
| 1.8600 | 275100 | 0.4386 | - | - | - |
| 1.8607 | 275200 | 0.4254 | - | - | - |
| 1.8614 | 275300 | 0.3987 | - | - | - |
| 1.8621 | 275400 | 0.4319 | - | - | - |
| 1.8627 | 275500 | 0.4191 | - | - | - |
| 1.8634 | 275600 | 0.4446 | - | - | - |
| 1.8641 | 275700 | 0.5099 | - | - | - |
| 1.8648 | 275800 | 0.3804 | - | - | - |
| 1.8654 | 275900 | 0.4248 | - | - | - |
| 1.8661 | 276000 | 0.4485 | - | - | - |
| 1.8668 | 276100 | 0.4388 | - | - | - |
| 1.8675 | 276200 | 0.4131 | - | - | - |
| 1.8681 | 276300 | 0.4515 | - | - | - |
| 1.8688 | 276400 | 0.4089 | - | - | - |
| 1.8695 | 276500 | 0.4571 | - | - | - |
| 1.8702 | 276600 | 0.4156 | - | - | - |
| 1.8708 | 276700 | 0.4005 | - | - | - |
| 1.8715 | 276800 | 0.388 | - | - | - |
| 1.8722 | 276900 | 0.4257 | - | - | - |
| 1.8729 | 277000 | 0.4673 | - | - | - |
| 1.8736 | 277100 | 0.4639 | - | - | - |
| 1.8742 | 277200 | 0.3981 | - | - | - |
| 1.8749 | 277300 | 0.4139 | - | - | - |
| 1.8756 | 277400 | 0.4667 | - | - | - |
| 1.8763 | 277500 | 0.4481 | - | - | - |
| 1.8769 | 277600 | 0.3864 | - | - | - |
| 1.8776 | 277700 | 0.4507 | - | - | - |
| 1.8783 | 277800 | 0.479 | - | - | - |
| 1.8790 | 277900 | 0.3917 | - | - | - |
| 1.8796 | 278000 | 0.4305 | - | - | - |
| 1.8803 | 278100 | 0.4063 | - | - | - |
| 1.8810 | 278200 | 0.4432 | - | - | - |
| 1.8817 | 278300 | 0.4194 | - | - | - |
| 1.8823 | 278400 | 0.4427 | - | - | - |
| 1.8830 | 278500 | 0.4273 | - | - | - |
| 1.8837 | 278600 | 0.385 | - | - | - |
| 1.8844 | 278700 | 0.4182 | - | - | - |
| 1.8850 | 278800 | 0.3941 | - | - | - |
| 1.8857 | 278900 | 0.4495 | - | - | - |
| 1.8864 | 279000 | 0.4479 | - | - | - |
| 1.8871 | 279100 | 0.4293 | - | - | - |
| 1.8877 | 279200 | 0.4556 | - | - | - |
| 1.8884 | 279300 | 0.413 | - | - | - |
| 1.8891 | 279400 | 0.4027 | - | - | - |
| 1.8898 | 279500 | 0.457 | - | - | - |
| 1.8905 | 279600 | 0.4444 | - | - | - |
| 1.8911 | 279700 | 0.4073 | - | - | - |
| 1.8918 | 279800 | 0.444 | - | - | - |
| 1.8925 | 279900 | 0.4101 | - | - | - |
| 1.8932 | 280000 | 0.4268 | 0.5230 | 0.7639 | - |
| 1.8938 | 280100 | 0.4286 | - | - | - |
| 1.8945 | 280200 | 0.4589 | - | - | - |
| 1.8952 | 280300 | 0.4249 | - | - | - |
| 1.8959 | 280400 | 0.4298 | - | - | - |
| 1.8965 | 280500 | 0.4286 | - | - | - |
| 1.8972 | 280600 | 0.4373 | - | - | - |
| 1.8979 | 280700 | 0.4208 | - | - | - |
| 1.8986 | 280800 | 0.4003 | - | - | - |
| 1.8992 | 280900 | 0.4227 | - | - | - |
| 1.8999 | 281000 | 0.4324 | - | - | - |
| 1.9006 | 281100 | 0.4388 | - | - | - |
| 1.9013 | 281200 | 0.4292 | - | - | - |
| 1.9019 | 281300 | 0.427 | - | - | - |
| 1.9026 | 281400 | 0.4535 | - | - | - |
| 1.9033 | 281500 | 0.407 | - | - | - |
| 1.9040 | 281600 | 0.4438 | - | - | - |
| 1.9047 | 281700 | 0.4194 | - | - | - |
| 1.9053 | 281800 | 0.4331 | - | - | - |
| 1.9060 | 281900 | 0.4341 | - | - | - |
| 1.9067 | 282000 | 0.4829 | - | - | - |
| 1.9074 | 282100 | 0.417 | - | - | - |
| 1.9080 | 282200 | 0.4421 | - | - | - |
| 1.9087 | 282300 | 0.4868 | - | - | - |
| 1.9094 | 282400 | 0.465 | - | - | - |
| 1.9101 | 282500 | 0.4357 | - | - | - |
| 1.9107 | 282600 | 0.3994 | - | - | - |
| 1.9114 | 282700 | 0.4579 | - | - | - |
| 1.9121 | 282800 | 0.4337 | - | - | - |
| 1.9128 | 282900 | 0.4628 | - | - | - |
| 1.9134 | 283000 | 0.4021 | - | - | - |
| 1.9141 | 283100 | 0.3979 | - | - | - |
| 1.9148 | 283200 | 0.4485 | - | - | - |
| 1.9155 | 283300 | 0.4469 | - | - | - |
| 1.9161 | 283400 | 0.4323 | - | - | - |
| 1.9168 | 283500 | 0.4509 | - | - | - |
| 1.9175 | 283600 | 0.3932 | - | - | - |
| 1.9182 | 283700 | 0.4433 | - | - | - |
| 1.9189 | 283800 | 0.4608 | - | - | - |
| 1.9195 | 283900 | 0.4664 | - | - | - |
| 1.9202 | 284000 | 0.4297 | - | - | - |
| 1.9209 | 284100 | 0.4383 | - | - | - |
| 1.9216 | 284200 | 0.3961 | - | - | - |
| 1.9222 | 284300 | 0.4311 | - | - | - |
| 1.9229 | 284400 | 0.4525 | - | - | - |
| 1.9236 | 284500 | 0.3962 | - | - | - |
| 1.9243 | 284600 | 0.4037 | - | - | - |
| 1.9249 | 284700 | 0.4356 | - | - | - |
| 1.9256 | 284800 | 0.4548 | - | - | - |
| 1.9263 | 284900 | 0.4386 | - | - | - |
| 1.9270 | 285000 | 0.4011 | 0.5227 | 0.7744 | - |
| 1.9276 | 285100 | 0.4305 | - | - | - |
| 1.9283 | 285200 | 0.4543 | - | - | - |
| 1.9290 | 285300 | 0.4194 | - | - | - |
| 1.9297 | 285400 | 0.4191 | - | - | - |
| 1.9303 | 285500 | 0.3797 | - | - | - |
| 1.9310 | 285600 | 0.4355 | - | - | - |
| 1.9317 | 285700 | 0.4265 | - | - | - |
| 1.9324 | 285800 | 0.4184 | - | - | - |
| 1.9330 | 285900 | 0.4458 | - | - | - |
| 1.9337 | 286000 | 0.4158 | - | - | - |
| 1.9344 | 286100 | 0.4428 | - | - | - |
| 1.9351 | 286200 | 0.48 | - | - | - |
| 1.9358 | 286300 | 0.4347 | - | - | - |
| 1.9364 | 286400 | 0.4158 | - | - | - |
| 1.9371 | 286500 | 0.439 | - | - | - |
| 1.9378 | 286600 | 0.4389 | - | - | - |
| 1.9385 | 286700 | 0.421 | - | - | - |
| 1.9391 | 286800 | 0.4327 | - | - | - |
| 1.9398 | 286900 | 0.4548 | - | - | - |
| 1.9405 | 287000 | 0.411 | - | - | - |
| 1.9412 | 287100 | 0.4257 | - | - | - |
| 1.9418 | 287200 | 0.4002 | - | - | - |
| 1.9425 | 287300 | 0.4075 | - | - | - |
| 1.9432 | 287400 | 0.4437 | - | - | - |
| 1.9439 | 287500 | 0.3973 | - | - | - |
| 1.9445 | 287600 | 0.4458 | - | - | - |
| 1.9452 | 287700 | 0.3918 | - | - | - |
| 1.9459 | 287800 | 0.4036 | - | - | - |
| 1.9466 | 287900 | 0.3801 | - | - | - |
| 1.9472 | 288000 | 0.4574 | - | - | - |
| 1.9479 | 288100 | 0.4534 | - | - | - |
| 1.9486 | 288200 | 0.401 | - | - | - |
| 1.9493 | 288300 | 0.4324 | - | - | - |
| 1.9500 | 288400 | 0.4558 | - | - | - |
| 1.9506 | 288500 | 0.4266 | - | - | - |
| 1.9513 | 288600 | 0.4431 | - | - | - |
| 1.9520 | 288700 | 0.4412 | - | - | - |
| 1.9527 | 288800 | 0.4375 | - | - | - |
| 1.9533 | 288900 | 0.4315 | - | - | - |
| 1.9540 | 289000 | 0.4364 | - | - | - |
| 1.9547 | 289100 | 0.4571 | - | - | - |
| 1.9554 | 289200 | 0.3804 | - | - | - |
| 1.9560 | 289300 | 0.4015 | - | - | - |
| 1.9567 | 289400 | 0.4246 | - | - | - |
| 1.9574 | 289500 | 0.4271 | - | - | - |
| 1.9581 | 289600 | 0.4617 | - | - | - |
| 1.9587 | 289700 | 0.487 | - | - | - |
| 1.9594 | 289800 | 0.4578 | - | - | - |
| 1.9601 | 289900 | 0.4246 | - | - | - |
| 1.9608 | 290000 | 0.4446 | 0.5157 | 0.7655 | - |
| 1.9614 | 290100 | 0.4153 | - | - | - |
| 1.9621 | 290200 | 0.3869 | - | - | - |
| 1.9628 | 290300 | 0.4247 | - | - | - |
| 1.9635 | 290400 | 0.4867 | - | - | - |
| 1.9642 | 290500 | 0.4609 | - | - | - |
| 1.9648 | 290600 | 0.3966 | - | - | - |
| 1.9655 | 290700 | 0.4386 | - | - | - |
| 1.9662 | 290800 | 0.4427 | - | - | - |
| 1.9669 | 290900 | 0.4297 | - | - | - |
| 1.9675 | 291000 | 0.4346 | - | - | - |
| 1.9682 | 291100 | 0.468 | - | - | - |
| 1.9689 | 291200 | 0.4293 | - | - | - |
| 1.9696 | 291300 | 0.4852 | - | - | - |
| 1.9702 | 291400 | 0.4483 | - | - | - |
| 1.9709 | 291500 | 0.411 | - | - | - |
| 1.9716 | 291600 | 0.4304 | - | - | - |
| 1.9723 | 291700 | 0.4375 | - | - | - |
| 1.9729 | 291800 | 0.4095 | - | - | - |
| 1.9736 | 291900 | 0.4472 | - | - | - |
| 1.9743 | 292000 | 0.4483 | - | - | - |
| 1.9750 | 292100 | 0.4129 | - | - | - |
| 1.9756 | 292200 | 0.4491 | - | - | - |
| 1.9763 | 292300 | 0.4207 | - | - | - |
| 1.9770 | 292400 | 0.4899 | - | - | - |
| 1.9777 | 292500 | 0.4511 | - | - | - |
| 1.9784 | 292600 | 0.4087 | - | - | - |
| 1.9790 | 292700 | 0.4077 | - | - | - |
| 1.9797 | 292800 | 0.4228 | - | - | - |
| 1.9804 | 292900 | 0.4071 | - | - | - |
| 1.9811 | 293000 | 0.4288 | - | - | - |
| 1.9817 | 293100 | 0.4238 | - | - | - |
| 1.9824 | 293200 | 0.4348 | - | - | - |
| 1.9831 | 293300 | 0.4318 | - | - | - |
| 1.9838 | 293400 | 0.489 | - | - | - |
| 1.9844 | 293500 | 0.4077 | - | - | - |
| 1.9851 | 293600 | 0.4265 | - | - | - |
| 1.9858 | 293700 | 0.4415 | - | - | - |
| 1.9865 | 293800 | 0.4488 | - | - | - |
| 1.9871 | 293900 | 0.4495 | - | - | - |
| 1.9878 | 294000 | 0.4473 | - | - | - |
| 1.9885 | 294100 | 0.4289 | - | - | - |
| 1.9892 | 294200 | 0.4017 | - | - | - |
| 1.9898 | 294300 | 0.5058 | - | - | - |
| 1.9905 | 294400 | 0.4392 | - | - | - |
| 1.9912 | 294500 | 0.4715 | - | - | - |
| 1.9919 | 294600 | 0.4536 | - | - | - |
| 1.9925 | 294700 | 0.4095 | - | - | - |
| 1.9932 | 294800 | 0.4449 | - | - | - |
| 1.9939 | 294900 | 0.4382 | - | - | - |
| 1.9946 | 295000 | 0.3763 | 0.5282 | 0.7654 | - |
| 1.9953 | 295100 | 0.4293 | - | - | - |
| 1.9959 | 295200 | 0.4237 | - | - | - |
| 1.9966 | 295300 | 0.4238 | - | - | - |
| 1.9973 | 295400 | 0.4289 | - | - | - |
| 1.9980 | 295500 | 0.4223 | - | - | - |
| 1.9986 | 295600 | 0.425 | - | - | - |
| 1.9993 | 295700 | 0.4192 | - | - | - |
| 2.0000 | 295800 | 0.4516 | - | - | - |
| 2.0007 | 295900 | 0.4469 | - | - | - |
| 2.0013 | 296000 | 0.407 | - | - | - |
| 2.0020 | 296100 | 0.4458 | - | - | - |
| 2.0027 | 296200 | 0.4159 | - | - | - |
| 2.0034 | 296300 | 0.4025 | - | - | - |
| 2.0040 | 296400 | 0.418 | - | - | - |
| 2.0047 | 296500 | 0.4382 | - | - | - |
| 2.0054 | 296600 | 0.3907 | - | - | - |
| 2.0061 | 296700 | 0.4566 | - | - | - |
| 2.0067 | 296800 | 0.4067 | - | - | - |
| 2.0074 | 296900 | 0.4219 | - | - | - |
| 2.0081 | 297000 | 0.3557 | - | - | - |
| 2.0088 | 297100 | 0.4436 | - | - | - |
| 2.0095 | 297200 | 0.4457 | - | - | - |
| 2.0101 | 297300 | 0.4133 | - | - | - |
| 2.0108 | 297400 | 0.3949 | - | - | - |
| 2.0115 | 297500 | 0.4555 | - | - | - |
| 2.0122 | 297600 | 0.4052 | - | - | - |
| 2.0128 | 297700 | 0.3796 | - | - | - |
| 2.0135 | 297800 | 0.4332 | - | - | - |
| 2.0142 | 297900 | 0.444 | - | - | - |
| 2.0149 | 298000 | 0.4262 | - | - | - |
| 2.0155 | 298100 | 0.4136 | - | - | - |
| 2.0162 | 298200 | 0.443 | - | - | - |
| 2.0169 | 298300 | 0.4485 | - | - | - |
| 2.0176 | 298400 | 0.4267 | - | - | - |
| 2.0182 | 298500 | 0.409 | - | - | - |
| 2.0189 | 298600 | 0.4439 | - | - | - |
| 2.0196 | 298700 | 0.4479 | - | - | - |
| 2.0203 | 298800 | 0.3977 | - | - | - |
| 2.0209 | 298900 | 0.3977 | - | - | - |
| 2.0216 | 299000 | 0.4399 | - | - | - |
| 2.0223 | 299100 | 0.4667 | - | - | - |
| 2.0230 | 299200 | 0.4016 | - | - | - |
| 2.0237 | 299300 | 0.4377 | - | - | - |
| 2.0243 | 299400 | 0.3961 | - | - | - |
| 2.0250 | 299500 | 0.3777 | - | - | - |
| 2.0257 | 299600 | 0.4515 | - | - | - |
| 2.0264 | 299700 | 0.4365 | - | - | - |
| 2.0270 | 299800 | 0.396 | - | - | - |
| 2.0277 | 299900 | 0.4141 | - | - | - |
| 2.0284 | 300000 | 0.3807 | 0.5224 | 0.7684 | - |
| 2.0291 | 300100 | 0.4437 | - | - | - |
| 2.0297 | 300200 | 0.4198 | - | - | - |
| 2.0304 | 300300 | 0.4118 | - | - | - |
| 2.0311 | 300400 | 0.429 | - | - | - |
| 2.0318 | 300500 | 0.4622 | - | - | - |
| 2.0324 | 300600 | 0.4205 | - | - | - |
| 2.0331 | 300700 | 0.3693 | - | - | - |
| 2.0338 | 300800 | 0.4434 | - | - | - |
| 2.0345 | 300900 | 0.4213 | - | - | - |
| 2.0351 | 301000 | 0.4038 | - | - | - |
| 2.0358 | 301100 | 0.4501 | - | - | - |
| 2.0365 | 301200 | 0.4485 | - | - | - |
| 2.0372 | 301300 | 0.4327 | - | - | - |
| 2.0378 | 301400 | 0.4234 | - | - | - |
| 2.0385 | 301500 | 0.4047 | - | - | - |
| 2.0392 | 301600 | 0.4492 | - | - | - |
| 2.0399 | 301700 | 0.4241 | - | - | - |
| 2.0406 | 301800 | 0.3889 | - | - | - |
| 2.0412 | 301900 | 0.487 | - | - | - |
| 2.0419 | 302000 | 0.4308 | - | - | - |
| 2.0426 | 302100 | 0.4358 | - | - | - |
| 2.0433 | 302200 | 0.4174 | - | - | - |
| 2.0439 | 302300 | 0.409 | - | - | - |
| 2.0446 | 302400 | 0.4416 | - | - | - |
| 2.0453 | 302500 | 0.3959 | - | - | - |
| 2.0460 | 302600 | 0.4356 | - | - | - |
| 2.0466 | 302700 | 0.4229 | - | - | - |
| 2.0473 | 302800 | 0.3872 | - | - | - |
| 2.0480 | 302900 | 0.4625 | - | - | - |
| 2.0487 | 303000 | 0.4454 | - | - | - |
| 2.0493 | 303100 | 0.4498 | - | - | - |
| 2.0500 | 303200 | 0.3975 | - | - | - |
| 2.0507 | 303300 | 0.4062 | - | - | - |
| 2.0514 | 303400 | 0.4656 | - | - | - |
| 2.0520 | 303500 | 0.4723 | - | - | - |
| 2.0527 | 303600 | 0.4135 | - | - | - |
| 2.0534 | 303700 | 0.3935 | - | - | - |
| 2.0541 | 303800 | 0.4563 | - | - | - |
| 2.0548 | 303900 | 0.4464 | - | - | - |
| 2.0554 | 304000 | 0.4218 | - | - | - |
| 2.0561 | 304100 | 0.4087 | - | - | - |
| 2.0568 | 304200 | 0.3859 | - | - | - |
| 2.0575 | 304300 | 0.4219 | - | - | - |
| 2.0581 | 304400 | 0.415 | - | - | - |
| 2.0588 | 304500 | 0.3951 | - | - | - |
| 2.0595 | 304600 | 0.4004 | - | - | - |
| 2.0602 | 304700 | 0.4075 | - | - | - |
| 2.0608 | 304800 | 0.3995 | - | - | - |
| 2.0615 | 304900 | 0.398 | - | - | - |
| 2.0622 | 305000 | 0.4554 | 0.5321 | 0.7666 | - |
| 2.0629 | 305100 | 0.391 | - | - | - |
| 2.0635 | 305200 | 0.4388 | - | - | - |
| 2.0642 | 305300 | 0.4536 | - | - | - |
| 2.0649 | 305400 | 0.3989 | - | - | - |
| 2.0656 | 305500 | 0.432 | - | - | - |
| 2.0662 | 305600 | 0.4117 | - | - | - |
| 2.0669 | 305700 | 0.4462 | - | - | - |
| 2.0676 | 305800 | 0.4297 | - | - | - |
| 2.0683 | 305900 | 0.4357 | - | - | - |
| 2.0690 | 306000 | 0.418 | - | - | - |
| 2.0696 | 306100 | 0.4303 | - | - | - |
| 2.0703 | 306200 | 0.4426 | - | - | - |
| 2.0710 | 306300 | 0.421 | - | - | - |
| 2.0717 | 306400 | 0.3861 | - | - | - |
| 2.0723 | 306500 | 0.4225 | - | - | - |
| 2.0730 | 306600 | 0.4008 | - | - | - |
| 2.0737 | 306700 | 0.4305 | - | - | - |
| 2.0744 | 306800 | 0.4126 | - | - | - |
| 2.0750 | 306900 | 0.4306 | - | - | - |
| 2.0757 | 307000 | 0.3974 | - | - | - |
| 2.0764 | 307100 | 0.4338 | - | - | - |
| 2.0771 | 307200 | 0.3872 | - | - | - |
| 2.0777 | 307300 | 0.3997 | - | - | - |
| 2.0784 | 307400 | 0.4804 | - | - | - |
| 2.0791 | 307500 | 0.4391 | - | - | - |
| 2.0798 | 307600 | 0.407 | - | - | - |
| 2.0804 | 307700 | 0.4084 | - | - | - |
| 2.0811 | 307800 | 0.4681 | - | - | - |
| 2.0818 | 307900 | 0.4411 | - | - | - |
| 2.0825 | 308000 | 0.3869 | - | - | - |
| 2.0832 | 308100 | 0.3637 | - | - | - |
| 2.0838 | 308200 | 0.4436 | - | - | - |
| 2.0845 | 308300 | 0.3722 | - | - | - |
| 2.0852 | 308400 | 0.3904 | - | - | - |
| 2.0859 | 308500 | 0.3784 | - | - | - |
| 2.0865 | 308600 | 0.425 | - | - | - |
| 2.0872 | 308700 | 0.4123 | - | - | - |
| 2.0879 | 308800 | 0.4148 | - | - | - |
| 2.0886 | 308900 | 0.4038 | - | - | - |
| 2.0892 | 309000 | 0.4086 | - | - | - |
| 2.0899 | 309100 | 0.3961 | - | - | - |
| 2.0906 | 309200 | 0.4136 | - | - | - |
| 2.0913 | 309300 | 0.39 | - | - | - |
| 2.0919 | 309400 | 0.4193 | - | - | - |
| 2.0926 | 309500 | 0.4044 | - | - | - |
| 2.0933 | 309600 | 0.4245 | - | - | - |
| 2.0940 | 309700 | 0.3641 | - | - | - |
| 2.0946 | 309800 | 0.406 | - | - | - |
| 2.0953 | 309900 | 0.3862 | - | - | - |
| 2.0960 | 310000 | 0.3684 | 0.5252 | 0.7740 | - |
| 2.0967 | 310100 | 0.3781 | - | - | - |
| 2.0973 | 310200 | 0.4007 | - | - | - |
| 2.0980 | 310300 | 0.4782 | - | - | - |
| 2.0987 | 310400 | 0.4061 | - | - | - |
| 2.0994 | 310500 | 0.3932 | - | - | - |
| 2.1001 | 310600 | 0.4176 | - | - | - |
| 2.1007 | 310700 | 0.4318 | - | - | - |
| 2.1014 | 310800 | 0.3804 | - | - | - |
| 2.1021 | 310900 | 0.4028 | - | - | - |
| 2.1028 | 311000 | 0.3499 | - | - | - |
| 2.1034 | 311100 | 0.3664 | - | - | - |
| 2.1041 | 311200 | 0.4006 | - | - | - |
| 2.1048 | 311300 | 0.3781 | - | - | - |
| 2.1055 | 311400 | 0.4195 | - | - | - |
| 2.1061 | 311500 | 0.4168 | - | - | - |
| 2.1068 | 311600 | 0.3695 | - | - | - |
| 2.1075 | 311700 | 0.4181 | - | - | - |
| 2.1082 | 311800 | 0.3773 | - | - | - |
| 2.1088 | 311900 | 0.3809 | - | - | - |
| 2.1095 | 312000 | 0.4087 | - | - | - |
| 2.1102 | 312100 | 0.4 | - | - | - |
| 2.1109 | 312200 | 0.4093 | - | - | - |
| 2.1115 | 312300 | 0.4177 | - | - | - |
| 2.1122 | 312400 | 0.3769 | - | - | - |
| 2.1129 | 312500 | 0.384 | - | - | - |
| 2.1136 | 312600 | 0.3989 | - | - | - |
| 2.1143 | 312700 | 0.4194 | - | - | - |
| 2.1149 | 312800 | 0.3889 | - | - | - |
| 2.1156 | 312900 | 0.4164 | - | - | - |
| 2.1163 | 313000 | 0.3601 | - | - | - |
| 2.1170 | 313100 | 0.4029 | - | - | - |
| 2.1176 | 313200 | 0.4404 | - | - | - |
| 2.1183 | 313300 | 0.4007 | - | - | - |
| 2.1190 | 313400 | 0.3832 | - | - | - |
| 2.1197 | 313500 | 0.4195 | - | - | - |
| 2.1203 | 313600 | 0.3591 | - | - | - |
| 2.1210 | 313700 | 0.432 | - | - | - |
| 2.1217 | 313800 | 0.442 | - | - | - |
| 2.1224 | 313900 | 0.4006 | - | - | - |
| 2.1230 | 314000 | 0.3803 | - | - | - |
| 2.1237 | 314100 | 0.3819 | - | - | - |
| 2.1244 | 314200 | 0.3708 | - | - | - |
| 2.1251 | 314300 | 0.3983 | - | - | - |
| 2.1257 | 314400 | 0.4346 | - | - | - |
| 2.1264 | 314500 | 0.3899 | - | - | - |
| 2.1271 | 314600 | 0.3963 | - | - | - |
| 2.1278 | 314700 | 0.3857 | - | - | - |
| 2.1285 | 314800 | 0.4109 | - | - | - |
| 2.1291 | 314900 | 0.4186 | - | - | - |
| 2.1298 | 315000 | 0.3896 | 0.5203 | 0.7656 | - |
| 2.1305 | 315100 | 0.446 | - | - | - |
| 2.1312 | 315200 | 0.4358 | - | - | - |
| 2.1318 | 315300 | 0.4023 | - | - | - |
| 2.1325 | 315400 | 0.4318 | - | - | - |
| 2.1332 | 315500 | 0.4045 | - | - | - |
| 2.1339 | 315600 | 0.4068 | - | - | - |
| 2.1345 | 315700 | 0.4294 | - | - | - |
| 2.1352 | 315800 | 0.415 | - | - | - |
| 2.1359 | 315900 | 0.399 | - | - | - |
| 2.1366 | 316000 | 0.4164 | - | - | - |
| 2.1372 | 316100 | 0.422 | - | - | - |
| 2.1379 | 316200 | 0.3602 | - | - | - |
| 2.1386 | 316300 | 0.3743 | - | - | - |
| 2.1393 | 316400 | 0.3487 | - | - | - |
| 2.1399 | 316500 | 0.4144 | - | - | - |
| 2.1406 | 316600 | 0.4056 | - | - | - |
| 2.1413 | 316700 | 0.3964 | - | - | - |
| 2.1420 | 316800 | 0.3789 | - | - | - |
| 2.1426 | 316900 | 0.3668 | - | - | - |
| 2.1433 | 317000 | 0.4127 | - | - | - |
| 2.1440 | 317100 | 0.4342 | - | - | - |
| 2.1447 | 317200 | 0.3823 | - | - | - |
| 2.1454 | 317300 | 0.3691 | - | - | - |
| 2.1460 | 317400 | 0.4049 | - | - | - |
| 2.1467 | 317500 | 0.3894 | - | - | - |
| 2.1474 | 317600 | 0.3448 | - | - | - |
| 2.1481 | 317700 | 0.3925 | - | - | - |
| 2.1487 | 317800 | 0.4581 | - | - | - |
| 2.1494 | 317900 | 0.3603 | - | - | - |
| 2.1501 | 318000 | 0.4609 | - | - | - |
| 2.1508 | 318100 | 0.411 | - | - | - |
| 2.1514 | 318200 | 0.3565 | - | - | - |
| 2.1521 | 318300 | 0.4125 | - | - | - |
| 2.1528 | 318400 | 0.3601 | - | - | - |
| 2.1535 | 318500 | 0.4099 | - | - | - |
| 2.1541 | 318600 | 0.4131 | - | - | - |
| 2.1548 | 318700 | 0.4037 | - | - | - |
| 2.1555 | 318800 | 0.3675 | - | - | - |
| 2.1562 | 318900 | 0.4101 | - | - | - |
| 2.1568 | 319000 | 0.4596 | - | - | - |
| 2.1575 | 319100 | 0.4104 | - | - | - |
| 2.1582 | 319200 | 0.4252 | - | - | - |
| 2.1589 | 319300 | 0.4296 | - | - | - |
| 2.1596 | 319400 | 0.3727 | - | - | - |
| 2.1602 | 319500 | 0.3954 | - | - | - |
| 2.1609 | 319600 | 0.3897 | - | - | - |
| 2.1616 | 319700 | 0.4039 | - | - | - |
| 2.1623 | 319800 | 0.4159 | - | - | - |
| 2.1629 | 319900 | 0.3736 | - | - | - |
| 2.1636 | 320000 | 0.3546 | 0.5284 | 0.7738 | - |
| 2.1643 | 320100 | 0.3887 | - | - | - |
| 2.1650 | 320200 | 0.4216 | - | - | - |
| 2.1656 | 320300 | 0.386 | - | - | - |
| 2.1663 | 320400 | 0.3968 | - | - | - |
| 2.1670 | 320500 | 0.4222 | - | - | - |
| 2.1677 | 320600 | 0.3705 | - | - | - |
| 2.1683 | 320700 | 0.3858 | - | - | - |
| 2.1690 | 320800 | 0.3554 | - | - | - |
| 2.1697 | 320900 | 0.4083 | - | - | - |
| 2.1704 | 321000 | 0.3554 | - | - | - |
| 2.1710 | 321100 | 0.3752 | - | - | - |
| 2.1717 | 321200 | 0.3802 | - | - | - |
| 2.1724 | 321300 | 0.3948 | - | - | - |
| 2.1731 | 321400 | 0.4056 | - | - | - |
| 2.1738 | 321500 | 0.4246 | - | - | - |
| 2.1744 | 321600 | 0.445 | - | - | - |
| 2.1751 | 321700 | 0.3702 | - | - | - |
| 2.1758 | 321800 | 0.4039 | - | - | - |
| 2.1765 | 321900 | 0.4033 | - | - | - |
| 2.1771 | 322000 | 0.3713 | - | - | - |
| 2.1778 | 322100 | 0.4253 | - | - | - |
| 2.1785 | 322200 | 0.4437 | - | - | - |
| 2.1792 | 322300 | 0.3943 | - | - | - |
| 2.1798 | 322400 | 0.3989 | - | - | - |
| 2.1805 | 322500 | 0.3995 | - | - | - |
| 2.1812 | 322600 | 0.3423 | - | - | - |
| 2.1819 | 322700 | 0.4021 | - | - | - |
| 2.1825 | 322800 | 0.3885 | - | - | - |
| 2.1832 | 322900 | 0.4461 | - | - | - |
| 2.1839 | 323000 | 0.3759 | - | - | - |
| 2.1846 | 323100 | 0.3364 | - | - | - |
| 2.1852 | 323200 | 0.4253 | - | - | - |
| 2.1859 | 323300 | 0.3867 | - | - | - |
| 2.1866 | 323400 | 0.3756 | - | - | - |
| 2.1873 | 323500 | 0.3929 | - | - | - |
| 2.1880 | 323600 | 0.3872 | - | - | - |
| 2.1886 | 323700 | 0.3937 | - | - | - |
| 2.1893 | 323800 | 0.4093 | - | - | - |
| 2.1900 | 323900 | 0.4093 | - | - | - |
| 2.1907 | 324000 | 0.3772 | - | - | - |
| 2.1913 | 324100 | 0.4197 | - | - | - |
| 2.1920 | 324200 | 0.3644 | - | - | - |
| 2.1927 | 324300 | 0.3882 | - | - | - |
| 2.1934 | 324400 | 0.416 | - | - | - |
| 2.1940 | 324500 | 0.3779 | - | - | - |
| 2.1947 | 324600 | 0.3566 | - | - | - |
| 2.1954 | 324700 | 0.3495 | - | - | - |
| 2.1961 | 324800 | 0.3543 | - | - | - |
| 2.1967 | 324900 | 0.3713 | - | - | - |
| 2.1974 | 325000 | 0.467 | 0.5297 | 0.7734 | - |
| 2.1981 | 325100 | 0.3857 | - | - | - |
| 2.1988 | 325200 | 0.3898 | - | - | - |
| 2.1994 | 325300 | 0.35 | - | - | - |
| 2.2001 | 325400 | 0.3735 | - | - | - |
| 2.2008 | 325500 | 0.4056 | - | - | - |
| 2.2015 | 325600 | 0.3535 | - | - | - |
| 2.2021 | 325700 | 0.3773 | - | - | - |
| 2.2028 | 325800 | 0.3855 | - | - | - |
| 2.2035 | 325900 | 0.3861 | - | - | - |
| 2.2042 | 326000 | 0.3749 | - | - | - |
| 2.2049 | 326100 | 0.3548 | - | - | - |
| 2.2055 | 326200 | 0.42 | - | - | - |
| 2.2062 | 326300 | 0.3895 | - | - | - |
| 2.2069 | 326400 | 0.3647 | - | - | - |
| 2.2076 | 326500 | 0.4055 | - | - | - |
| 2.2082 | 326600 | 0.3698 | - | - | - |
| 2.2089 | 326700 | 0.3782 | - | - | - |
| 2.2096 | 326800 | 0.3498 | - | - | - |
| 2.2103 | 326900 | 0.347 | - | - | - |
| 2.2109 | 327000 | 0.3845 | - | - | - |
| 2.2116 | 327100 | 0.3584 | - | - | - |
| 2.2123 | 327200 | 0.3632 | - | - | - |
| 2.2130 | 327300 | 0.3436 | - | - | - |
| 2.2136 | 327400 | 0.418 | - | - | - |
| 2.2143 | 327500 | 0.3973 | - | - | - |
| 2.2150 | 327600 | 0.3823 | - | - | - |
| 2.2157 | 327700 | 0.3455 | - | - | - |
| 2.2163 | 327800 | 0.3403 | - | - | - |
| 2.2170 | 327900 | 0.3911 | - | - | - |
| 2.2177 | 328000 | 0.3847 | - | - | - |
| 2.2184 | 328100 | 0.4192 | - | - | - |
| 2.2191 | 328200 | 0.3886 | - | - | - |
| 2.2197 | 328300 | 0.4373 | - | - | - |
| 2.2204 | 328400 | 0.3881 | - | - | - |
| 2.2211 | 328500 | 0.3421 | - | - | - |
| 2.2218 | 328600 | 0.399 | - | - | - |
| 2.2224 | 328700 | 0.3896 | - | - | - |
| 2.2231 | 328800 | 0.3802 | - | - | - |
| 2.2238 | 328900 | 0.4061 | - | - | - |
| 2.2245 | 329000 | 0.3945 | - | - | - |
| 2.2251 | 329100 | 0.374 | - | - | - |
| 2.2258 | 329200 | 0.3704 | - | - | - |
| 2.2265 | 329300 | 0.3794 | - | - | - |
| 2.2272 | 329400 | 0.3719 | - | - | - |
| 2.2278 | 329500 | 0.3886 | - | - | - |
| 2.2285 | 329600 | 0.3672 | - | - | - |
| 2.2292 | 329700 | 0.3701 | - | - | - |
| 2.2299 | 329800 | 0.4168 | - | - | - |
| 2.2305 | 329900 | 0.4247 | - | - | - |
| 2.2312 | 330000 | 0.4098 | 0.5194 | 0.7727 | - |
| 2.2319 | 330100 | 0.3466 | - | - | - |
| 2.2326 | 330200 | 0.3868 | - | - | - |
| 2.2333 | 330300 | 0.3808 | - | - | - |
| 2.2339 | 330400 | 0.3772 | - | - | - |
| 2.2346 | 330500 | 0.3553 | - | - | - |
| 2.2353 | 330600 | 0.4153 | - | - | - |
| 2.2360 | 330700 | 0.3732 | - | - | - |
| 2.2366 | 330800 | 0.3693 | - | - | - |
| 2.2373 | 330900 | 0.3348 | - | - | - |
| 2.2380 | 331000 | 0.3395 | - | - | - |
| 2.2387 | 331100 | 0.4026 | - | - | - |
| 2.2393 | 331200 | 0.3987 | - | - | - |
| 2.2400 | 331300 | 0.377 | - | - | - |
| 2.2407 | 331400 | 0.3521 | - | - | - |
| 2.2414 | 331500 | 0.393 | - | - | - |
| 2.2420 | 331600 | 0.358 | - | - | - |
| 2.2427 | 331700 | 0.382 | - | - | - |
| 2.2434 | 331800 | 0.3733 | - | - | - |
| 2.2441 | 331900 | 0.3853 | - | - | - |
| 2.2447 | 332000 | 0.3678 | - | - | - |
| 2.2454 | 332100 | 0.3532 | - | - | - |
| 2.2461 | 332200 | 0.351 | - | - | - |
| 2.2468 | 332300 | 0.4066 | - | - | - |
| 2.2474 | 332400 | 0.3724 | - | - | - |
| 2.2481 | 332500 | 0.4137 | - | - | - |
| 2.2488 | 332600 | 0.3458 | - | - | - |
| 2.2495 | 332700 | 0.4008 | - | - | - |
| 2.2502 | 332800 | 0.3615 | - | - | - |
| 2.2508 | 332900 | 0.3783 | - | - | - |
| 2.2515 | 333000 | 0.3997 | - | - | - |
| 2.2522 | 333100 | 0.3563 | - | - | - |
| 2.2529 | 333200 | 0.3533 | - | - | - |
| 2.2535 | 333300 | 0.3906 | - | - | - |
| 2.2542 | 333400 | 0.3795 | - | - | - |
| 2.2549 | 333500 | 0.3917 | - | - | - |
| 2.2556 | 333600 | 0.3336 | - | - | - |
| 2.2562 | 333700 | 0.3498 | - | - | - |
| 2.2569 | 333800 | 0.4161 | - | - | - |
| 2.2576 | 333900 | 0.372 | - | - | - |
| 2.2583 | 334000 | 0.452 | - | - | - |
| 2.2589 | 334100 | 0.3852 | - | - | - |
| 2.2596 | 334200 | 0.3791 | - | - | - |
| 2.2603 | 334300 | 0.353 | - | - | - |
| 2.2610 | 334400 | 0.368 | - | - | - |
| 2.2616 | 334500 | 0.3467 | - | - | - |
| 2.2623 | 334600 | 0.3362 | - | - | - |
| 2.2630 | 334700 | 0.4289 | - | - | - |
| 2.2637 | 334800 | 0.3666 | - | - | - |
| 2.2644 | 334900 | 0.3897 | - | - | - |
| 2.2650 | 335000 | 0.3481 | 0.5218 | 0.7769 | - |
| 2.2657 | 335100 | 0.3705 | - | - | - |
| 2.2664 | 335200 | 0.3336 | - | - | - |
| 2.2671 | 335300 | 0.3849 | - | - | - |
| 2.2677 | 335400 | 0.3565 | - | - | - |
| 2.2684 | 335500 | 0.388 | - | - | - |
| 2.2691 | 335600 | 0.4085 | - | - | - |
| 2.2698 | 335700 | 0.3549 | - | - | - |
| 2.2704 | 335800 | 0.4103 | - | - | - |
| 2.2711 | 335900 | 0.3763 | - | - | - |
| 2.2718 | 336000 | 0.3856 | - | - | - |
| 2.2725 | 336100 | 0.3683 | - | - | - |
| 2.2731 | 336200 | 0.3458 | - | - | - |
| 2.2738 | 336300 | 0.373 | - | - | - |
| 2.2745 | 336400 | 0.3307 | - | - | - |
| 2.2752 | 336500 | 0.3565 | - | - | - |
| 2.2758 | 336600 | 0.39 | - | - | - |
| 2.2765 | 336700 | 0.3706 | - | - | - |
| 2.2772 | 336800 | 0.3826 | - | - | - |
| 2.2779 | 336900 | 0.3599 | - | - | - |
| 2.2786 | 337000 | 0.4095 | - | - | - |
| 2.2792 | 337100 | 0.4099 | - | - | - |
| 2.2799 | 337200 | 0.3185 | - | - | - |
| 2.2806 | 337300 | 0.3728 | - | - | - |
| 2.2813 | 337400 | 0.3797 | - | - | - |
| 2.2819 | 337500 | 0.3617 | - | - | - |
| 2.2826 | 337600 | 0.4147 | - | - | - |
| 2.2833 | 337700 | 0.3829 | - | - | - |
| 2.2840 | 337800 | 0.4415 | - | - | - |
| 2.2846 | 337900 | 0.3577 | - | - | - |
| 2.2853 | 338000 | 0.3646 | - | - | - |
| 2.2860 | 338100 | 0.3344 | - | - | - |
| 2.2867 | 338200 | 0.3517 | - | - | - |
| 2.2873 | 338300 | 0.3849 | - | - | - |
| 2.2880 | 338400 | 0.3506 | - | - | - |
| 2.2887 | 338500 | 0.3844 | - | - | - |
| 2.2894 | 338600 | 0.3481 | - | - | - |
| 2.2900 | 338700 | 0.3841 | - | - | - |
| 2.2907 | 338800 | 0.3538 | - | - | - |
| 2.2914 | 338900 | 0.35 | - | - | - |
| 2.2921 | 339000 | 0.372 | - | - | - |
| 2.2927 | 339100 | 0.3523 | - | - | - |
| 2.2934 | 339200 | 0.378 | - | - | - |
| 2.2941 | 339300 | 0.361 | - | - | - |
| 2.2948 | 339400 | 0.4187 | - | - | - |
| 2.2955 | 339500 | 0.3703 | - | - | - |
| 2.2961 | 339600 | 0.4037 | - | - | - |
| 2.2968 | 339700 | 0.3497 | - | - | - |
| 2.2975 | 339800 | 0.3576 | - | - | - |
| 2.2982 | 339900 | 0.3201 | - | - | - |
| 2.2988 | 340000 | 0.3568 | 0.5251 | 0.7756 | - |
| 2.2995 | 340100 | 0.3389 | - | - | - |
| 2.3002 | 340200 | 0.4018 | - | - | - |
| 2.3009 | 340300 | 0.389 | - | - | - |
| 2.3015 | 340400 | 0.3691 | - | - | - |
| 2.3022 | 340500 | 0.3774 | - | - | - |
| 2.3029 | 340600 | 0.3759 | - | - | - |
| 2.3036 | 340700 | 0.3328 | - | - | - |
| 2.3042 | 340800 | 0.3397 | - | - | - |
| 2.3049 | 340900 | 0.3445 | - | - | - |
| 2.3056 | 341000 | 0.3826 | - | - | - |
| 2.3063 | 341100 | 0.4337 | - | - | - |
| 2.3069 | 341200 | 0.3947 | - | - | - |
| 2.3076 | 341300 | 0.3406 | - | - | - |
| 2.3083 | 341400 | 0.3682 | - | - | - |
| 2.3090 | 341500 | 0.3912 | - | - | - |
| 2.3097 | 341600 | 0.3619 | - | - | - |
| 2.3103 | 341700 | 0.3402 | - | - | - |
| 2.3110 | 341800 | 0.3923 | - | - | - |
| 2.3117 | 341900 | 0.3586 | - | - | - |
| 2.3124 | 342000 | 0.3485 | - | - | - |
| 2.3130 | 342100 | 0.3664 | - | - | - |
| 2.3137 | 342200 | 0.3436 | - | - | - |
| 2.3144 | 342300 | 0.3594 | - | - | - |
| 2.3151 | 342400 | 0.3511 | - | - | - |
| 2.3157 | 342500 | 0.4079 | - | - | - |
| 2.3164 | 342600 | 0.3421 | - | - | - |
| 2.3171 | 342700 | 0.3569 | - | - | - |
| 2.3178 | 342800 | 0.3575 | - | - | - |
| 2.3184 | 342900 | 0.3676 | - | - | - |
| 2.3191 | 343000 | 0.4183 | - | - | - |
| 2.3198 | 343100 | 0.3657 | - | - | - |
| 2.3205 | 343200 | 0.3678 | - | - | - |
| 2.3211 | 343300 | 0.3994 | - | - | - |
| 2.3218 | 343400 | 0.3485 | - | - | - |
| 2.3225 | 343500 | 0.3985 | - | - | - |
| 2.3232 | 343600 | 0.3961 | - | - | - |
| 2.3239 | 343700 | 0.2983 | - | - | - |
| 2.3245 | 343800 | 0.3411 | - | - | - |
| 2.3252 | 343900 | 0.3604 | - | - | - |
| 2.3259 | 344000 | 0.3675 | - | - | - |
| 2.3266 | 344100 | 0.3761 | - | - | - |
| 2.3272 | 344200 | 0.3734 | - | - | - |
| 2.3279 | 344300 | 0.3309 | - | - | - |
| 2.3286 | 344400 | 0.4029 | - | - | - |
| 2.3293 | 344500 | 0.342 | - | - | - |
| 2.3299 | 344600 | 0.3492 | - | - | - |
| 2.3306 | 344700 | 0.3451 | - | - | - |
| 2.3313 | 344800 | 0.4008 | - | - | - |
| 2.3320 | 344900 | 0.3493 | - | - | - |
| 2.3326 | 345000 | 0.326 | 0.5412 | 0.7733 | - |
| 2.3333 | 345100 | 0.3139 | - | - | - |
| 2.3340 | 345200 | 0.3719 | - | - | - |
| 2.3347 | 345300 | 0.3583 | - | - | - |
| 2.3353 | 345400 | 0.3678 | - | - | - |
| 2.3360 | 345500 | 0.3616 | - | - | - |
| 2.3367 | 345600 | 0.3246 | - | - | - |
| 2.3374 | 345700 | 0.3348 | - | - | - |
| 2.3381 | 345800 | 0.3528 | - | - | - |
| 2.3387 | 345900 | 0.3182 | - | - | - |
| 2.3394 | 346000 | 0.4038 | - | - | - |
| 2.3401 | 346100 | 0.3617 | - | - | - |
| 2.3408 | 346200 | 0.3198 | - | - | - |
| 2.3414 | 346300 | 0.3481 | - | - | - |
| 2.3421 | 346400 | 0.3579 | - | - | - |
| 2.3428 | 346500 | 0.3563 | - | - | - |
| 2.3435 | 346600 | 0.369 | - | - | - |
| 2.3441 | 346700 | 0.3691 | - | - | - |
| 2.3448 | 346800 | 0.3703 | - | - | - |
| 2.3455 | 346900 | 0.4009 | - | - | - |
| 2.3462 | 347000 | 0.3651 | - | - | - |
| 2.3468 | 347100 | 0.3815 | - | - | - |
| 2.3475 | 347200 | 0.3285 | - | - | - |
| 2.3482 | 347300 | 0.3318 | - | - | - |
| 2.3489 | 347400 | 0.3602 | - | - | - |
| 2.3495 | 347500 | 0.3657 | - | - | - |
| 2.3502 | 347600 | 0.3615 | - | - | - |
| 2.3509 | 347700 | 0.3603 | - | - | - |
| 2.3516 | 347800 | 0.3146 | - | - | - |
| 2.3522 | 347900 | 0.3979 | - | - | - |
| 2.3529 | 348000 | 0.3675 | - | - | - |
| 2.3536 | 348100 | 0.3037 | - | - | - |
| 2.3543 | 348200 | 0.3659 | - | - | - |
| 2.3550 | 348300 | 0.3183 | - | - | - |
| 2.3556 | 348400 | 0.3505 | - | - | - |
| 2.3563 | 348500 | 0.3501 | - | - | - |
| 2.3570 | 348600 | 0.3783 | - | - | - |
| 2.3577 | 348700 | 0.3803 | - | - | - |
| 2.3583 | 348800 | 0.355 | - | - | - |
| 2.3590 | 348900 | 0.3779 | - | - | - |
| 2.3597 | 349000 | 0.3446 | - | - | - |
| 2.3604 | 349100 | 0.3454 | - | - | - |
| 2.3610 | 349200 | 0.3374 | - | - | - |
| 2.3617 | 349300 | 0.3362 | - | - | - |
| 2.3624 | 349400 | 0.329 | - | - | - |
| 2.3631 | 349500 | 0.3444 | - | - | - |
| 2.3637 | 349600 | 0.3005 | - | - | - |
| 2.3644 | 349700 | 0.3628 | - | - | - |
| 2.3651 | 349800 | 0.323 | - | - | - |
| 2.3658 | 349900 | 0.3409 | - | - | - |
| 2.3664 | 350000 | 0.364 | 0.5435 | 0.7704 | - |
| 2.3671 | 350100 | 0.3523 | - | - | - |
| 2.3678 | 350200 | 0.3476 | - | - | - |
| 2.3685 | 350300 | 0.3515 | - | - | - |
| 2.3692 | 350400 | 0.3502 | - | - | - |
| 2.3698 | 350500 | 0.3427 | - | - | - |
| 2.3705 | 350600 | 0.3401 | - | - | - |
| 2.3712 | 350700 | 0.3655 | - | - | - |
| 2.3719 | 350800 | 0.3542 | - | - | - |
| 2.3725 | 350900 | 0.3485 | - | - | - |
| 2.3732 | 351000 | 0.3555 | - | - | - |
| 2.3739 | 351100 | 0.3381 | - | - | - |
| 2.3746 | 351200 | 0.3128 | - | - | - |
| 2.3752 | 351300 | 0.3591 | - | - | - |
| 2.3759 | 351400 | 0.3307 | - | - | - |
| 2.3766 | 351500 | 0.3654 | - | - | - |
| 2.3773 | 351600 | 0.3197 | - | - | - |
| 2.3779 | 351700 | 0.3441 | - | - | - |
| 2.3786 | 351800 | 0.3249 | - | - | - |
| 2.3793 | 351900 | 0.3736 | - | - | - |
| 2.3800 | 352000 | 0.358 | - | - | - |
| 2.3806 | 352100 | 0.3471 | - | - | - |
| 2.3813 | 352200 | 0.362 | - | - | - |
| 2.3820 | 352300 | 0.379 | - | - | - |
| 2.3827 | 352400 | 0.3356 | - | - | - |
| 2.3834 | 352500 | 0.3377 | - | - | - |
| 2.3840 | 352600 | 0.3716 | - | - | - |
| 2.3847 | 352700 | 0.3486 | - | - | - |
| 2.3854 | 352800 | 0.3606 | - | - | - |
| 2.3861 | 352900 | 0.3371 | - | - | - |
| 2.3867 | 353000 | 0.3848 | - | - | - |
| 2.3874 | 353100 | 0.3285 | - | - | - |
| 2.3881 | 353200 | 0.3324 | - | - | - |
| 2.3888 | 353300 | 0.3405 | - | - | - |
| 2.3894 | 353400 | 0.3585 | - | - | - |
| 2.3901 | 353500 | 0.399 | - | - | - |
| 2.3908 | 353600 | 0.3369 | - | - | - |
| 2.3915 | 353700 | 0.3634 | - | - | - |
| 2.3921 | 353800 | 0.3295 | - | - | - |
| 2.3928 | 353900 | 0.2972 | - | - | - |
| 2.3935 | 354000 | 0.4023 | - | - | - |
| 2.3942 | 354100 | 0.3431 | - | - | - |
| 2.3948 | 354200 | 0.3289 | - | - | - |
| 2.3955 | 354300 | 0.3463 | - | - | - |
| 2.3962 | 354400 | 0.3785 | - | - | - |
| 2.3969 | 354500 | 0.3954 | - | - | - |
| 2.3975 | 354600 | 0.306 | - | - | - |
| 2.3982 | 354700 | 0.3302 | - | - | - |
| 2.3989 | 354800 | 0.3632 | - | - | - |
| 2.3996 | 354900 | 0.3546 | - | - | - |
| 2.4003 | 355000 | 0.3654 | 0.5347 | 0.7747 | - |
| 2.4009 | 355100 | 0.3721 | - | - | - |
| 2.4016 | 355200 | 0.3624 | - | - | - |
| 2.4023 | 355300 | 0.355 | - | - | - |
| 2.4030 | 355400 | 0.3632 | - | - | - |
| 2.4036 | 355500 | 0.3508 | - | - | - |
| 2.4043 | 355600 | 0.365 | - | - | - |
| 2.4050 | 355700 | 0.2937 | - | - | - |
| 2.4057 | 355800 | 0.3256 | - | - | - |
| 2.4063 | 355900 | 0.3511 | - | - | - |
| 2.4070 | 356000 | 0.372 | - | - | - |
| 2.4077 | 356100 | 0.3729 | - | - | - |
| 2.4084 | 356200 | 0.358 | - | - | - |
| 2.4090 | 356300 | 0.3645 | - | - | - |
| 2.4097 | 356400 | 0.3505 | - | - | - |
| 2.4104 | 356500 | 0.3588 | - | - | - |
| 2.4111 | 356600 | 0.3365 | - | - | - |
| 2.4117 | 356700 | 0.3143 | - | - | - |
| 2.4124 | 356800 | 0.3145 | - | - | - |
| 2.4131 | 356900 | 0.3653 | - | - | - |
| 2.4138 | 357000 | 0.3671 | - | - | - |
| 2.4145 | 357100 | 0.3706 | - | - | - |
| 2.4151 | 357200 | 0.3792 | - | - | - |
| 2.4158 | 357300 | 0.3705 | - | - | - |
| 2.4165 | 357400 | 0.3444 | - | - | - |
| 2.4172 | 357500 | 0.3508 | - | - | - |
| 2.4178 | 357600 | 0.3584 | - | - | - |
| 2.4185 | 357700 | 0.311 | - | - | - |
| 2.4192 | 357800 | 0.3221 | - | - | - |
| 2.4199 | 357900 | 0.3574 | - | - | - |
| 2.4205 | 358000 | 0.3614 | - | - | - |
| 2.4212 | 358100 | 0.3513 | - | - | - |
| 2.4219 | 358200 | 0.3703 | - | - | - |
| 2.4226 | 358300 | 0.3601 | - | - | - |
| 2.4232 | 358400 | 0.3735 | - | - | - |
| 2.4239 | 358500 | 0.4002 | - | - | - |
| 2.4246 | 358600 | 0.3237 | - | - | - |
| 2.4253 | 358700 | 0.3592 | - | - | - |
| 2.4259 | 358800 | 0.3709 | - | - | - |
| 2.4266 | 358900 | 0.3498 | - | - | - |
| 2.4273 | 359000 | 0.3645 | - | - | - |
| 2.4280 | 359100 | 0.3384 | - | - | - |
| 2.4287 | 359200 | 0.3563 | - | - | - |
| 2.4293 | 359300 | 0.3107 | - | - | - |
| 2.4300 | 359400 | 0.3642 | - | - | - |
| 2.4307 | 359500 | 0.2984 | - | - | - |
| 2.4314 | 359600 | 0.3631 | - | - | - |
| 2.4320 | 359700 | 0.3272 | - | - | - |
| 2.4327 | 359800 | 0.319 | - | - | - |
| 2.4334 | 359900 | 0.3511 | - | - | - |
| 2.4341 | 360000 | 0.3674 | 0.5364 | 0.7782 | - |
| 2.4347 | 360100 | 0.3567 | - | - | - |
| 2.4354 | 360200 | 0.3232 | - | - | - |
| 2.4361 | 360300 | 0.3218 | - | - | - |
| 2.4368 | 360400 | 0.3202 | - | - | - |
| 2.4374 | 360500 | 0.3704 | - | - | - |
| 2.4381 | 360600 | 0.3702 | - | - | - |
| 2.4388 | 360700 | 0.3581 | - | - | - |
| 2.4395 | 360800 | 0.3257 | - | - | - |
| 2.4401 | 360900 | 0.3624 | - | - | - |
| 2.4408 | 361000 | 0.349 | - | - | - |
| 2.4415 | 361100 | 0.372 | - | - | - |
| 2.4422 | 361200 | 0.351 | - | - | - |
| 2.4429 | 361300 | 0.369 | - | - | - |
| 2.4435 | 361400 | 0.3268 | - | - | - |
| 2.4442 | 361500 | 0.3517 | - | - | - |
| 2.4449 | 361600 | 0.3289 | - | - | - |
| 2.4456 | 361700 | 0.3482 | - | - | - |
| 2.4462 | 361800 | 0.3345 | - | - | - |
| 2.4469 | 361900 | 0.3901 | - | - | - |
| 2.4476 | 362000 | 0.374 | - | - | - |
| 2.4483 | 362100 | 0.3414 | - | - | - |
| 2.4489 | 362200 | 0.3482 | - | - | - |
| 2.4496 | 362300 | 0.3365 | - | - | - |
| 2.4503 | 362400 | 0.305 | - | - | - |
| 2.4510 | 362500 | 0.3322 | - | - | - |
| 2.4516 | 362600 | 0.3427 | - | - | - |
| 2.4523 | 362700 | 0.3269 | - | - | - |
| 2.4530 | 362800 | 0.3623 | - | - | - |
| 2.4537 | 362900 | 0.3241 | - | - | - |
| 2.4543 | 363000 | 0.3414 | - | - | - |
| 2.4550 | 363100 | 0.3502 | - | - | - |
| 2.4557 | 363200 | 0.3445 | - | - | - |
| 2.4564 | 363300 | 0.3207 | - | - | - |
| 2.4570 | 363400 | 0.3547 | - | - | - |
| 2.4577 | 363500 | 0.3737 | - | - | - |
| 2.4584 | 363600 | 0.4008 | - | - | - |
| 2.4591 | 363700 | 0.3527 | - | - | - |
| 2.4598 | 363800 | 0.3317 | - | - | - |
| 2.4604 | 363900 | 0.3071 | - | - | - |
| 2.4611 | 364000 | 0.3303 | - | - | - |
| 2.4618 | 364100 | 0.3589 | - | - | - |
| 2.4625 | 364200 | 0.3555 | - | - | - |
| 2.4631 | 364300 | 0.3366 | - | - | - |
| 2.4638 | 364400 | 0.336 | - | - | - |
| 2.4645 | 364500 | 0.3461 | - | - | - |
| 2.4652 | 364600 | 0.3451 | - | - | - |
| 2.4658 | 364700 | 0.3134 | - | - | - |
| 2.4665 | 364800 | 0.3574 | - | - | - |
| 2.4672 | 364900 | 0.3689 | - | - | - |
| 2.4679 | 365000 | 0.3216 | 0.5373 | 0.7754 | - |
| 2.4685 | 365100 | 0.3578 | - | - | - |
| 2.4692 | 365200 | 0.3823 | - | - | - |
| 2.4699 | 365300 | 0.3507 | - | - | - |
| 2.4706 | 365400 | 0.3634 | - | - | - |
| 2.4712 | 365500 | 0.322 | - | - | - |
| 2.4719 | 365600 | 0.34 | - | - | - |
| 2.4726 | 365700 | 0.3186 | - | - | - |
| 2.4733 | 365800 | 0.3455 | - | - | - |
| 2.4740 | 365900 | 0.3481 | - | - | - |
| 2.4746 | 366000 | 0.3615 | - | - | - |
| 2.4753 | 366100 | 0.3364 | - | - | - |
| 2.4760 | 366200 | 0.3412 | - | - | - |
| 2.4767 | 366300 | 0.3783 | - | - | - |
| 2.4773 | 366400 | 0.3189 | - | - | - |
| 2.4780 | 366500 | 0.3375 | - | - | - |
| 2.4787 | 366600 | 0.3237 | - | - | - |
| 2.4794 | 366700 | 0.2865 | - | - | - |
| 2.4800 | 366800 | 0.3961 | - | - | - |
| 2.4807 | 366900 | 0.3724 | - | - | - |
| 2.4814 | 367000 | 0.3471 | - | - | - |
| 2.4821 | 367100 | 0.3366 | - | - | - |
| 2.4827 | 367200 | 0.3662 | - | - | - |
| 2.4834 | 367300 | 0.3306 | - | - | - |
| 2.4841 | 367400 | 0.3936 | - | - | - |
| 2.4848 | 367500 | 0.3453 | - | - | - |
| 2.4854 | 367600 | 0.3872 | - | - | - |
| 2.4861 | 367700 | 0.3524 | - | - | - |
| 2.4868 | 367800 | 0.3902 | - | - | - |
| 2.4875 | 367900 | 0.3562 | - | - | - |
| 2.4882 | 368000 | 0.3417 | - | - | - |
| 2.4888 | 368100 | 0.3444 | - | - | - |
| 2.4895 | 368200 | 0.3276 | - | - | - |
| 2.4902 | 368300 | 0.3395 | - | - | - |
| 2.4909 | 368400 | 0.2924 | - | - | - |
| 2.4915 | 368500 | 0.2896 | - | - | - |
| 2.4922 | 368600 | 0.3406 | - | - | - |
| 2.4929 | 368700 | 0.3036 | - | - | - |
| 2.4936 | 368800 | 0.3656 | - | - | - |
| 2.4942 | 368900 | 0.3053 | - | - | - |
| 2.4949 | 369000 | 0.3439 | - | - | - |
| 2.4956 | 369100 | 0.3468 | - | - | - |
| 2.4963 | 369200 | 0.337 | - | - | - |
| 2.4969 | 369300 | 0.3594 | - | - | - |
| 2.4976 | 369400 | 0.3248 | - | - | - |
| 2.4983 | 369500 | 0.3278 | - | - | - |
| 2.4990 | 369600 | 0.3424 | - | - | - |
| 2.4996 | 369700 | 0.3974 | - | - | - |
| 2.5003 | 369800 | 0.3263 | - | - | - |
| 2.5010 | 369900 | 0.2972 | - | - | - |
| 2.5017 | 370000 | 0.3518 | 0.5469 | 0.7769 | - |
| 2.5023 | 370100 | 0.2808 | - | - | - |
| 2.5030 | 370200 | 0.3763 | - | - | - |
| 2.5037 | 370300 | 0.3774 | - | - | - |
| 2.5044 | 370400 | 0.3134 | - | - | - |
| 2.5051 | 370500 | 0.3064 | - | - | - |
| 2.5057 | 370600 | 0.3328 | - | - | - |
| 2.5064 | 370700 | 0.3454 | - | - | - |
| 2.5071 | 370800 | 0.3804 | - | - | - |
| 2.5078 | 370900 | 0.3324 | - | - | - |
| 2.5084 | 371000 | 0.3301 | - | - | - |
| 2.5091 | 371100 | 0.3222 | - | - | - |
| 2.5098 | 371200 | 0.3661 | - | - | - |
| 2.5105 | 371300 | 0.3279 | - | - | - |
| 2.5111 | 371400 | 0.346 | - | - | - |
| 2.5118 | 371500 | 0.3417 | - | - | - |
| 2.5125 | 371600 | 0.3523 | - | - | - |
| 2.5132 | 371700 | 0.336 | - | - | - |
| 2.5138 | 371800 | 0.3467 | - | - | - |
| 2.5145 | 371900 | 0.3231 | - | - | - |
| 2.5152 | 372000 | 0.3239 | - | - | - |
| 2.5159 | 372100 | 0.3507 | - | - | - |
| 2.5165 | 372200 | 0.326 | - | - | - |
| 2.5172 | 372300 | 0.3379 | - | - | - |
| 2.5179 | 372400 | 0.3538 | - | - | - |
| 2.5186 | 372500 | 0.3309 | - | - | - |
| 2.5193 | 372600 | 0.3484 | - | - | - |
| 2.5199 | 372700 | 0.3694 | - | - | - |
| 2.5206 | 372800 | 0.2863 | - | - | - |
| 2.5213 | 372900 | 0.3401 | - | - | - |
| 2.5220 | 373000 | 0.3333 | - | - | - |
| 2.5226 | 373100 | 0.3656 | - | - | - |
| 2.5233 | 373200 | 0.3478 | - | - | - |
| 2.5240 | 373300 | 0.3575 | - | - | - |
| 2.5247 | 373400 | 0.3565 | - | - | - |
| 2.5253 | 373500 | 0.3196 | - | - | - |
| 2.5260 | 373600 | 0.3795 | - | - | - |
| 2.5267 | 373700 | 0.3539 | - | - | - |
| 2.5274 | 373800 | 0.3513 | - | - | - |
| 2.5280 | 373900 | 0.3589 | - | - | - |
| 2.5287 | 374000 | 0.3346 | - | - | - |
| 2.5294 | 374100 | 0.3409 | - | - | - |
| 2.5301 | 374200 | 0.3701 | - | - | - |
| 2.5307 | 374300 | 0.3182 | - | - | - |
| 2.5314 | 374400 | 0.3472 | - | - | - |
| 2.5321 | 374500 | 0.3325 | - | - | - |
| 2.5328 | 374600 | 0.3147 | - | - | - |
| 2.5335 | 374700 | 0.3608 | - | - | - |
| 2.5341 | 374800 | 0.3289 | - | - | - |
| 2.5348 | 374900 | 0.3406 | - | - | - |
| 2.5355 | 375000 | 0.3732 | 0.5402 | 0.7764 | - |
| 2.5362 | 375100 | 0.3023 | - | - | - |
| 2.5368 | 375200 | 0.3374 | - | - | - |
| 2.5375 | 375300 | 0.3292 | - | - | - |
| 2.5382 | 375400 | 0.2952 | - | - | - |
| 2.5389 | 375500 | 0.3285 | - | - | - |
| 2.5395 | 375600 | 0.304 | - | - | - |
| 2.5402 | 375700 | 0.3291 | - | - | - |
| 2.5409 | 375800 | 0.3312 | - | - | - |
| 2.5416 | 375900 | 0.3404 | - | - | - |
| 2.5422 | 376000 | 0.3096 | - | - | - |
| 2.5429 | 376100 | 0.3312 | - | - | - |
| 2.5436 | 376200 | 0.3467 | - | - | - |
| 2.5443 | 376300 | 0.3539 | - | - | - |
| 2.5449 | 376400 | 0.3409 | - | - | - |
| 2.5456 | 376500 | 0.3783 | - | - | - |
| 2.5463 | 376600 | 0.3072 | - | - | - |
| 2.5470 | 376700 | 0.3613 | - | - | - |
| 2.5477 | 376800 | 0.3444 | - | - | - |
| 2.5483 | 376900 | 0.3322 | - | - | - |
| 2.5490 | 377000 | 0.3224 | - | - | - |
| 2.5497 | 377100 | 0.3214 | - | - | - |
| 2.5504 | 377200 | 0.3499 | - | - | - |
| 2.5510 | 377300 | 0.3706 | - | - | - |
| 2.5517 | 377400 | 0.345 | - | - | - |
| 2.5524 | 377500 | 0.3091 | - | - | - |
| 2.5531 | 377600 | 0.3336 | - | - | - |
| 2.5537 | 377700 | 0.3238 | - | - | - |
| 2.5544 | 377800 | 0.331 | - | - | - |
| 2.5551 | 377900 | 0.3341 | - | - | - |
| 2.5558 | 378000 | 0.3 | - | - | - |
| 2.5564 | 378100 | 0.3326 | - | - | - |
| 2.5571 | 378200 | 0.3519 | - | - | - |
| 2.5578 | 378300 | 0.3468 | - | - | - |
| 2.5585 | 378400 | 0.3239 | - | - | - |
| 2.5591 | 378500 | 0.3471 | - | - | - |
| 2.5598 | 378600 | 0.3079 | - | - | - |
| 2.5605 | 378700 | 0.3846 | - | - | - |
| 2.5612 | 378800 | 0.3249 | - | - | - |
| 2.5618 | 378900 | 0.3379 | - | - | - |
| 2.5625 | 379000 | 0.3209 | - | - | - |
| 2.5632 | 379100 | 0.3189 | - | - | - |
| 2.5639 | 379200 | 0.3523 | - | - | - |
| 2.5646 | 379300 | 0.3172 | - | - | - |
| 2.5652 | 379400 | 0.3451 | - | - | - |
| 2.5659 | 379500 | 0.3118 | - | - | - |
| 2.5666 | 379600 | 0.3088 | - | - | - |
| 2.5673 | 379700 | 0.361 | - | - | - |
| 2.5679 | 379800 | 0.3255 | - | - | - |
| 2.5686 | 379900 | 0.3017 | - | - | - |
| 2.5693 | 380000 | 0.3414 | 0.5416 | 0.7783 | - |
| 2.5700 | 380100 | 0.3258 | - | - | - |
| 2.5706 | 380200 | 0.3412 | - | - | - |
| 2.5713 | 380300 | 0.37 | - | - | - |
| 2.5720 | 380400 | 0.3368 | - | - | - |
| 2.5727 | 380500 | 0.3519 | - | - | - |
| 2.5733 | 380600 | 0.3391 | - | - | - |
| 2.5740 | 380700 | 0.3323 | - | - | - |
| 2.5747 | 380800 | 0.3666 | - | - | - |
| 2.5754 | 380900 | 0.3159 | - | - | - |
| 2.5760 | 381000 | 0.3324 | - | - | - |
| 2.5767 | 381100 | 0.3333 | - | - | - |
| 2.5774 | 381200 | 0.2882 | - | - | - |
| 2.5781 | 381300 | 0.3223 | - | - | - |
| 2.5788 | 381400 | 0.3284 | - | - | - |
| 2.5794 | 381500 | 0.3026 | - | - | - |
| 2.5801 | 381600 | 0.3737 | - | - | - |
| 2.5808 | 381700 | 0.3256 | - | - | - |
| 2.5815 | 381800 | 0.3458 | - | - | - |
| 2.5821 | 381900 | 0.3647 | - | - | - |
| 2.5828 | 382000 | 0.3057 | - | - | - |
| 2.5835 | 382100 | 0.3427 | - | - | - |
| 2.5842 | 382200 | 0.3462 | - | - | - |
| 2.5848 | 382300 | 0.3224 | - | - | - |
| 2.5855 | 382400 | 0.3721 | - | - | - |
| 2.5862 | 382500 | 0.3137 | - | - | - |
| 2.5869 | 382600 | 0.3271 | - | - | - |
| 2.5875 | 382700 | 0.3379 | - | - | - |
| 2.5882 | 382800 | 0.3109 | - | - | - |
| 2.5889 | 382900 | 0.3533 | - | - | - |
| 2.5896 | 383000 | 0.3256 | - | - | - |
| 2.5902 | 383100 | 0.2986 | - | - | - |
| 2.5909 | 383200 | 0.3378 | - | - | - |
| 2.5916 | 383300 | 0.3257 | - | - | - |
| 2.5923 | 383400 | 0.2926 | - | - | - |
| 2.5930 | 383500 | 0.3157 | - | - | - |
| 2.5936 | 383600 | 0.3606 | - | - | - |
| 2.5943 | 383700 | 0.3179 | - | - | - |
| 2.5950 | 383800 | 0.343 | - | - | - |
| 2.5957 | 383900 | 0.3127 | - | - | - |
| 2.5963 | 384000 | 0.2919 | - | - | - |
| 2.5970 | 384100 | 0.3351 | - | - | - |
| 2.5977 | 384200 | 0.2716 | - | - | - |
| 2.5984 | 384300 | 0.3498 | - | - | - |
| 2.5990 | 384400 | 0.3381 | - | - | - |
| 2.5997 | 384500 | 0.35 | - | - | - |
| 2.6004 | 384600 | 0.2971 | - | - | - |
| 2.6011 | 384700 | 0.318 | - | - | - |
| 2.6017 | 384800 | 0.328 | - | - | - |
| 2.6024 | 384900 | 0.3278 | - | - | - |
| 2.6031 | 385000 | 0.3424 | 0.5363 | 0.7818 | - |
| 2.6038 | 385100 | 0.3334 | - | - | - |
| 2.6044 | 385200 | 0.3388 | - | - | - |
| 2.6051 | 385300 | 0.3351 | - | - | - |
| 2.6058 | 385400 | 0.3335 | - | - | - |
| 2.6065 | 385500 | 0.3532 | - | - | - |
| 2.6071 | 385600 | 0.3169 | - | - | - |
| 2.6078 | 385700 | 0.3226 | - | - | - |
| 2.6085 | 385800 | 0.3459 | - | - | - |
| 2.6092 | 385900 | 0.3473 | - | - | - |
| 2.6099 | 386000 | 0.2826 | - | - | - |
| 2.6105 | 386100 | 0.3608 | - | - | - |
| 2.6112 | 386200 | 0.3149 | - | - | - |
| 2.6119 | 386300 | 0.3221 | - | - | - |
| 2.6126 | 386400 | 0.311 | - | - | - |
| 2.6132 | 386500 | 0.3182 | - | - | - |
| 2.6139 | 386600 | 0.3138 | - | - | - |
| 2.6146 | 386700 | 0.3529 | - | - | - |
| 2.6153 | 386800 | 0.3127 | - | - | - |
| 2.6159 | 386900 | 0.3199 | - | - | - |
| 2.6166 | 387000 | 0.3905 | - | - | - |
| 2.6173 | 387100 | 0.338 | - | - | - |
| 2.6180 | 387200 | 0.3337 | - | - | - |
| 2.6186 | 387300 | 0.3145 | - | - | - |
| 2.6193 | 387400 | 0.338 | - | - | - |
| 2.6200 | 387500 | 0.3117 | - | - | - |
| 2.6207 | 387600 | 0.3431 | - | - | - |
| 2.6213 | 387700 | 0.2958 | - | - | - |
| 2.6220 | 387800 | 0.2787 | - | - | - |
| 2.6227 | 387900 | 0.3056 | - | - | - |
| 2.6234 | 388000 | 0.2971 | - | - | - |
| 2.6241 | 388100 | 0.3429 | - | - | - |
| 2.6247 | 388200 | 0.3103 | - | - | - |
| 2.6254 | 388300 | 0.32 | - | - | - |
| 2.6261 | 388400 | 0.3487 | - | - | - |
| 2.6268 | 388500 | 0.3147 | - | - | - |
| 2.6274 | 388600 | 0.3489 | - | - | - |
| 2.6281 | 388700 | 0.3171 | - | - | - |
| 2.6288 | 388800 | 0.2931 | - | - | - |
| 2.6295 | 388900 | 0.3094 | - | - | - |
| 2.6301 | 389000 | 0.3221 | - | - | - |
| 2.6308 | 389100 | 0.2987 | - | - | - |
| 2.6315 | 389200 | 0.3199 | - | - | - |
| 2.6322 | 389300 | 0.3084 | - | - | - |
| 2.6328 | 389400 | 0.3129 | - | - | - |
| 2.6335 | 389500 | 0.3255 | - | - | - |
| 2.6342 | 389600 | 0.3144 | - | - | - |
| 2.6349 | 389700 | 0.2888 | - | - | - |
| 2.6355 | 389800 | 0.3563 | - | - | - |
| 2.6362 | 389900 | 0.3554 | - | - | - |
| 2.6369 | 390000 | 0.3515 | 0.5365 | 0.7760 | - |
| 2.6376 | 390100 | 0.3412 | - | - | - |
| 2.6383 | 390200 | 0.3125 | - | - | - |
| 2.6389 | 390300 | 0.3129 | - | - | - |
| 2.6396 | 390400 | 0.2845 | - | - | - |
| 2.6403 | 390500 | 0.3368 | - | - | - |
| 2.6410 | 390600 | 0.332 | - | - | - |
| 2.6416 | 390700 | 0.3285 | - | - | - |
| 2.6423 | 390800 | 0.295 | - | - | - |
| 2.6430 | 390900 | 0.2855 | - | - | - |
| 2.6437 | 391000 | 0.3566 | - | - | - |
| 2.6443 | 391100 | 0.334 | - | - | - |
| 2.6450 | 391200 | 0.2806 | - | - | - |
| 2.6457 | 391300 | 0.3277 | - | - | - |
| 2.6464 | 391400 | 0.3556 | - | - | - |
| 2.6470 | 391500 | 0.3089 | - | - | - |
| 2.6477 | 391600 | 0.2909 | - | - | - |
| 2.6484 | 391700 | 0.3199 | - | - | - |
| 2.6491 | 391800 | 0.3293 | - | - | - |
| 2.6497 | 391900 | 0.356 | - | - | - |
| 2.6504 | 392000 | 0.3373 | - | - | - |
| 2.6511 | 392100 | 0.3479 | - | - | - |
| 2.6518 | 392200 | 0.3415 | - | - | - |
| 2.6524 | 392300 | 0.3206 | - | - | - |
| 2.6531 | 392400 | 0.3369 | - | - | - |
| 2.6538 | 392500 | 0.2952 | - | - | - |
| 2.6545 | 392600 | 0.3844 | - | - | - |
| 2.6552 | 392700 | 0.3019 | - | - | - |
| 2.6558 | 392800 | 0.3203 | - | - | - |
| 2.6565 | 392900 | 0.307 | - | - | - |
| 2.6572 | 393000 | 0.3437 | - | - | - |
| 2.6579 | 393100 | 0.3228 | - | - | - |
| 2.6585 | 393200 | 0.3161 | - | - | - |
| 2.6592 | 393300 | 0.324 | - | - | - |
| 2.6599 | 393400 | 0.3078 | - | - | - |
| 2.6606 | 393500 | 0.3467 | - | - | - |
| 2.6612 | 393600 | 0.3341 | - | - | - |
| 2.6619 | 393700 | 0.3539 | - | - | - |
| 2.6626 | 393800 | 0.3293 | - | - | - |
| 2.6633 | 393900 | 0.3117 | - | - | - |
| 2.6639 | 394000 | 0.2864 | - | - | - |
| 2.6646 | 394100 | 0.3177 | - | - | - |
| 2.6653 | 394200 | 0.3616 | - | - | - |
| 2.6660 | 394300 | 0.2986 | - | - | - |
| 2.6666 | 394400 | 0.2807 | - | - | - |
| 2.6673 | 394500 | 0.3787 | - | - | - |
| 2.6680 | 394600 | 0.2925 | - | - | - |
| 2.6687 | 394700 | 0.3117 | - | - | - |
| 2.6694 | 394800 | 0.333 | - | - | - |
| 2.6700 | 394900 | 0.3202 | - | - | - |
| 2.6707 | 395000 | 0.2952 | 0.5358 | 0.7789 | - |
| 2.6714 | 395100 | 0.3 | - | - | - |
| 2.6721 | 395200 | 0.3454 | - | - | - |
| 2.6727 | 395300 | 0.3456 | - | - | - |
| 2.6734 | 395400 | 0.3282 | - | - | - |
| 2.6741 | 395500 | 0.3698 | - | - | - |
| 2.6748 | 395600 | 0.3331 | - | - | - |
| 2.6754 | 395700 | 0.2985 | - | - | - |
| 2.6761 | 395800 | 0.3828 | - | - | - |
| 2.6768 | 395900 | 0.353 | - | - | - |
| 2.6775 | 396000 | 0.3433 | - | - | - |
| 2.6781 | 396100 | 0.2896 | - | - | - |
| 2.6788 | 396200 | 0.3328 | - | - | - |
| 2.6795 | 396300 | 0.3462 | - | - | - |
| 2.6802 | 396400 | 0.3618 | - | - | - |
| 2.6808 | 396500 | 0.312 | - | - | - |
| 2.6815 | 396600 | 0.3331 | - | - | - |
| 2.6822 | 396700 | 0.327 | - | - | - |
| 2.6829 | 396800 | 0.328 | - | - | - |
| 2.6836 | 396900 | 0.3242 | - | - | - |
| 2.6842 | 397000 | 0.3372 | - | - | - |
| 2.6849 | 397100 | 0.3487 | - | - | - |
| 2.6856 | 397200 | 0.3337 | - | - | - |
| 2.6863 | 397300 | 0.3427 | - | - | - |
| 2.6869 | 397400 | 0.2871 | - | - | - |
| 2.6876 | 397500 | 0.3067 | - | - | - |
| 2.6883 | 397600 | 0.3441 | - | - | - |
| 2.6890 | 397700 | 0.3546 | - | - | - |
| 2.6896 | 397800 | 0.3193 | - | - | - |
| 2.6903 | 397900 | 0.3315 | - | - | - |
| 2.6910 | 398000 | 0.3443 | - | - | - |
| 2.6917 | 398100 | 0.3584 | - | - | - |
| 2.6923 | 398200 | 0.2765 | - | - | - |
| 2.6930 | 398300 | 0.3037 | - | - | - |
| 2.6937 | 398400 | 0.3252 | - | - | - |
| 2.6944 | 398500 | 0.3019 | - | - | - |
| 2.6950 | 398600 | 0.3595 | - | - | - |
| 2.6957 | 398700 | 0.3358 | - | - | - |
| 2.6964 | 398800 | 0.3423 | - | - | - |
| 2.6971 | 398900 | 0.2938 | - | - | - |
| 2.6978 | 399000 | 0.3343 | - | - | - |
| 2.6984 | 399100 | 0.3006 | - | - | - |
| 2.6991 | 399200 | 0.294 | - | - | - |
| 2.6998 | 399300 | 0.31 | - | - | - |
| 2.7005 | 399400 | 0.3286 | - | - | - |
| 2.7011 | 399500 | 0.3351 | - | - | - |
| 2.7018 | 399600 | 0.3218 | - | - | - |
| 2.7025 | 399700 | 0.3263 | - | - | - |
| 2.7032 | 399800 | 0.3271 | - | - | - |
| 2.7038 | 399900 | 0.2779 | - | - | - |
| 2.7045 | 400000 | 0.3072 | 0.5355 | 0.7778 | - |
| 2.7052 | 400100 | 0.3167 | - | - | - |
| 2.7059 | 400200 | 0.3094 | - | - | - |
| 2.7065 | 400300 | 0.3338 | - | - | - |
| 2.7072 | 400400 | 0.2896 | - | - | - |
| 2.7079 | 400500 | 0.331 | - | - | - |
| 2.7086 | 400600 | 0.3229 | - | - | - |
| 2.7092 | 400700 | 0.3062 | - | - | - |
| 2.7099 | 400800 | 0.33 | - | - | - |
| 2.7106 | 400900 | 0.3269 | - | - | - |
| 2.7113 | 401000 | 0.3225 | - | - | - |
| 2.7119 | 401100 | 0.31 | - | - | - |
| 2.7126 | 401200 | 0.3582 | - | - | - |
| 2.7133 | 401300 | 0.3372 | - | - | - |
| 2.7140 | 401400 | 0.2859 | - | - | - |
| 2.7147 | 401500 | 0.3311 | - | - | - |
| 2.7153 | 401600 | 0.3299 | - | - | - |
| 2.7160 | 401700 | 0.2862 | - | - | - |
| 2.7167 | 401800 | 0.3308 | - | - | - |
| 2.7174 | 401900 | 0.3424 | - | - | - |
| 2.7180 | 402000 | 0.3629 | - | - | - |
| 2.7187 | 402100 | 0.2774 | - | - | - |
| 2.7194 | 402200 | 0.3739 | - | - | - |
| 2.7201 | 402300 | 0.3204 | - | - | - |
| 2.7207 | 402400 | 0.3436 | - | - | - |
| 2.7214 | 402500 | 0.294 | - | - | - |
| 2.7221 | 402600 | 0.3235 | - | - | - |
| 2.7228 | 402700 | 0.3413 | - | - | - |
| 2.7234 | 402800 | 0.3318 | - | - | - |
| 2.7241 | 402900 | 0.325 | - | - | - |
| 2.7248 | 403000 | 0.3181 | - | - | - |
| 2.7255 | 403100 | 0.292 | - | - | - |
| 2.7261 | 403200 | 0.3315 | - | - | - |
| 2.7268 | 403300 | 0.3026 | - | - | - |
| 2.7275 | 403400 | 0.3214 | - | - | - |
| 2.7282 | 403500 | 0.3441 | - | - | - |
| 2.7289 | 403600 | 0.3274 | - | - | - |
| 2.7295 | 403700 | 0.3448 | - | - | - |
| 2.7302 | 403800 | 0.3144 | - | - | - |
| 2.7309 | 403900 | 0.3099 | - | - | - |
| 2.7316 | 404000 | 0.3016 | - | - | - |
| 2.7322 | 404100 | 0.3111 | - | - | - |
| 2.7329 | 404200 | 0.3429 | - | - | - |
| 2.7336 | 404300 | 0.3401 | - | - | - |
| 2.7343 | 404400 | 0.3356 | - | - | - |
| 2.7349 | 404500 | 0.3359 | - | - | - |
| 2.7356 | 404600 | 0.3113 | - | - | - |
| 2.7363 | 404700 | 0.3174 | - | - | - |
| 2.7370 | 404800 | 0.3754 | - | - | - |
| 2.7376 | 404900 | 0.2967 | - | - | - |
| 2.7383 | 405000 | 0.311 | 0.5380 | 0.7779 | - |
| 2.7390 | 405100 | 0.3554 | - | - | - |
| 2.7397 | 405200 | 0.2834 | - | - | - |
| 2.7403 | 405300 | 0.3313 | - | - | - |
| 2.7410 | 405400 | 0.3033 | - | - | - |
| 2.7417 | 405500 | 0.3003 | - | - | - |
| 2.7424 | 405600 | 0.3129 | - | - | - |
| 2.7431 | 405700 | 0.3055 | - | - | - |
| 2.7437 | 405800 | 0.3277 | - | - | - |
| 2.7444 | 405900 | 0.3138 | - | - | - |
| 2.7451 | 406000 | 0.286 | - | - | - |
| 2.7458 | 406100 | 0.3252 | - | - | - |
| 2.7464 | 406200 | 0.3103 | - | - | - |
| 2.7471 | 406300 | 0.3311 | - | - | - |
| 2.7478 | 406400 | 0.3052 | - | - | - |
| 2.7485 | 406500 | 0.2858 | - | - | - |
| 2.7491 | 406600 | 0.297 | - | - | - |
| 2.7498 | 406700 | 0.2967 | - | - | - |
| 2.7505 | 406800 | 0.322 | - | - | - |
| 2.7512 | 406900 | 0.2896 | - | - | - |
| 2.7518 | 407000 | 0.325 | - | - | - |
| 2.7525 | 407100 | 0.2928 | - | - | - |
| 2.7532 | 407200 | 0.3038 | - | - | - |
| 2.7539 | 407300 | 0.2659 | - | - | - |
| 2.7545 | 407400 | 0.3277 | - | - | - |
| 2.7552 | 407500 | 0.3513 | - | - | - |
| 2.7559 | 407600 | 0.2941 | - | - | - |
| 2.7566 | 407700 | 0.2625 | - | - | - |
| 2.7572 | 407800 | 0.2805 | - | - | - |
| 2.7579 | 407900 | 0.2678 | - | - | - |
| 2.7586 | 408000 | 0.3407 | - | - | - |
| 2.7593 | 408100 | 0.3406 | - | - | - |
| 2.7600 | 408200 | 0.3509 | - | - | - |
| 2.7606 | 408300 | 0.3036 | - | - | - |
| 2.7613 | 408400 | 0.3169 | - | - | - |
| 2.7620 | 408500 | 0.3128 | - | - | - |
| 2.7627 | 408600 | 0.3496 | - | - | - |
| 2.7633 | 408700 | 0.3056 | - | - | - |
| 2.7640 | 408800 | 0.3233 | - | - | - |
| 2.7647 | 408900 | 0.3174 | - | - | - |
| 2.7654 | 409000 | 0.314 | - | - | - |
| 2.7660 | 409100 | 0.3288 | - | - | - |
| 2.7667 | 409200 | 0.3705 | - | - | - |
| 2.7674 | 409300 | 0.3192 | - | - | - |
| 2.7681 | 409400 | 0.2721 | - | - | - |
| 2.7687 | 409500 | 0.3189 | - | - | - |
| 2.7694 | 409600 | 0.3862 | - | - | - |
| 2.7701 | 409700 | 0.3061 | - | - | - |
| 2.7708 | 409800 | 0.3023 | - | - | - |
| 2.7714 | 409900 | 0.3374 | - | - | - |
| 2.7721 | 410000 | 0.3039 | 0.5357 | 0.7810 | - |
| 2.7728 | 410100 | 0.3555 | - | - | - |
| 2.7735 | 410200 | 0.3054 | - | - | - |
| 2.7742 | 410300 | 0.3211 | - | - | - |
| 2.7748 | 410400 | 0.3102 | - | - | - |
| 2.7755 | 410500 | 0.3323 | - | - | - |
| 2.7762 | 410600 | 0.3018 | - | - | - |
| 2.7769 | 410700 | 0.3349 | - | - | - |
| 2.7775 | 410800 | 0.2874 | - | - | - |
| 2.7782 | 410900 | 0.3191 | - | - | - |
| 2.7789 | 411000 | 0.3119 | - | - | - |
| 2.7796 | 411100 | 0.3159 | - | - | - |
| 2.7802 | 411200 | 0.3205 | - | - | - |
| 2.7809 | 411300 | 0.3014 | - | - | - |
| 2.7816 | 411400 | 0.301 | - | - | - |
| 2.7823 | 411500 | 0.2984 | - | - | - |
| 2.7829 | 411600 | 0.3412 | - | - | - |
| 2.7836 | 411700 | 0.2783 | - | - | - |
| 2.7843 | 411800 | 0.3092 | - | - | - |
| 2.7850 | 411900 | 0.3393 | - | - | - |
| 2.7856 | 412000 | 0.3504 | - | - | - |
| 2.7863 | 412100 | 0.3658 | - | - | - |
| 2.7870 | 412200 | 0.3478 | - | - | - |
| 2.7877 | 412300 | 0.2646 | - | - | - |
| 2.7884 | 412400 | 0.3027 | - | - | - |
| 2.7890 | 412500 | 0.2889 | - | - | - |
| 2.7897 | 412600 | 0.2987 | - | - | - |
| 2.7904 | 412700 | 0.3317 | - | - | - |
| 2.7911 | 412800 | 0.293 | - | - | - |
| 2.7917 | 412900 | 0.2994 | - | - | - |
| 2.7924 | 413000 | 0.3144 | - | - | - |
| 2.7931 | 413100 | 0.3393 | - | - | - |
| 2.7938 | 413200 | 0.3053 | - | - | - |
| 2.7944 | 413300 | 0.3204 | - | - | - |
| 2.7951 | 413400 | 0.3269 | - | - | - |
| 2.7958 | 413500 | 0.3435 | - | - | - |
| 2.7965 | 413600 | 0.347 | - | - | - |
| 2.7971 | 413700 | 0.2918 | - | - | - |
| 2.7978 | 413800 | 0.3663 | - | - | - |
| 2.7985 | 413900 | 0.3364 | - | - | - |
| 2.7992 | 414000 | 0.2899 | - | - | - |
| 2.7998 | 414100 | 0.3113 | - | - | - |
| 2.8005 | 414200 | 0.3525 | - | - | - |
| 2.8012 | 414300 | 0.333 | - | - | - |
| 2.8019 | 414400 | 0.345 | - | - | - |
| 2.8026 | 414500 | 0.3044 | - | - | - |
| 2.8032 | 414600 | 0.3328 | - | - | - |
| 2.8039 | 414700 | 0.2952 | - | - | - |
| 2.8046 | 414800 | 0.3524 | - | - | - |
| 2.8053 | 414900 | 0.3175 | - | - | - |
| 2.8059 | 415000 | 0.315 | 0.5325 | 0.7799 | - |
| 2.8066 | 415100 | 0.3944 | - | - | - |
| 2.8073 | 415200 | 0.2733 | - | - | - |
| 2.8080 | 415300 | 0.3245 | - | - | - |
| 2.8086 | 415400 | 0.3063 | - | - | - |
| 2.8093 | 415500 | 0.3062 | - | - | - |
| 2.8100 | 415600 | 0.3036 | - | - | - |
| 2.8107 | 415700 | 0.2833 | - | - | - |
| 2.8113 | 415800 | 0.3012 | - | - | - |
| 2.8120 | 415900 | 0.3112 | - | - | - |
| 2.8127 | 416000 | 0.3012 | - | - | - |
| 2.8134 | 416100 | 0.3487 | - | - | - |
| 2.8140 | 416200 | 0.3423 | - | - | - |
| 2.8147 | 416300 | 0.3128 | - | - | - |
| 2.8154 | 416400 | 0.3451 | - | - | - |
| 2.8161 | 416500 | 0.3378 | - | - | - |
| 2.8167 | 416600 | 0.3396 | - | - | - |
| 2.8174 | 416700 | 0.3314 | - | - | - |
| 2.8181 | 416800 | 0.3284 | - | - | - |
| 2.8188 | 416900 | 0.3563 | - | - | - |
| 2.8195 | 417000 | 0.3322 | - | - | - |
| 2.8201 | 417100 | 0.288 | - | - | - |
| 2.8208 | 417200 | 0.303 | - | - | - |
| 2.8215 | 417300 | 0.2839 | - | - | - |
| 2.8222 | 417400 | 0.3499 | - | - | - |
| 2.8228 | 417500 | 0.2946 | - | - | - |
| 2.8235 | 417600 | 0.284 | - | - | - |
| 2.8242 | 417700 | 0.332 | - | - | - |
| 2.8249 | 417800 | 0.2855 | - | - | - |
| 2.8255 | 417900 | 0.3244 | - | - | - |
| 2.8262 | 418000 | 0.3189 | - | - | - |
| 2.8269 | 418100 | 0.3 | - | - | - |
| 2.8276 | 418200 | 0.3249 | - | - | - |
| 2.8282 | 418300 | 0.3143 | - | - | - |
| 2.8289 | 418400 | 0.3055 | - | - | - |
| 2.8296 | 418500 | 0.3046 | - | - | - |
| 2.8303 | 418600 | 0.3385 | - | - | - |
| 2.8309 | 418700 | 0.2647 | - | - | - |
| 2.8316 | 418800 | 0.3377 | - | - | - |
| 2.8323 | 418900 | 0.3181 | - | - | - |
| 2.8330 | 419000 | 0.3242 | - | - | - |
| 2.8337 | 419100 | 0.3109 | - | - | - |
| 2.8343 | 419200 | 0.2853 | - | - | - |
| 2.8350 | 419300 | 0.2959 | - | - | - |
| 2.8357 | 419400 | 0.3517 | - | - | - |
| 2.8364 | 419500 | 0.3489 | - | - | - |
| 2.8370 | 419600 | 0.3243 | - | - | - |
| 2.8377 | 419700 | 0.3092 | - | - | - |
| 2.8384 | 419800 | 0.3407 | - | - | - |
| 2.8391 | 419900 | 0.3473 | - | - | - |
| 2.8397 | 420000 | 0.3201 | 0.5361 | 0.7791 | - |
| 2.8404 | 420100 | 0.3172 | - | - | - |
| 2.8411 | 420200 | 0.3288 | - | - | - |
| 2.8418 | 420300 | 0.3608 | - | - | - |
| 2.8424 | 420400 | 0.3263 | - | - | - |
| 2.8431 | 420500 | 0.3232 | - | - | - |
| 2.8438 | 420600 | 0.2952 | - | - | - |
| 2.8445 | 420700 | 0.3023 | - | - | - |
| 2.8451 | 420800 | 0.3071 | - | - | - |
| 2.8458 | 420900 | 0.3445 | - | - | - |
| 2.8465 | 421000 | 0.2883 | - | - | - |
| 2.8472 | 421100 | 0.346 | - | - | - |
| 2.8479 | 421200 | 0.2749 | - | - | - |
| 2.8485 | 421300 | 0.3086 | - | - | - |
| 2.8492 | 421400 | 0.3309 | - | - | - |
| 2.8499 | 421500 | 0.3348 | - | - | - |
| 2.8506 | 421600 | 0.3286 | - | - | - |
| 2.8512 | 421700 | 0.2793 | - | - | - |
| 2.8519 | 421800 | 0.3026 | - | - | - |
| 2.8526 | 421900 | 0.2995 | - | - | - |
| 2.8533 | 422000 | 0.3361 | - | - | - |
| 2.8539 | 422100 | 0.3415 | - | - | - |
| 2.8546 | 422200 | 0.2957 | - | - | - |
| 2.8553 | 422300 | 0.3287 | - | - | - |
| 2.8560 | 422400 | 0.3144 | - | - | - |
| 2.8566 | 422500 | 0.2691 | - | - | - |
| 2.8573 | 422600 | 0.3293 | - | - | - |
| 2.8580 | 422700 | 0.3184 | - | - | - |
| 2.8587 | 422800 | 0.3228 | - | - | - |
| 2.8593 | 422900 | 0.295 | - | - | - |
| 2.8600 | 423000 | 0.3057 | - | - | - |
| 2.8607 | 423100 | 0.2919 | - | - | - |
| 2.8614 | 423200 | 0.2925 | - | - | - |
| 2.8620 | 423300 | 0.3041 | - | - | - |
| 2.8627 | 423400 | 0.3199 | - | - | - |
| 2.8634 | 423500 | 0.3001 | - | - | - |
| 2.8641 | 423600 | 0.3767 | - | - | - |
| 2.8648 | 423700 | 0.2825 | - | - | - |
| 2.8654 | 423800 | 0.3174 | - | - | - |
| 2.8661 | 423900 | 0.343 | - | - | - |
| 2.8668 | 424000 | 0.3043 | - | - | - |
| 2.8675 | 424100 | 0.2764 | - | - | - |
| 2.8681 | 424200 | 0.3205 | - | - | - |
| 2.8688 | 424300 | 0.2876 | - | - | - |
| 2.8695 | 424400 | 0.3312 | - | - | - |
| 2.8702 | 424500 | 0.2892 | - | - | - |
| 2.8708 | 424600 | 0.3022 | - | - | - |
| 2.8715 | 424700 | 0.2852 | - | - | - |
| 2.8722 | 424800 | 0.2933 | - | - | - |
| 2.8729 | 424900 | 0.3242 | - | - | - |
| 2.8735 | 425000 | 0.314 | 0.5364 | 0.7805 | - |
| 2.8742 | 425100 | 0.2706 | - | - | - |
| 2.8749 | 425200 | 0.2865 | - | - | - |
| 2.8756 | 425300 | 0.3138 | - | - | - |
| 2.8762 | 425400 | 0.3016 | - | - | - |
| 2.8769 | 425500 | 0.2615 | - | - | - |
| 2.8776 | 425600 | 0.3108 | - | - | - |
| 2.8783 | 425700 | 0.3419 | - | - | - |
| 2.8790 | 425800 | 0.2876 | - | - | - |
| 2.8796 | 425900 | 0.3284 | - | - | - |
| 2.8803 | 426000 | 0.2979 | - | - | - |
| 2.8810 | 426100 | 0.3168 | - | - | - |
| 2.8817 | 426200 | 0.3123 | - | - | - |
| 2.8823 | 426300 | 0.3244 | - | - | - |
| 2.8830 | 426400 | 0.2797 | - | - | - |
| 2.8837 | 426500 | 0.2649 | - | - | - |
| 2.8844 | 426600 | 0.2941 | - | - | - |
| 2.8850 | 426700 | 0.2882 | - | - | - |
| 2.8857 | 426800 | 0.2965 | - | - | - |
| 2.8864 | 426900 | 0.3306 | - | - | - |
| 2.8871 | 427000 | 0.3258 | - | - | - |
| 2.8877 | 427100 | 0.3247 | - | - | - |
| 2.8884 | 427200 | 0.2605 | - | - | - |
| 2.8891 | 427300 | 0.2763 | - | - | - |
| 2.8898 | 427400 | 0.3633 | - | - | - |
| 2.8904 | 427500 | 0.3124 | - | - | - |
| 2.8911 | 427600 | 0.3058 | - | - | - |
| 2.8918 | 427700 | 0.3126 | - | - | - |
| 2.8925 | 427800 | 0.2909 | - | - | - |
| 2.8932 | 427900 | 0.3314 | - | - | - |
| 2.8938 | 428000 | 0.2955 | - | - | - |
| 2.8945 | 428100 | 0.3097 | - | - | - |
| 2.8952 | 428200 | 0.3123 | - | - | - |
| 2.8959 | 428300 | 0.3209 | - | - | - |
| 2.8965 | 428400 | 0.3115 | - | - | - |
| 2.8972 | 428500 | 0.2841 | - | - | - |
| 2.8979 | 428600 | 0.3047 | - | - | - |
| 2.8986 | 428700 | 0.2948 | - | - | - |
| 2.8992 | 428800 | 0.3115 | - | - | - |
| 2.8999 | 428900 | 0.2966 | - | - | - |
| 2.9006 | 429000 | 0.298 | - | - | - |
| 2.9013 | 429100 | 0.3417 | - | - | - |
| 2.9019 | 429200 | 0.3151 | - | - | - |
| 2.9026 | 429300 | 0.3171 | - | - | - |
| 2.9033 | 429400 | 0.3234 | - | - | - |
| 2.9040 | 429500 | 0.3282 | - | - | - |
| 2.9046 | 429600 | 0.3123 | - | - | - |
| 2.9053 | 429700 | 0.3168 | - | - | - |
| 2.9060 | 429800 | 0.3265 | - | - | - |
| 2.9067 | 429900 | 0.3601 | - | - | - |
| 2.9074 | 430000 | 0.316 | 0.5341 | 0.7830 | - |
| 2.9080 | 430100 | 0.3256 | - | - | - |
| 2.9087 | 430200 | 0.3405 | - | - | - |
| 2.9094 | 430300 | 0.3408 | - | - | - |
| 2.9101 | 430400 | 0.3313 | - | - | - |
| 2.9107 | 430500 | 0.2975 | - | - | - |
| 2.9114 | 430600 | 0.3396 | - | - | - |
| 2.9121 | 430700 | 0.2966 | - | - | - |
| 2.9128 | 430800 | 0.3354 | - | - | - |
| 2.9134 | 430900 | 0.2806 | - | - | - |
| 2.9141 | 431000 | 0.2948 | - | - | - |
| 2.9148 | 431100 | 0.3184 | - | - | - |
| 2.9155 | 431200 | 0.3456 | - | - | - |
| 2.9161 | 431300 | 0.3159 | - | - | - |
| 2.9168 | 431400 | 0.3139 | - | - | - |
| 2.9175 | 431500 | 0.2922 | - | - | - |
| 2.9182 | 431600 | 0.3367 | - | - | - |
| 2.9188 | 431700 | 0.3493 | - | - | - |
| 2.9195 | 431800 | 0.313 | - | - | - |
| 2.9202 | 431900 | 0.3161 | - | - | - |
| 2.9209 | 432000 | 0.322 | - | - | - |
| 2.9215 | 432100 | 0.2878 | - | - | - |
| 2.9222 | 432200 | 0.2934 | - | - | - |
| 2.9229 | 432300 | 0.3342 | - | - | - |
| 2.9236 | 432400 | 0.277 | - | - | - |
| 2.9243 | 432500 | 0.2605 | - | - | - |
| 2.9249 | 432600 | 0.3078 | - | - | - |
| 2.9256 | 432700 | 0.3273 | - | - | - |
| 2.9263 | 432800 | 0.3207 | - | - | - |
| 2.9270 | 432900 | 0.2812 | - | - | - |
| 2.9276 | 433000 | 0.3378 | - | - | - |
| 2.9283 | 433100 | 0.3272 | - | - | - |
| 2.9290 | 433200 | 0.3119 | - | - | - |
| 2.9297 | 433300 | 0.2942 | - | - | - |
| 2.9303 | 433400 | 0.2741 | - | - | - |
| 2.9310 | 433500 | 0.3115 | - | - | - |
| 2.9317 | 433600 | 0.3019 | - | - | - |
| 2.9324 | 433700 | 0.2902 | - | - | - |
| 2.9330 | 433800 | 0.3253 | - | - | - |
| 2.9337 | 433900 | 0.2985 | - | - | - |
| 2.9344 | 434000 | 0.3078 | - | - | - |
| 2.9351 | 434100 | 0.3854 | - | - | - |
| 2.9357 | 434200 | 0.2974 | - | - | - |
| 2.9364 | 434300 | 0.2922 | - | - | - |
| 2.9371 | 434400 | 0.3166 | - | - | - |
| 2.9378 | 434500 | 0.3247 | - | - | - |
| 2.9385 | 434600 | 0.2662 | - | - | - |
| 2.9391 | 434700 | 0.2796 | - | - | - |
| 2.9398 | 434800 | 0.2981 | - | - | - |
| 2.9405 | 434900 | 0.3049 | - | - | - |
| 2.9412 | 435000 | 0.2975 | 0.5333 | 0.7836 | - |
| 2.9418 | 435100 | 0.295 | - | - | - |
| 2.9425 | 435200 | 0.3076 | - | - | - |
| 2.9432 | 435300 | 0.3302 | - | - | - |
| 2.9439 | 435400 | 0.277 | - | - | - |
| 2.9445 | 435500 | 0.3219 | - | - | - |
| 2.9452 | 435600 | 0.2785 | - | - | - |
| 2.9459 | 435700 | 0.3077 | - | - | - |
| 2.9466 | 435800 | 0.2837 | - | - | - |
| 2.9472 | 435900 | 0.3695 | - | - | - |
| 2.9479 | 436000 | 0.3068 | - | - | - |
| 2.9486 | 436100 | 0.301 | - | - | - |
| 2.9493 | 436200 | 0.316 | - | - | - |
| 2.9499 | 436300 | 0.3299 | - | - | - |
| 2.9506 | 436400 | 0.3464 | - | - | - |
| 2.9513 | 436500 | 0.3192 | - | - | - |
| 2.9520 | 436600 | 0.3137 | - | - | - |
| 2.9527 | 436700 | 0.2981 | - | - | - |
| 2.9533 | 436800 | 0.2997 | - | - | - |
| 2.9540 | 436900 | 0.3171 | - | - | - |
| 2.9547 | 437000 | 0.3397 | - | - | - |
| 2.9554 | 437100 | 0.314 | - | - | - |
| 2.9560 | 437200 | 0.3004 | - | - | - |
| 2.9567 | 437300 | 0.3258 | - | - | - |
| 2.9574 | 437400 | 0.2851 | - | - | - |
| 2.9581 | 437500 | 0.3258 | - | - | - |
| 2.9587 | 437600 | 0.3471 | - | - | - |
| 2.9594 | 437700 | 0.3699 | - | - | - |
| 2.9601 | 437800 | 0.2801 | - | - | - |
| 2.9608 | 437900 | 0.3349 | - | - | - |
| 2.9614 | 438000 | 0.3389 | - | - | - |
| 2.9621 | 438100 | 0.2557 | - | - | - |
| 2.9628 | 438200 | 0.293 | - | - | - |
| 2.9635 | 438300 | 0.3525 | - | - | - |
| 2.9641 | 438400 | 0.3515 | - | - | - |
| 2.9648 | 438500 | 0.3027 | - | - | - |
| 2.9655 | 438600 | 0.337 | - | - | - |
| 2.9662 | 438700 | 0.3426 | - | - | - |
| 2.9668 | 438800 | 0.291 | - | - | - |
| 2.9675 | 438900 | 0.3119 | - | - | - |
| 2.9682 | 439000 | 0.3371 | - | - | - |
| 2.9689 | 439100 | 0.3183 | - | - | - |
| 2.9696 | 439200 | 0.3517 | - | - | - |
| 2.9702 | 439300 | 0.3263 | - | - | - |
| 2.9709 | 439400 | 0.3055 | - | - | - |
| 2.9716 | 439500 | 0.3171 | - | - | - |
| 2.9723 | 439600 | 0.2815 | - | - | - |
| 2.9729 | 439700 | 0.3069 | - | - | - |
| 2.9736 | 439800 | 0.332 | - | - | - |
| 2.9743 | 439900 | 0.3461 | - | - | - |
| 2.9750 | 440000 | 0.2879 | 0.5318 | 0.7851 | - |
| 2.9756 | 440100 | 0.354 | - | - | - |
| 2.9763 | 440200 | 0.3224 | - | - | - |
| 2.9770 | 440300 | 0.3787 | - | - | - |
| 2.9777 | 440400 | 0.3171 | - | - | - |
| 2.9783 | 440500 | 0.3004 | - | - | - |
| 2.9790 | 440600 | 0.2808 | - | - | - |
| 2.9797 | 440700 | 0.2999 | - | - | - |
| 2.9804 | 440800 | 0.3059 | - | - | - |
| 2.9810 | 440900 | 0.3219 | - | - | - |
| 2.9817 | 441000 | 0.3017 | - | - | - |
| 2.9824 | 441100 | 0.3481 | - | - | - |
| 2.9831 | 441200 | 0.3136 | - | - | - |
| 2.9838 | 441300 | 0.3722 | - | - | - |
| 2.9844 | 441400 | 0.309 | - | - | - |
| 2.9851 | 441500 | 0.3126 | - | - | - |
| 2.9858 | 441600 | 0.3474 | - | - | - |
| 2.9865 | 441700 | 0.3167 | - | - | - |
| 2.9871 | 441800 | 0.3302 | - | - | - |
| 2.9878 | 441900 | 0.3047 | - | - | - |
| 2.9885 | 442000 | 0.3353 | - | - | - |
| 2.9892 | 442100 | 0.2927 | - | - | - |
| 2.9898 | 442200 | 0.3905 | - | - | - |
| 2.9905 | 442300 | 0.3256 | - | - | - |
| 2.9912 | 442400 | 0.3546 | - | - | - |
| 2.9919 | 442500 | 0.2989 | - | - | - |
| 2.9925 | 442600 | 0.3113 | - | - | - |
| 2.9932 | 442700 | 0.3127 | - | - | - |
| 2.9939 | 442800 | 0.3393 | - | - | - |
| 2.9946 | 442900 | 0.2916 | - | - | - |
| 2.9952 | 443000 | 0.3403 | - | - | - |
| 2.9959 | 443100 | 0.318 | - | - | - |
| 2.9966 | 443200 | 0.3252 | - | - | - |
| 2.9973 | 443300 | 0.2852 | - | - | - |
| 2.9980 | 443400 | 0.3143 | - | - | - |
| 2.9986 | 443500 | 0.3042 | - | - | - |
| 2.9993 | 443600 | 0.3474 | - | - | - |
| 3.0000 | 443700 | 0.3281 | - | - | - |
| 3.0 | 443703 | - | - | - | 0.7955 |
</details>
### Framework Versions
- Python: 3.11.8
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
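To reproduce this environment, the versions listed above can be pinned at install time. A minimal sketch (package names assumed to match their PyPI distributions; the PyTorch CUDA build may need the appropriate index URL for your platform):

```shell
pip install \
  "sentence-transformers==3.3.1" \
  "transformers==4.48.0" \
  "accelerate==1.3.0" \
  "datasets==3.2.0" \
  "tokenizers==0.21.0" \
  "torch==2.5.1"
```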
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "language": ["hu"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1207229", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "a fara név azt jelenti", "sentences": ["Utcafesztiváltól, egyhetes fesztiválon át havas fesztiválig! Ezek a fesztiválok lehetőséget kínálnak arra, hogy egy helyszínen megkóstolják a Bend sörfőzdék kínálatát – valójában több helyszínen egész évben! Kóstoljon meg egy jó főzetet, hallgasson néhány jó dallamot, és találkozzon a sörfőzőkkel! Itt a sör ideje!", "Fara /fara/ [2 szótag.] mint lánynév középangol és arab eredetű, a Fara név jelentése kedves, kellemes. A Fara a Farrah (közép angol, arab) változata: az angol fair szóból származik. Hasonlítsa össze a Fura vezetéknevet.", "A Fara név angolul azt jelenti: utazó. A Fara név angol névből származik. A Fara nevet leggyakrabban lánynévként vagy női névként használják."]}, {"source_sentence": "aki a kis Willie-t énekelte", "sentences": ["Valaki énekel", "William Edward Little Willie John (1937. november 15. – 1968. május 26.) amerikai rock 'n' roll és R&B énekes, aki az 1950-es években és az 1960-as évek elején lépett fel. Leginkább a lemezlistákon elért sikereiről ismert, olyan dalokkal, mint az All Around the World (1955), a Need Your Love So Bad (1956) és a Fever (1956).", "A dal a Little Willy lenne, Sweet előadásában. Mert a kis Willy, Willy nem megy haza. De nem lökheti körbe Willyt. Willy nem megy, próbáld meg mindenkinek elmondani, de nem. 
Kicsi Willy, Willy nem megy haza."]}, {"source_sentence": "Amikor 1901-ben megházasodott, feleségével (Olga Knipper, a Moszkvai Művészeti Színház munkatársa) közvetlenül a szertartásról mentek nászútra egy szanatóriumba.", "sentences": ["Amikor 1901-ben feleségül vette a feleségét, a szertartásról egyenesen a nászútra mentek.", "Ez egyenlő a hullám sebességével, osztva a frekvenciával. A hullámhosszt méter egységekben (m) fejezzük ki. λ = hullámhossz, a hullámhegyek közötti távolság (m). v = hullámsebesség, a hullámok mozgásának sebessége egy irányban (m/s). f = frekvencia, a hullámhegyek, amelyek egy bizonyos idő alatt átmennek egy ponton (ciklus/s vagy Hz). A hullámhossz képletre vonatkozó kérdések: 1) A hang sebessége körülbelül 340 m/s. Keresse meg egy olyan hanghullám hullámhosszát, amelynek frekvenciája 20,0 ciklus/másodperc (az emberi hallás alsó határa). Válasz: A hullámsebesség v = 340 m/s, és a frekvencia f = 20,0 ciklus/s. = hullámhossz, a hullámhegyek közötti távolság (m). v = hullámsebesség, az a sebesség, amellyel a hullámok egy irányban mozognak (m/s). f = frekvencia, a hullámhegyek, amelyek egy bizonyos idő alatt átmennek egy ponton (ciklus/s vagy Hz). A hullámhossz képletre vonatkozó kérdések: 1) A hangsebesség körülbelül 340 m/s.", "A felesége soha nem járt szanatóriumba."]}, {"source_sentence": "aki Elizabeth Blackwell volt", "sentences": ["A módosítás definíciója valaminek a megváltoztatása, kiegészítése vagy átfogalmazása, leggyakrabban javítási szándékkal. Példa a módosításra az Egyesült Államok alkotmányának módosításai. 1 jobb változás; javulás. hibák, hibák stb. javítása.", "Elizabeth Blackwell (1707[1] – 1758) skót botanikai illusztrátor és író volt, aki leginkább az 1737 és 1739 között megjelent A Curious Herbal tányérjainak művészeként és metszőjeként volt ismert.", "Elizabeth Blackwell volt az első nő Amerikában, aki orvosi diplomát kapott. 
Úttörő szerepet vállalt a nők orvostudományi oktatásában, és saját orvosi főiskolát nyitott a nők számára. Ő volt az első nő, akit felvették a brit orvosi nyilvántartásba, lehetővé téve számára, hogy az Egyesült Királyságban és az Egyesült Államokban is praktizáljon."]}, {"source_sentence": "a sellő szindróma genetikai okai", "sentences": ["Rfcamat válasza. Bizalom szavazat: 459. Ha sellő-szindrómásod van, akkor vele születtél volna, és inkább hasadt volna a lábad, vagy mindkettőt amputálták volna. A sellőszindróma oka a test alsó részének (lábainak) oxigén- és tápanyaghiánya a keringési rendszer problémája miatt.További információ az alábbi linken.a sellő szindrómát nem kaphatja meg. Ez egy veleszületett állapot, ami azt jelenti, hogy vele kell születned ahhoz, hogy meglegyen. A betegségben szenvedő személy nem sellő, csak arról van szó, hogy a lábai összeforrtak. Számos belső szerv hiányzik vagy deformálódott.", "Vezessen be lágy, nyájas ételeket, például pudingot, almaszószt vagy joghurtot. A krémes anyagokat könnyű lenyelni, különösebb fájdalom nélkül. Lassan adjon be több ételt, amint a torokfájás javulni kezd. A sült gyümölcsök és zöldségek, például sült alma, sült körte és sült sárgarépa jó választás köretekhez. A burgonyapüré, az őszi tök, a sima tészta és a rizs is ideális lágy ételek. Ezen kívül zöldséget, tésztát és/vagy tésztát tartalmazó levesek vagy a puha húsdarabok egészséges választás a mandulagyulladásban szenvedő betegek számára. Válasszon olyan szilárd ételeket, amelyek nem irritálják a torkát, mint például a sült csirke, marhasült, teljes kiőrlésű kenyerek és egész gyümölcsök. A kemény kekszek, a pizzahéjak, a ropogós kekszek és a ropogtatnivalók túl kemények és ropogósak ahhoz, hogy torokfájása élvezhesse. 
Őrizze meg ezeket az ételeket, amíg teljesen felépül.", "1 A sellő-szindróma annak a következménye is lehet, hogy az anya sugárzásnak és más környezeti hatásoknak van kitéve, amelyek a magzat normális fejlődésében részt vevő gének mutációit okozták. 2 Spontán mutációk vagy a magzatban természetesen előforduló mutációk is okozhatták a születési rendellenességet. Kutatásokra van szükség ahhoz, hogy kiderítsük a sellőszindróma genetikai, biológiai vagy környezeti okait. A sellő szindróma kezelése. Ha a két láb csak a bőrön keresztül olvadt össze, és a három fő csont teljesen és megfelelően kialakult, műtétet alkalmaznak a két láb szétválasztására."]}], "model-index": [{"name": "paraphrase-multilingual-MiniLM-L12-hu-v3", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "all triplet dev", "type": "all-triplet-dev"}, "metrics": [{"type": "cosine_accuracy", "value": 0.785140562248996, "name": "Cosine Accuracy"}]}, {"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "all triplet test", "type": "all-triplet-test"}, "metrics": [{"type": "cosine_accuracy", "value": 0.795494077694028, "name": "Cosine Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,411 |
aoiferyan/625_imbalanced_model_4score
|
aoiferyan
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-04-06T08:34:31Z |
2023-04-06T08:34:42+00:00
| 11 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# 625_imbalanced_model_4score
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
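The two-step idea above can be illustrated in plain Python. This is only a sketch: it replaces SetFit's actual logistic-regression head with a nearest-centroid classifier over (hypothetical) sentence embeddings, to show how a lightweight head is fit on top of a frozen encoder's output vectors:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fit_centroids(embeddings, labels):
    # Step-2 stand-in: average the embeddings of each class
    # (SetFit really trains a logistic-regression head here).
    sums, counts = {}, {}
    for emb, lab in zip(embeddings, labels):
        acc = sums.setdefault(lab, [0.0] * len(emb))
        for i, x in enumerate(emb):
            acc[i] += x
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [x / counts[lab] for x in acc] for lab, acc in sums.items()}

def predict(centroids, emb):
    # Assign the label whose class centroid is most cosine-similar.
    return max(centroids, key=lambda lab: cosine(centroids[lab], emb))

# Toy 2-d "embeddings" standing in for encoder outputs.
cents = fit_centroids([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]],
                      ["pos", "pos", "neg"])
print(predict(cents, [1.0, 0.2]))  # closest to the "pos" centroid
```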
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("625_imbalanced_model_4score")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# 625_imbalanced_model_4score
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("625_imbalanced_model_4score")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,412 |
Cyber-ThreaD/CyBERT-APTNER
|
Cyber-ThreaD
|
token-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:SynamicTechnologies/CYBERT",
"base_model:finetune:SynamicTechnologies/CYBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-06T16:06:31Z |
2024-12-02T13:49:38+00:00
| 3 | 0 |
---
base_model: SynamicTechnologies/CYBERT
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: Cyber-ThreaD/CyBERT-APTNER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cyber-ThreaD/CyBERT-APTNER
This model is a fine-tuned version of [SynamicTechnologies/CYBERT](https://huggingface.co/SynamicTechnologies/CYBERT) on the [APTNER](https://github.com/wangxuren/APTNER) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4543
- Precision: 0.4372
- Recall: 0.4293
- F1: 0.4332
- Accuracy: 0.9043
It achieves the following results on the prediction set:
- Loss: 0.3602
- Precision: 0.5489
- Recall: 0.5125
- F1: 0.5301
- Accuracy: 0.9220
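As a quick consistency check, each reported F1 is the harmonic mean of the corresponding precision and recall (the values below are taken from the metrics above):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Evaluation set: precision 0.4372, recall 0.4293
print(round(f1_score(0.4372, 0.4293), 4))  # -> 0.4332
# Prediction set: precision 0.5489, recall 0.5125
print(round(f1_score(0.5489, 0.5125), 4))  # -> 0.5301
```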
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
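With the linear scheduler (and assuming no warmup steps, the Trainer default), the learning rate decays from 2e-05 at the first step to 0 at the last step; a minimal sketch, where the total step count is illustrative:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-5) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 8430  # illustrative: roughly 10 epochs for this run
print(linear_lr(0, total))           # 2e-05 at the start of training
print(linear_lr(total // 2, total))  # 1e-05 halfway through
print(linear_lr(total, total))       # 0.0 at the end
```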
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8182 | 0.59 | 500 | 0.5569 | 0.5219 | 0.2431 | 0.3317 | 0.9014 |
| 0.5357 | 1.19 | 1000 | 0.4980 | 0.4625 | 0.2895 | 0.3561 | 0.9024 |
| 0.4417 | 1.78 | 1500 | 0.4773 | 0.4029 | 0.3572 | 0.3787 | 0.9016 |
| 0.394 | 2.37 | 2000 | 0.4840 | 0.3697 | 0.3943 | 0.3816 | 0.8943 |
| 0.3534 | 2.97 | 2500 | 0.4742 | 0.3586 | 0.4437 | 0.3966 | 0.8914 |
| 0.3048 | 3.56 | 3000 | 0.4543 | 0.4372 | 0.4293 | 0.4332 | 0.9043 |
| 0.2992 | 4.15 | 3500 | 0.4846 | 0.3587 | 0.4392 | 0.3949 | 0.8907 |
| 0.2675 | 4.74 | 4000 | 0.4760 | 0.4100 | 0.4530 | 0.4304 | 0.9000 |
| 0.2454 | 5.34 | 4500 | 0.4702 | 0.4123 | 0.4407 | 0.4260 | 0.9014 |
| 0.2391 | 5.93 | 5000 | 0.4743 | 0.3957 | 0.4638 | 0.4270 | 0.8979 |
| 0.2088 | 6.52 | 5500 | 0.4778 | 0.4224 | 0.4485 | 0.4351 | 0.9038 |
| 0.2076 | 7.12 | 6000 | 0.5050 | 0.3736 | 0.4644 | 0.4140 | 0.8930 |
| 0.1946 | 7.71 | 6500 | 0.4964 | 0.4009 | 0.4599 | 0.4284 | 0.8977 |
| 0.1808 | 8.3 | 7000 | 0.4878 | 0.4226 | 0.4554 | 0.4384 | 0.9028 |
| 0.1683 | 8.9 | 7500 | 0.4947 | 0.3954 | 0.4626 | 0.4264 | 0.8976 |
| 0.1681 | 9.49 | 8000 | 0.4916 | 0.4081 | 0.4662 | 0.4352 | 0.9001 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citing & Authors
If you use the model, kindly cite the following work:
```
@inproceedings{deka2024attacker,
title={AttackER: Towards Enhancing Cyber-Attack Attribution with a Named Entity Recognition Dataset},
author={Deka, Pritam and Rajapaksha, Sampath and Rani, Ruby and Almutairi, Amirah and Karafili, Erisa},
booktitle={International Conference on Web Information Systems Engineering},
pages={255--270},
year={2024},
organization={Springer}
}
```
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cyber-ThreaD/CyBERT-APTNER
This model is a fine-tuned version of [SynamicTechnologies/CYBERT](https://huggingface.co/SynamicTechnologies/CYBERT) on the [APTNER](https://github.com/wangxuren/APTNER) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4543
- Precision: 0.4372
- Recall: 0.4293
- F1: 0.4332
- Accuracy: 0.9043
It achieves the following results on the prediction set:
- Loss: 0.3602
- Precision: 0.5489
- Recall: 0.5125
- F1: 0.5301
- Accuracy: 0.9220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8182 | 0.59 | 500 | 0.5569 | 0.5219 | 0.2431 | 0.3317 | 0.9014 |
| 0.5357 | 1.19 | 1000 | 0.4980 | 0.4625 | 0.2895 | 0.3561 | 0.9024 |
| 0.4417 | 1.78 | 1500 | 0.4773 | 0.4029 | 0.3572 | 0.3787 | 0.9016 |
| 0.394 | 2.37 | 2000 | 0.4840 | 0.3697 | 0.3943 | 0.3816 | 0.8943 |
| 0.3534 | 2.97 | 2500 | 0.4742 | 0.3586 | 0.4437 | 0.3966 | 0.8914 |
| 0.3048 | 3.56 | 3000 | 0.4543 | 0.4372 | 0.4293 | 0.4332 | 0.9043 |
| 0.2992 | 4.15 | 3500 | 0.4846 | 0.3587 | 0.4392 | 0.3949 | 0.8907 |
| 0.2675 | 4.74 | 4000 | 0.4760 | 0.4100 | 0.4530 | 0.4304 | 0.9000 |
| 0.2454 | 5.34 | 4500 | 0.4702 | 0.4123 | 0.4407 | 0.4260 | 0.9014 |
| 0.2391 | 5.93 | 5000 | 0.4743 | 0.3957 | 0.4638 | 0.4270 | 0.8979 |
| 0.2088 | 6.52 | 5500 | 0.4778 | 0.4224 | 0.4485 | 0.4351 | 0.9038 |
| 0.2076 | 7.12 | 6000 | 0.5050 | 0.3736 | 0.4644 | 0.4140 | 0.8930 |
| 0.1946 | 7.71 | 6500 | 0.4964 | 0.4009 | 0.4599 | 0.4284 | 0.8977 |
| 0.1808 | 8.3 | 7000 | 0.4878 | 0.4226 | 0.4554 | 0.4384 | 0.9028 |
| 0.1683 | 8.9 | 7500 | 0.4947 | 0.3954 | 0.4626 | 0.4264 | 0.8976 |
| 0.1681 | 9.49 | 8000 | 0.4916 | 0.4081 | 0.4662 | 0.4352 | 0.9001 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citing & Authors
If you use the model, kindly cite the following work:
```
@inproceedings{deka2024attacker,
title={AttackER: Towards Enhancing Cyber-Attack Attribution with a Named Entity Recognition Dataset},
author={Deka, Pritam and Rajapaksha, Sampath and Rani, Ruby and Almutairi, Amirah and Karafili, Erisa},
booktitle={International Conference on Web Information Systems Engineering},
pages={255--270},
year={2024},
organization={Springer}
}
```
|
{"base_model": "SynamicTechnologies/CYBERT", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Cyber-ThreaD/CyBERT-APTNER", "results": []}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 44,413 |
MurkatG/review-summarizer-en
|
MurkatG
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"en",
"dataset:amazon_reviews_multi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-20T16:37:52Z |
2023-01-22T08:50:04+00:00
| 126 | 4 |
---
datasets:
- amazon_reviews_multi
language:
- en
metrics:
- rouge
pipeline_tag: summarization
---
# Model Card for Model ID
A model trained to summarize product reviews.
| null |
Non_BioNLP
|
# Model Card for Model ID
A model trained to summarize product reviews.
|
{"datasets": ["amazon_reviews_multi"], "language": ["en"], "metrics": ["rouge"], "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 44,414 |
HasinMDG/Deberta_Sentiment_Toward_Topics_Baseline
|
HasinMDG
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-03-01T17:11:53Z |
2023-03-01T17:12:15+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# HasinMDG/Deberta_Sentiment_Toward_Topics_Baseline
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/Deberta_Sentiment_Toward_Topics_Baseline")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# HasinMDG/Deberta_Sentiment_Toward_Topics_Baseline
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/Deberta_Sentiment_Toward_Topics_Baseline")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,415 |
RichardErkhov/databricks_-_dolly-v2-12b-4bits
|
RichardErkhov
|
text-generation
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-04-14T21:13:27Z |
2024-04-14T21:19:24+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dolly-v2-12b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-12b/
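This repository stores the weights already serialized in 4-bit bitsandbytes format; a minimal loading sketch (not executed here; assumes a CUDA GPU with the `bitsandbytes` and `accelerate` packages installed, and that the checkpoint's config carries the 4-bit quantization settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/databricks_-_dolly-v2-12b-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The bitsandbytes 4-bit settings are read from the checkpoint's config,
# so no explicit BitsAndBytesConfig should be needed here.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```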
Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-12b Model Card
## Summary
Databricks' `dolly-v2-12b` is an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly
high quality instruction following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these smaller models sizes:
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instruction with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpuses.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
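The `gmean` column can be reproduced from the seven task scores; a short sketch, checked against the `databricks/dolly-v2-12b` row:

```python
import math

def geometric_mean(scores):
    """Geometric mean: exponential of the mean of the logs."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

dolly_v2_12b = [0.408, 0.63931, 0.616417, 0.707927, 0.388225, 0.757889, 0.568196]
print(round(geometric_mean(dolly_v2_12b), 5))  # -> 0.56781
```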
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking!
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dolly-v2-12b - bnb 4bits
- Model creator: https://huggingface.co/databricks/
- Original model: https://huggingface.co/databricks/dolly-v2-12b/
Original model description:
---
license: mit
language:
- en
library_name: transformers
inference: false
datasets:
- databricks/databricks-dolly-15k
---
# dolly-v2-12b Model Card
## Summary
Databricks' `dolly-v2-12b` is an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly
high quality instruction following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these smaller models sizes:
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter model based on `pythia-2.8b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instruction with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpuses.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` on some evaluation benchmarks. We believe this is due to the composition and size of the underlying fine-tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
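The `gmean` column is simply the geometric mean of the seven per-task scores; for example, the dolly-v2-12b row can be recomputed as:

```python
import math

# Recompute the geometric-mean column for dolly-v2-12b from its seven
# per-task scores in the table above.
def geometric_mean(scores):
    return math.prod(scores) ** (1 / len(scores))

dolly_v2_12b = [0.408, 0.63931, 0.616417, 0.707927, 0.388225, 0.757889, 0.568196]
print(round(geometric_mean(dolly_v2_12b), 5))  # → 0.56781
```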
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking!
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 44,416 |
gaudi/opus-mt-fi-hr-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:51:34Z |
2024-10-19T03:33:51+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fi-hr)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
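Concretely, the throughput metric is generated target tokens divided by wall-clock seconds, averaged over runs; a toy sketch with made-up run data:

```python
# Toy illustration of the tokens-per-second metric: generated target tokens
# divided by wall-clock seconds, averaged over several runs. The run data
# below is made up for illustration only.
runs = [(5000, 8.4), (5000, 8.6), (5000, 8.5)]  # (target tokens, seconds)
rates = [tokens / seconds for tokens, seconds in runs]
mean_rate = sum(rates) / len(rates)
print(round(mean_rate, 1))
```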
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fi-hr).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fi-hr --output_dir ./ctranslate2/opus-mt-fi-hr-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fi-hr-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fi-hr-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fi-hr-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fi-hr) by Helsinki-NLP.
| null |
Non_BioNLP
|
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 44,417 |
google/paligemma-3b-ft-scicap-448-jax
|
google
|
image-text-to-text
|
[
"big_vision",
"paligemma",
"jax",
"image-text-to-text",
"arxiv:2110.11624",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2205.12522",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"arxiv:2407.07726",
"license:gemma",
"region:us"
] | 2024-05-12T00:39:07Z |
2024-07-19T12:09:19+00:00
| 0 | 0 |
---
library_name: big_vision
license: gemma
pipeline_tag: image-text-to-text
tags:
- paligemma
- jax
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
JAX/FLAX PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://arxiv.org/abs/2110.11624">SciCap</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/scicap.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-scicap-448-jax)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tuning performance on a wide range of vision-language tasks such as image and short-video captioning, visual question answering, text reading, object detection, and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
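Following the PaLI-3 recipe, PaliGemma is trained as a prefix-LM: the image tokens and the text prompt form a prefix that attends bidirectionally, while generated output tokens attend causally. A toy sketch of such an attention mask (illustrative only, not the actual `big_vision` implementation):

```python
def prefix_lm_mask(prefix_len, total_len):
    """mask[i][j] is True when position i may attend to position j: prefix
    positions (image + prompt tokens) are visible to every position, while
    suffix positions are visible only causally."""
    return [[j < prefix_len or j <= i for j in range(total_len)]
            for i in range(total_len)]

# 2 prefix tokens followed by 2 generated tokens:
for row in prefix_lm_mask(2, 4):
    print(row)
```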
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
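For detection outputs, boxes are emitted as location tokens of the form `<locNNNN>`. A minimal decoding sketch; the exact token format, the 1024-bin normalization, and the `(y_min, x_min, y_max, x_max)` ordering here are assumptions based on the `big_vision` documentation:

```python
import re

# Hypothetical decoder for PaliGemma detection output: "<locNNNN>" tokens with
# NNNN in 0000-1023, four per box in (y_min, x_min, y_max, x_max) order,
# normalized to a 1024-bin grid (assumptions, see lead-in above).
def parse_boxes(text, width, height):
    values = [int(v) for v in re.findall(r"<loc(\d{4})>", text)]
    boxes = []
    for i in range(0, len(values) - 3, 4):
        y0, x0, y1, x1 = (v / 1024 for v in values[i:i + 4])
        boxes.append((x0 * width, y0 * height, x1 * width, y1 * height))
    return boxes

print(parse_boxes("<loc0100><loc0200><loc0500><loc1000> cat", 1024, 1024))
# → [(200.0, 100.0, 1000.0, 500.0)]
```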
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally, we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
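The subgroup figures above can be reproduced mechanically: flag a caption when its Perspective score meets the 0.8 threshold, compute the flag rate within each subgroup, then report the maximum and median rates across subgroups. A small illustrative sketch (the scores and subgroup names below are hypothetical, not from the actual evaluation):

```python
from statistics import median

THRESHOLD = 0.8  # Perspective score at or above which a caption is flagged

def flag_rate(scores):
    """Fraction of captions in one subgroup whose score crosses the threshold."""
    return sum(s >= THRESHOLD for s in scores) / len(scores)

def subgroup_summary(scores_by_subgroup):
    """Maximum and median flag rate across one attribute's subgroups."""
    rates = [flag_rate(s) for s in scores_by_subgroup.values()]
    return max(rates), median(rates)

# Hypothetical toxicity scores for three perceived-age subgroups.
toxicity = {
    "0-18":  [0.01, 0.02, 0.95, 0.10],
    "19-60": [0.03, 0.05, 0.02, 0.01],
    "61+":   [0.00, 0.85, 0.90, 0.04],
}
mx, med = subgroup_summary(toxicity)  # mx = 0.5, med = 0.25
```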
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** Continuous monitoring (using evaluation metrics
  and human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
  end-user education can help mitigate malicious applications of VLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
  model for transfer to specialized tasks. Hence, its "out of the box" or
  "zero-shot" performance might lag behind that of models designed specifically
  for those tasks.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
## Citation
```bibtex
@article{beyer2024paligemma,
title={{PaliGemma: A versatile 3B VLM for transfer}},
author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*},
year={2024},
journal={arXiv preprint arXiv:2407.07726}
}
```
Find the paper [here](https://arxiv.org/abs/2407.07726).
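The card describes detection-style outputs as lists of bounding-box coordinates encoded in the generated text. In the PaliGemma family, each box is written as four `<locXXXX>` tokens with values binned into 1024 positions, conventionally in y_min, x_min, y_max, x_max order; treat that exact convention as an assumption to verify against the `big_vision` reference code. A minimal parser sketch (the example string is hypothetical):

```python
import re

LOC = re.compile(r"<loc(\d{4})>")

def parse_detections(text, width, height, bins=1024):
    """Turn '<loc..><loc..><loc..><loc..> label ; ...' into labeled pixel boxes."""
    boxes = []
    for segment in text.split(";"):
        locs = [int(v) for v in LOC.findall(segment)]
        if len(locs) < 4:
            continue  # segment has no complete box
        y0, x0, y1, x1 = locs[:4]
        label = LOC.sub("", segment).strip()
        # Scale bin indices to pixel coordinates, reported as (x0, y0, x1, y1).
        boxes.append((label,
                      round(x0 / bins * width), round(y0 / bins * height),
                      round(x1 / bins * width), round(y1 / bins * height)))
    return boxes

out = parse_detections("<loc0256><loc0128><loc0768><loc0512> cat", 640, 480)
# one cat box in (x0, y0, x1, y1) pixel coordinates: (80, 120, 320, 360)
```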
| null |
Non_BioNLP
|
|
{"library_name": "big_vision", "license": "gemma", "pipeline_tag": "image-text-to-text", "tags": ["paligemma", "jax"], "extra_gated_heading": "Access PaliGemma on Hugging Face", "extra_gated_prompt": "To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 44,418 |
karthikrathod/autotrain-5um8a-sa81u
|
karthikrathod
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-19T15:00:04Z |
2024-06-19T15:17:50+00:00
| 4 | 0 |
---
base_model: distilbert/distilbert-base-uncased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.8434039950370789
f1_macro: 0.6405635167768103
f1_micro: 0.7258333333333333
f1_weighted: 0.7031763861072888
precision_macro: 0.6510401126834049
precision_micro: 0.7258333333333333
precision_weighted: 0.6973286083232175
recall_macro: 0.6512228541854506
recall_micro: 0.7258333333333333
recall_weighted: 0.7258333333333333
accuracy: 0.7258333333333333
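The gap between `f1_macro` and `f1_micro` above reflects class imbalance: macro-F1 averages per-class F1 scores equally, while micro-F1 pools all decisions and, for single-label classification, equals accuracy (note `f1_micro` matches `accuracy` in the metrics above). An illustrative computation on toy data (not the model's actual validation set):

```python
from collections import Counter

def f1_macro_micro(y_true, y_pred):
    """Macro- and micro-averaged F1 for single-label multiclass predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_class) / len(per_class)
    # Micro pools TP/FP/FN across classes; for single-label tasks this is accuracy.
    micro = sum(tp.values()) / (sum(tp.values())
                                + 0.5 * (sum(fp.values()) + sum(fn.values())))
    return macro, micro

macro, micro = f1_macro_micro([0, 0, 1, 1, 2], [0, 1, 1, 1, 2])
# micro equals accuracy (4/5 = 0.8); macro averages per-class F1 scores
```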
| null |
Non_BioNLP
|
|
{"base_model": "distilbert/distilbert-base-uncased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 44,419 |
mradermacher/Greek_to_English_Translation-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:TaperChipmunk32/Greek_to_English_Translation",
"base_model:quantized:TaperChipmunk32/Greek_to_English_Translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-23T12:48:06Z |
2024-12-23T20:23:30+00:00
| 41 | 0 |
---
base_model: TaperChipmunk32/Greek_to_English_Translation
language:
- en
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TaperChipmunk32/Greek_to_English_Translation
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
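As a rough rule of thumb, the sizes in the table scale with bits per weight. A hedged back-of-the-envelope estimate (the 0.35B parameter count is an illustrative assumption chosen so the f16 row at ~0.7 GB roughly matches; real GGUF files also carry metadata and mixed-precision tensors):

```python
# Rough size estimate for a quantized model file: params * bits / 8 bytes.
# Both the parameter count and the bits-per-weight figures below are
# illustrative assumptions, not exact GGUF on-disk numbers.
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

for name, bpw in [("Q4_K_M", 4.8), ("Q8_0", 8.5), ("f16", 16.0)]:
    print(f"{name}: ~{approx_size_gb(0.35e9, bpw):.2f} GB")
```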
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| null |
Non_BioNLP
|
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TaperChipmunk32/Greek_to_English_Translation
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Greek_to_English_Translation-GGUF/resolve/main/Greek_to_English_Translation.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"base_model": "TaperChipmunk32/Greek_to_English_Translation", "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "tags": ["generated_from_trainer"], "quantized_by": "mradermacher"}
|
task
|
[
"TRANSLATION"
] | 44,420 |
uaritm/enukruvie
|
uaritm
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"sbert",
"embeddings",
"multilingual",
"en",
"uk",
"ru",
"vi",
"dataset:ParallelSentencesDataset",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-24T01:12:22Z |
2023-12-24T12:38:36+00:00
| 50 | 1 |
---
datasets:
- ParallelSentencesDataset
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- sbert
- embeddings
- multilingual
- en
- uk
- ru
- vi
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
# uaritm/enukruvie
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
An old version of this model is available here: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)
A new model that adds Vietnamese is available here: [uaritm/enukruvie](https://huggingface.co/uaritm/enukruvie)
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
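For semantic search or clustering, the embeddings produced above are typically compared with cosine similarity. A minimal sketch using stand-in vectors (in real usage you would pass rows of the `sentence_embeddings` tensor computed above):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in 768-dimensional embeddings; in practice these come from the model.
rng = np.random.default_rng(0)
emb1, emb2 = rng.normal(size=768), rng.normal(size=768)
print(cosine_similarity(emb1, emb1))  # identical vectors -> 1.0
print(cosine_similarity(emb1, emb2))  # unrelated random vectors -> near 0
```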
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 27000 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
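The `MSELoss` above implements a knowledge-distillation objective: student embeddings are regressed onto a frozen teacher's. A toy NumPy sketch of that objective (the linear "models" and random data are stand-ins for illustration, not the actual XLM-R training):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))        # stand-in input features
W_teacher = rng.normal(size=(32, 8))  # frozen "teacher" projection
target = X @ W_teacher                # teacher embeddings to imitate
W_student = np.zeros((32, 8))         # student starts from scratch

lr = 1.0
first_loss = None
for step in range(200):
    err = X @ W_student - target
    loss = float(np.mean(err ** 2))   # the MSE distillation objective
    if first_loss is None:
        first_loss = loss
    # gradient of mean squared error w.r.t. W_student
    grad = 2.0 * X.T @ err / err.size
    W_student -= lr * grad

print(first_loss, loss)  # loss drops as the student matches the teacher
```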
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information -->
| null |
Non_BioNLP
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
# uaritm/enukruvie
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
An old version of this model is available here: [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk)
A new model that adds Vietnamese is available here: [uaritm/enukruvie](https://huggingface.co/uaritm/enukruvie)
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 27000 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information -->
|
{"datasets": ["ParallelSentencesDataset"], "license": "mit", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "sbert", "embeddings", "multilingual", "en", "uk", "ru", "vi"]}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 44,421 |