modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
onnx-community/rad-dino-ONNX | onnx-community | 2025-06-22T03:19:46Z | 0 | 0 | transformers.js | ["transformers.js", "onnx", "dinov2", "image-feature-extraction", "base_model:microsoft/rad-dino", "base_model:quantized:microsoft/rad-dino", "region:us"] | image-feature-extraction | 2025-06-22T03:19:34Z |
---
library_name: transformers.js
base_model:
- microsoft/rad-dino
---
# rad-dino (ONNX)
This is an ONNX version of [microsoft/rad-dino](https://huggingface.co/microsoft/rad-dino). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
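Below is a minimal sketch, not part of the original card, of running the export with `onnxruntime`; the file path `onnx/model.onnx` and the 518x518 input size are assumptions based on typical onnx-community DINOv2 exports.
```python
# Hedged sketch: the file name and input resolution are assumptions,
# not confirmed by this card; check the repo's file listing first.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Download the ONNX graph from the Hub (path assumed)
model_path = hf_hub_download("onnx-community/rad-dino-ONNX", "onnx/model.onnx")
session = ort.InferenceSession(model_path)

# Dummy pre-processed batch: (batch, channels, height, width)
pixel_values = np.random.rand(1, 3, 518, 518).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: pixel_values})
print(outputs[0].shape)  # image feature embeddings
```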
|
mavleo96/rl-bots | mavleo96 | 2025-06-22T02:02:18Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-06-22T01:45:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.43 +/- 18.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Example: download the checkpoint from the Hub and run a few evaluation episodes (the checkpoint filename below follows the standard SB3 naming convention).
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
import gym  # classic gym API; gym>=0.26 / gymnasium changed the reset()/step() return values

# Define the model repo_id and filename
repo_id = "mavleo96/rl-bots"  # change this to the actual repo if different
filename = "ppo-LunarLander-v2.zip"

# Download the checkpoint from the Hugging Face Hub, then load it
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint)

# Create the environment
env = gym.make("LunarLander-v2")

# Run a few episodes
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()
```
|
nichady/epicphotogasm_ultimateFidelity | nichady | 2025-06-22T01:31:57Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-22T01:28:38Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF | aipib | 2025-06-22T01:26:10Z | 0 | 0 | mlx | ["mlx", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "ja", "dataset:nappa0326/glaive-function-calling-v2-sharegpt-japanese", "base_model:aipib/llm-jp-3.1-1.8b-function-calling", "base_model:quantized:aipib/llm-jp-3.1-1.8b-function-calling", "license:apache-2.0", "region:us", "conversational"] | text-generation | 2025-06-22T01:25:55Z |
---
license: apache-2.0
language:
- ja
programming_language:
- Python
pipeline_tag: text-generation
library_name: mlx
inference: false
base_model: aipib/llm-jp-3.1-1.8b-function-calling
datasets:
- nappa0326/glaive-function-calling-v2-sharegpt-japanese
tags:
- llama-cpp
- gguf-my-repo
---
# aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF
This model was converted to GGUF format from [`aipib/llm-jp-3.1-1.8b-function-calling`](https://huggingface.co/aipib/llm-jp-3.1-1.8b-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aipib/llm-jp-3.1-1.8b-function-calling) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -c 2048
```
|
tamazightdev/gemma-3-4b-it-tmz | tamazightdev | 2025-06-22T01:15:06Z | 0 | 0 | null | ["safetensors", "unsloth", "license:mit", "region:us"] | null | 2025-06-22T01:01:54Z |
---
license: mit
tags:
- unsloth
---
|
willystumblr/2025-06-21-14-54-13 | willystumblr | 2025-06-22T00:40:42Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-22T00:40:27Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: 2025-06-21-14-54-13
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 2025-06-21-14-54-13
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="willystumblr/2025-06-21-14-54-13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/willystumblr/persona-craft/runs/rsyts3dm)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_14_2_city_3_49 | winnieyangwannan | 2025-06-22T00:19:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "olmoe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T00:17:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CohenQu/sft_Llama-3.2-3B_Mixture-of-Thoughts-code_orchard | CohenQu | 2025-06-22T00:16:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:CohenQu/Mixture-of-Thoughts", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-21T01:36:23Z |
---
base_model: meta-llama/Llama-3.2-3B
datasets: CohenQu/Mixture-of-Thoughts
library_name: transformers
model_name: sft_Llama-3.2-3B_Mixture-of-Thoughts-code_orchard
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft_Llama-3.2-3B_Mixture-of-Thoughts-code_orchard
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [CohenQu/Mixture-of-Thoughts](https://huggingface.co/datasets/CohenQu/Mixture-of-Thoughts) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CohenQu/sft_Llama-3.2-3B_Mixture-of-Thoughts-code_orchard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/flexible-ordering/runs/9ep78muj)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
arenard/Asker-1-8B | arenard | 2025-06-22T00:09:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:mistralai/Ministral-8B-Instruct-2410", "base_model:finetune:mistralai/Ministral-8B-Instruct-2410", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T00:03:13Z |
---
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** arenard
- **License:** apache-2.0
- **Finetuned from model:** mistralai/Ministral-8B-Instruct-2410
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Valkyrie-49B-v1-i1-GGUF | mradermacher | 2025-06-22T00:08:01Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "base_model:TheDrummer/Valkyrie-49B-v1", "base_model:quantized:TheDrummer/Valkyrie-49B-v1", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-06-21T17:59:14Z |
---
base_model: TheDrummer/Valkyrie-49B-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TheDrummer/Valkyrie-49B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Valkyrie-49B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
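For a programmatic route, here is a minimal sketch, not from the original card, using the `llama-cpp-python` bindings; the quant filename pattern is an assumption matching the table below.
```python
# Hedged sketch: requires `pip install llama-cpp-python`;
# the filename glob is an assumption based on the quant table below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Valkyrie-49B-v1-i1-GGUF",
    filename="*i1-Q4_K_M.gguf",  # downloads the matching quant from the repo
    n_ctx=2048,
)
print(llm("The meaning of life is", max_tokens=32)["choices"][0]["text"])
```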
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 11.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 15.9 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 17.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q2_K.gguf) | i1-Q2_K | 18.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 19.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 21.0 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 22.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 22.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 22.8 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 24.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 26.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 27.0 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 28.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 28.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 30.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 34.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 35.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q6_K.gguf) | i1-Q6_K | 41.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nrmmtr11878/rmnbrtllsh | nrmmtr11878 | 2025-06-21T23:47:43Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-21T21:40:23Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rmnbrtllsh
---
# Rmnbrtllsh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rmnbrtllsh` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "rmnbrtllsh",
    "lora_weights": "https://huggingface.co/nrmmtr11878/rmnbrtllsh/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nrmmtr11878/rmnbrtllsh', weight_name='lora.safetensors')
image = pipeline('rmnbrtllsh').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nrmmtr11878/rmnbrtllsh/discussions) to add images that show off what you’ve made with this LoRA.
|
gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2 | gecfdo | 2025-06-21T23:25:33Z | 45 | 1 | null | ["nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "en", "base_model:ReadyArt/Broken-Tutu-24B-Unslop-v2.0", "base_model:quantized:ReadyArt/Broken-Tutu-24B-Unslop-v2.0", "license:apache-2.0", "region:us"] | text-generation | 2025-06-09T05:37:31Z |
---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
strong {
color: #FF1493 !important;
}
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffd6e7 0%, #ffc0cb 100%);
color: #ff0077 !important;
text-shadow: 0 0 3px rgba(255, 192, 203, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #ffe6ee 0%, #ffd1dc 100%);
color: #d4005e !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(255, 220, 235, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(255, 105, 180, 0.1);
border: 1px solid rgba(255, 20, 147, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 127, 0.3);
border-color: rgba(255, 0, 127, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(255, 20, 147, 0.5), transparent);
animation: scanline 8s linear infinite;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #ff1493;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 127, 0.5); }
100% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
}
.subtitle {
color: #ff69b4;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255, 105, 180, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 127, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(255, 20, 147, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #d4005e;
margin: 25px 0;
padding: 20px;
background: rgba(255, 228, 240, 0.9);
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 127, 0.3);
box-shadow: 0 0 15px rgba(255, 20, 147, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #ff1493;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(1, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(255, 228, 240, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(255, 20, 147, 0.2);
border-color: rgba(255, 0, 127, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #d4005e !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(255, 20, 147, 0.1);
color: #d4005e !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(255, 20, 147, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(255, 20, 147, 0.2);
border-color: rgba(255, 20, 147, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(255, 20, 147, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #C71585;
border-left: 3px solid #C71585;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(255, 20, 147, 0.1);
border: 1px solid #ff1493;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(255, 20, 147, 0.3); }
50% { box-shadow: 0 0 10px rgba(255, 20, 147, 0.5); }
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(255, 240, 245, 0.95);
border-color: rgba(200, 0, 100, 0.3);
}
.model-name, .section-title, .subtitle {
color: #d4005e;
text-shadow: 0 0 5px rgba(255, 0, 127, 0.3);
}
.section {
background: rgba(255, 240, 245, 0.9);
border-color: rgba(200, 0, 100, 0.2);
color: #8b005d;
}
.section p,
.section ul li,
.section > p > strong {
color: #d4005e !important;
}
.link-card {
background: rgba(255, 228, 240, 0.95);
border-color: rgba(200, 0, 100, 0.2);
}
.link-card h3 {
color: #8b005d !important;
}
.link-button {
background: rgba(200, 0, 100, 0.1);
color: #8b005d !important;
border-color: rgba(200, 0, 100, 0.3);
}
.link-button:hover {
background: rgba(200, 0, 100, 0.2);
border-color: rgba(200, 0, 100, 0.5);
}
.disclaimer {
color: #d4005e;
border-color: #d4005e;
}
.badge {
border-color: #d4005e;
background: rgba(200, 0, 100, 0.1);
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Broken-Tutu-24B-Unslop-v2.0</h1>
</div>
<div class="waifu-container">
<img src="./tutu.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section">
<h2 class="section-title">🧠 Unslop Revolution</h2>
<p>This evolution of Broken-Tutu delivers unprecedented coherence without the LLM slop:</p>
<ul>
<li>🧬 <strong>Expanded 43M Token Dataset</strong> - First ReadyArt model with multi-turn conversational data</li>
<li>✨ <strong>100% Unslopped Dataset</strong> - New techniques used to generate the dataset with 0% slop</li>
<li>⚡ <strong>Enhanced Unalignment</strong> - Complete freedom for extreme roleplay while maintaining character integrity</li>
<li>🛡️ <strong>Anti-Impersonation Guards</strong> - Never speaks or acts for the user</li>
<li>💎 <strong>Rebuilt from Ground Up</strong> - Optimized training settings for superior performance</li>
<li>⚰️ <strong>Omega Darker Inspiration</strong> - Incorporates visceral narrative techniques from our darkest model</li>
<li>📜 <strong>Direct Evolution</strong> - Leveraging the success of Broken-Tutu, we finetuned directly on top of the legendary model</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🌟 Fuel the Revolution</h2>
<p>This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work:</p>
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Support on Ko-fi</a>
</div>
<p><small>Every contribution helps us keep pushing boundaries in unaligned AI. Thank you for being part of the revolution!</small></p>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Key Training Details:</strong></p>
<ul>
<li>Base Model: mistralai/Mistral-Small-24B-Instruct-2501</li>
<li>Training Method: QLoRA with DeepSpeed Zero3</li>
<li>Sequence Length: 5120 (100% samples included)</li>
<li>Learning Rate: 2e-6 with cosine scheduler</li>
</ul>
</div>
<div class="section">
<p><strong>Recommended Settings for true-to-character behavior:</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-XML" class="link-button">Mistral-V7-Tekken-T8-XML</a></p>
<p><strong>Obscenity Protocol (extreme NSFL settings):</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-OP-XML" class="link-button">Mistral-V7-Tekken-T8-OP-XML</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.IQ4_XS.gguf" class="link-button">IQ4_XS (13.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q8_0.gguf" class="link-button">Q8_0 (25.2GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended for speed/quality balance. Q6_K for high quality. Q8_0 best quality.</small></p>
</div>
<div class="link-card">
<h3>imatrix</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_S.gguf" class="link-button">IQ1_S (5.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_M.gguf" class="link-button">IQ1_M (5.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XXS.gguf" class="link-button">IQ2_XXS (6.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XS.gguf" class="link-button">IQ2_XS (7.3GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_S.gguf" class="link-button">IQ2_S (7.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_M.gguf" class="link-button">IQ2_M (8.2GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K_S.gguf" class="link-button">Q2_K_S (8.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XXS.gguf" class="link-button">IQ3_XXS (9.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XS.gguf" class="link-button">IQ3_XS (10.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_S.gguf" class="link-button">IQ3_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_M.gguf" class="link-button">IQ3_M (10.8GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ4_XS.gguf" class="link-button">IQ4_XS (12.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_0.gguf" class="link-button">Q4_0 (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_1.gguf" class="link-button">Q4_1 (15.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended. IQ1_S/IQ1_M for extreme low VRAM. Q6_K for near-original quality.</small></p>
</div>
<div class="link-card">
<h3>EXL2</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>EXL3</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.0bpw_H8" class="link-button">2.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>AWQ</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/collections/ReadyArt/broken-tutu-24b-unslop-v20-awq-6846724f5e05caced62cdf5c" class="link-button">Quants</a>
</div>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model enhances Broken-Tutu's unalignment:</p>
<ul>
<li>🚫 Zero moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme</li>
<li>💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation</li>
<li>⚖️ Perfectly balanced between character authenticity and user freedom</li>
</ul>
</div>
</div>
<div class="section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains Omega's intensity with improved narrative coherence</li>
<li>📖 Excels at long-form multi-character scenarios</li>
<li>🧠 Superior instruction following with complex prompts</li>
<li>⚡ Reduced repetition and hallucination compared to v1.1</li>
<li>🎭 Uncanny ability to adapt to subtle prompt nuances</li>
<li>🩸 Incorporates Omega Darker's visceral descriptive power when appropriate</li>
<li>🖼️ Enhanced image understanding capabilities for multimodal interactions</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
<li>ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)</li>
<li>mradermacher (GGUF Quantization)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Creators</h2> <!-- SECTION RENAMED -->
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Ko-fi</a> <!-- ADDED -->
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
|
gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3 | gecfdo | 2025-06-21T23:25:30Z | 102 | 0 | null | ["nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "en", "base_model:ReadyArt/Broken-Tutu-24B-Unslop-v2.0", "base_model:quantized:ReadyArt/Broken-Tutu-24B-Unslop-v2.0", "license:apache-2.0", "region:us"] | text-generation | 2025-06-09T05:01:32Z |
---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
strong {
color: #FF1493 !important;
}
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffd6e7 0%, #ffc0cb 100%);
color: #ff0077 !important;
text-shadow: 0 0 3px rgba(255, 192, 203, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #ffe6ee 0%, #ffd1dc 100%);
color: #d4005e !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(255, 220, 235, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(255, 105, 180, 0.1);
border: 1px solid rgba(255, 20, 147, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 127, 0.3);
border-color: rgba(255, 0, 127, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(255, 20, 147, 0.5), transparent);
animation: scanline 8s linear infinite;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #ff1493;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 127, 0.5); }
100% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
}
.subtitle {
color: #ff69b4;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255, 105, 180, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 127, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(255, 20, 147, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #d4005e;
margin: 25px 0;
padding: 20px;
background: rgba(255, 228, 240, 0.9);
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 127, 0.3);
box-shadow: 0 0 15px rgba(255, 20, 147, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #ff1493;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(1, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(255, 228, 240, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(255, 20, 147, 0.2);
border-color: rgba(255, 0, 127, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #d4005e !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(255, 20, 147, 0.1);
color: #d4005e !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(255, 20, 147, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(255, 20, 147, 0.2);
border-color: rgba(255, 20, 147, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(255, 20, 147, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #C71585;
border-left: 3px solid #C71585;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(255, 20, 147, 0.1);
border: 1px solid #ff1493;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(255, 20, 147, 0.3); }
50% { box-shadow: 0 0 10px rgba(255, 20, 147, 0.5); }
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(255, 240, 245, 0.95);
border-color: rgba(200, 0, 100, 0.3);
}
.model-name, .section-title, .subtitle {
color: #d4005e;
text-shadow: 0 0 5px rgba(255, 0, 127, 0.3);
}
.section {
background: rgba(255, 240, 245, 0.9);
border-color: rgba(200, 0, 100, 0.2);
color: #8b005d;
}
.section p,
.section ul li,
.section > p > strong {
color: #d4005e !important;
}
.link-card {
background: rgba(255, 228, 240, 0.95);
border-color: rgba(200, 0, 100, 0.2);
}
.link-card h3 {
color: #8b005d !important;
}
.link-button {
background: rgba(200, 0, 100, 0.1);
color: #8b005d !important;
border-color: rgba(200, 0, 100, 0.3);
}
.link-button:hover {
background: rgba(200, 0, 100, 0.2);
border-color: rgba(200, 0, 100, 0.5);
}
.disclaimer {
color: #d4005e;
border-color: #d4005e;
}
.badge {
border-color: #d4005e;
background: rgba(200, 0, 100, 0.1);
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Broken-Tutu-24B-Unslop-v2.0</h1>
</div>
<div class="waifu-container">
<img src="./tutu.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section">
<h2 class="section-title">🧠 Unslop Revolution</h2>
<p>This evolution of Broken-Tutu delivers unprecedented coherence without the LLM slop:</p>
<ul>
<li>🧬 <strong>Expanded 43M Token Dataset</strong> - First ReadyArt model with multi-turn conversational data</li>
<li>✨ <strong>100% Unslopped Dataset</strong> - New techniques used to generate the dataset with 0% slop</li>
<li>⚡ <strong>Enhanced Unalignment</strong> - Complete freedom for extreme roleplay while maintaining character integrity</li>
<li>🛡️ <strong>Anti-Impersonation Guards</strong> - Never speaks or acts for the user</li>
<li>💎 <strong>Rebuilt from Ground Up</strong> - Optimized training settings for superior performance</li>
<li>⚰️ <strong>Omega Darker Inspiration</strong> - Incorporates visceral narrative techniques from our darkest model</li>
<li>📜 <strong>Direct Evolution</strong> - Leveraging the success of Broken-Tutu, we finetuned directly on top of the legendary model</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🌟 Fuel the Revolution</h2>
<p>This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work:</p>
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Support on Ko-fi</a>
</div>
<p><small>Every contribution helps us keep pushing boundaries in unaligned AI. Thank you for being part of the revolution!</small></p>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Key Training Details:</strong></p>
<ul>
<li>Base Model: mistralai/Mistral-Small-24B-Instruct-2501</li>
<li>Training Method: QLoRA with DeepSpeed Zero3</li>
<li>Sequence Length: 5120 (100% samples included)</li>
<li>Learning Rate: 2e-6 with cosine scheduler</li>
</ul>
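<p><em>Illustrative sketch only:</em> a minimal QLoRA setup consistent with the specs above, assuming standard <code>transformers</code>/<code>peft</code> tooling. Only the base model and 4-bit training come from this card; the LoRA rank, alpha, and device map are assumptions, and DeepSpeed Zero3 launch config is omitted.</p>
<pre><code class="language-python"># Hedged sketch: QLoRA on the listed base model.
# r/lora_alpha/device_map are illustrative assumptions, not published values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501",
    quantization_config=bnb, device_map="auto")
model = get_peft_model(base, LoraConfig(r=16, lora_alpha=32,
                                        task_type="CAUSAL_LM"))
</code></pre>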
</div>
<div class="section">
<p><strong>Recommended Settings for true-to-character behavior:</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-XML" class="link-button">Mistral-V7-Tekken-T8-XML</a></p>
<p><strong>Obscenity Protocol (extreme NSFL settings):</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-OP-XML" class="link-button">Mistral-V7-Tekken-T8-OP-XML</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.IQ4_XS.gguf" class="link-button">IQ4_XS (13.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q8_0.gguf" class="link-button">Q8_0 (25.2GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended for speed/quality balance. Q6_K for high quality. Q8_0 for best quality.</small></p>
</div>
<div class="link-card">
<h3>imatrix</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_S.gguf" class="link-button">IQ1_S (5.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_M.gguf" class="link-button">IQ1_M (5.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XXS.gguf" class="link-button">IQ2_XXS (6.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XS.gguf" class="link-button">IQ2_XS (7.3GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_S.gguf" class="link-button">IQ2_S (7.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_M.gguf" class="link-button">IQ2_M (8.2GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K_S.gguf" class="link-button">Q2_K_S (8.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XXS.gguf" class="link-button">IQ3_XXS (9.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XS.gguf" class="link-button">IQ3_XS (10.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_S.gguf" class="link-button">IQ3_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_M.gguf" class="link-button">IQ3_M (10.8GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ4_XS.gguf" class="link-button">IQ4_XS (12.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_0.gguf" class="link-button">Q4_0 (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_1.gguf" class="link-button">Q4_1 (15.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended. IQ1_S/IQ1_M for extremely low VRAM. Q6_K for near-original quality.</small></p>
</div>
<div class="link-card">
<h3>EXL2</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>EXL3</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.0bpw_H8" class="link-button">2.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>AWQ</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/collections/ReadyArt/broken-tutu-24b-unslop-v20-awq-6846724f5e05caced62cdf5c" class="link-button">Quants</a>
</div>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model enhances Broken-Tutu's unalignment:</p>
<ul>
<li>🚫 Zero moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme</li>
<li>💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation</li>
<li>⚖️ Perfectly balanced between character authenticity and user freedom</li>
</ul>
</div>
</div>
<div class="section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains Omega's intensity with improved narrative coherence</li>
<li>📖 Excels at long-form multi-character scenarios</li>
<li>🧠 Superior instruction following with complex prompts</li>
<li>⚡ Reduced repetition and hallucination compared to v1.1</li>
<li>🎭 Uncanny ability to adapt to subtle prompt nuances</li>
<li>🩸 Incorporates Omega Darker's visceral descriptive power when appropriate</li>
<li>🖼️ Enhanced image understanding capabilities for multimodal interactions</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
<li>ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)</li>
<li>mradermacher (GGUF Quantization)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Creators</h2> <!-- SECTION RENAMED -->
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Ko-fi</a> <!-- ADDED -->
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
|
xuansu0706/deepseek_r1_text2sql_merged_finetuned
|
xuansu0706
| 2025-06-21T23:13:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T23:01:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dokodesuka/mms-300m-1130-forced-aligner
|
dokodesuka
| 2025-06-21T23:09:23Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"wav2vec2",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-21T23:02:27Z |
---
license: cc-by-nc-4.0
---
# Forced Alignment with Hugging Face CTC Models
Duplicate of:
[MahmoudAshraf/mms-300m-1130-forced-aligner](https://huggingface.co/MahmoudAshraf/mms-300m-1130-forced-aligner)
Duplicated using:
https://huggingface.co/spaces/osanseviero/repo_duplicator
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_6.5bpw
|
SicariusSicariiStuff
| 2025-06-21T22:42:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T17:49:40Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
tranthanhnguyenai1/CoderQween5_1_7B
|
tranthanhnguyenai1
| 2025-06-21T22:31:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:18:12Z |
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tranthanhnguyenai1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kamal-kaur-ORIGINAL-X-VIRAL/sex.viral.original.sex.kamal.kaur.viral
|
kamal-kaur-ORIGINAL-X-VIRAL
| 2025-06-21T21:39:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T21:38:37Z |
<animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?JR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Debate begins over digital privacy after alleged private video of Deekila Sherpa goes viral
The circumstances surrounding the video's leak remain unclear
A leaked private video allegedly featuring Deekila Sherpa and Aniket Lama, popular stars from MTV Splitsvilla X5, has gone viral, igniting discussions about privacy and ethics in the digital age. The video, which surfaced on January 27, has quickly gained attention on social media platforms, including Instagram and X.
|
viral-video-Leaked/kamal.kaur.X.VIRAL.Video.FuLL.original.Leaked
|
viral-video-Leaked
| 2025-06-21T21:32:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T21:31:22Z |
<animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?JR" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Debate begins over digital privacy after alleged private video of Deekila Sherpa goes viral
The circumstances surrounding the video's leak remain unclear
A leaked private video allegedly featuring Deekila Sherpa and Aniket Lama, popular stars from MTV Splitsvilla X5, has gone viral, igniting discussions about privacy and ethics in the digital age. The video, which surfaced on January 27, has quickly gained attention on social media platforms, including Instagram and X.
|
aleegis/e8e88201-0323-4fa7-bccf-b477eb082b2e
|
aleegis
| 2025-06-21T21:17:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:finetune:sethuiyer/Medichat-Llama3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T12:56:01Z |
---
base_model: sethuiyer/Medichat-Llama3-8B
library_name: transformers
model_name: e8e88201-0323-4fa7-bccf-b477eb082b2e
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for e8e88201-0323-4fa7-bccf-b477eb082b2e
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aleegis/e8e88201-0323-4fa7-bccf-b477eb082b2e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fajarchen-fajar-chen/Gradients-On-Demand/runs/um3vt3k4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
appledora/recast3.2-G4W64H16
|
appledora
| 2025-06-21T21:09:56Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"recast1b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-19T06:22:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eraydikyologlu/bert_ayt_fizik_hyperparameterTuned
|
eraydikyologlu
| 2025-06-21T21:06:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-21T20:27:11Z |
---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_keras_callback
model-index:
- name: eraydikyologlu/bert_ayt_fizik_hyperparameterTuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eraydikyologlu/bert_ayt_fizik_hyperparameterTuned
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0679
- Train Accuracy: 0.5634
- Validation Loss: 2.1193
- Validation Accuracy: 0.5508
- Epoch: 20
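A minimal, hedged sketch for trying the checkpoint that produced these numbers (label names are not documented in this card, so the output below is a raw class id):

```python
# Hedged sketch: load the fine-tuned TF checkpoint for classification.
# The example input string is illustrative; labels are undocumented.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "eraydikyologlu/bert_ayt_fizik_hyperparameterTuned"
tok = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tok("Örnek fizik sorusu metni", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # raw predicted class id
```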
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 4.6709452249890324e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4.6709452249890324e-05, 'decay_steps': 25824, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 6301, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 4.8509 | 0.0314 | 4.3567 | 0.1294 | 0 |
| 3.8000 | 0.2196 | 3.2283 | 0.2959 | 1 |
| 3.0832 | 0.3172 | 2.8281 | 0.3477 | 2 |
| 2.8034 | 0.3554 | 2.6717 | 0.3726 | 3 |
| 2.6688 | 0.3817 | 2.5732 | 0.4102 | 4 |
| 2.5736 | 0.4040 | 2.4730 | 0.4380 | 5 |
| 2.4636 | 0.4396 | 2.4001 | 0.4648 | 6 |
| 2.3792 | 0.4691 | 2.2914 | 0.4995 | 7 |
| 2.3058 | 0.4935 | 2.2594 | 0.5146 | 8 |
| 2.2471 | 0.5144 | 2.2237 | 0.5278 | 9 |
| 2.1974 | 0.5286 | 2.1906 | 0.5332 | 10 |
| 2.1689 | 0.5361 | 2.1770 | 0.5400 | 11 |
| 2.1502 | 0.5430 | 2.1548 | 0.5474 | 12 |
| 2.1290 | 0.5505 | 2.1424 | 0.5459 | 13 |
| 2.1161 | 0.5515 | 2.1374 | 0.5469 | 14 |
| 2.1051 | 0.5544 | 2.1345 | 0.5444 | 15 |
| 2.0952 | 0.5570 | 2.1344 | 0.5493 | 16 |
| 2.0868 | 0.5597 | 2.1191 | 0.5527 | 17 |
| 2.0796 | 0.5589 | 2.1300 | 0.5479 | 18 |
| 2.0729 | 0.5625 | 2.1260 | 0.5469 | 19 |
| 2.0679 | 0.5634 | 2.1193 | 0.5508 | 20 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 2.14.4
- Tokenizers 0.21.1
|
takara-ai/SwarmFormer-Sentiment-Small
|
takara-ai
| 2025-06-21T21:06:07Z | 18 | 5 |
swarmformer
|
[
"swarmformer",
"safetensors",
"en",
"dataset:stanfordnlp/imdb",
"region:us"
] | null | 2025-01-21T16:25:49Z |
---
datasets:
- stanfordnlp/imdb
language:
- en
library_name: swarmformer
---
# Model Card for SwarmFormer-Small
SwarmFormer-Small is a lightweight variant of the SwarmFormer architecture, designed for efficient text classification with minimal computational requirements.
## Model Details
### Model Description
Compact version of SwarmFormer with:
- Token embedding layer with dropout (0.3)
- Two SwarmFormer layers
- Mean pooling and classification
- Optimized for shorter sequences
- **Developed by**: Jordan Legg, Mikus Sturmanis, Takara.ai
- **Funded by**: Takara.ai
- **Shared by**: Takara.ai
- **Model type**: Hierarchical transformer
- **Language(s)**: English
- **License**: Not specified
- **Finetuned from model**: Trained from scratch
### Model Sources
- **Repository**: https://github.com/takara-ai/SwarmFormer
- **Paper**: Takara.ai Research
- **Demo**: Not available
## Uses
### Direct Use
- Text classification
- Sentiment analysis
- Resource-constrained environments
### Out-of-Scope Use
- Text generation
- Machine translation
- Tasks requiring >256 tokens
- Tasks requiring high precision
## Training Details
### Training Data
- Dataset: IMDB Movie Review
- Size: 50,000 samples
- Augmentation techniques applied
### Training Procedure
#### Model Architecture Details
1. **Token Embedding Layer**:
```python
- Embedding layer (vocab_size → 128)
- Dropout rate: 0.3
```
2. **Local Swarm Aggregator**:
```python
- Input dropout: 0.3
- Local MLP:
- Linear(128 → 128)
- GELU
- Dropout(0.3)
- Linear(128 → 128)
- Gate network with GELU
```
3. **Clustering Mechanism**:
- Cluster size: 8 tokens
- Mean pooling per cluster
4. **Global Cluster Attention**:
```python
- Q/K/V projections: Linear(128 → 128)
- Attention dropout: 0.3
```
#### Training Hyperparameters
- Embedding dimension: 128
- Number of layers: 2
- Local update steps: 3
- Cluster size: 8
- Sequence length: 256
- Batch size: 96
- Learning rate: 4.76 × 10⁻⁴
- Weight decay: 0.0541
- Dropout: 0.30
## Evaluation
### Results
- Accuracy: 86.20%
- Precision: 83.46%
- Recall: 90.31%
- F1: 86.75%
- Inference time: 0.36s (25k samples)
- Mean batch latency: 3.67ms
- Throughput: 45k samples/s
- Peak memory: 8GB
## Technical Specifications
### Compute Infrastructure
- GPU: NVIDIA RTX 2080 Ti
- VRAM: 8GB minimum
- Training time: 3.6 minutes
### How to Get Started
```python
from swarmformer import SwarmFormerModel
model = SwarmFormerModel(
vocab_size=30000,
d_model=128,
seq_len=256,
cluster_size=8,
num_layers=2,
T_local=3
)
```
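A hedged usage sketch for the model constructed above (it assumes the module follows standard PyTorch conventions, mapping a batch of token ids to classification logits; the tokenization step is omitted):

```python
# Hedged sketch: forward pass, assuming a standard PyTorch interface
# (token ids in, classification logits out).
import torch

input_ids = torch.randint(0, 30000, (1, 256))  # dummy batch, seq_len=256
logits = model(input_ids)                      # expected shape: (1, num_classes)
print(logits.argmax(dim=-1))
```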
## Citation
```bibtex
@article{legg2025swarmformer,
title={SwarmFormer: Local-Global Hierarchical Attention via Swarming Token Representations},
author={Legg, Jordan and Sturmanis, Mikus and {Takara.ai}},
journal={Takara.ai Research},
year={2025},
url={https://takara.ai/papers/SwarmFormer-Local-Global-Hierarchical-Attention-via-Swarming-Token-Representations.pdf}
}
```
## Model Card Authors
Jordan Legg, Mikus Sturmanis, Takara.ai Research Team
## Model Card Contact
[email protected]
|
mradermacher/Arch-Agent-1.5B-i1-GGUF
|
mradermacher
| 2025-06-21T21:00:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-1.5B",
"base_model:quantized:katanemo/Arch-Agent-1.5B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-21T19:38:12Z |
---
base_model: katanemo/Arch-Agent-1.5B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-1.5B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/katanemo/Arch-Agent-1.5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
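As a quick, hedged example, one of these quants can be fetched and run with `llama-cpp-python` (the chosen file name matches an entry in the table below but is otherwise illustrative; any quant works):

```python
# Hedged sketch: download one quant and run it locally.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("mradermacher/Arch-Agent-1.5B-i1-GGUF",
                       "Arch-Agent-1.5B.i1-Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```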
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF/resolve/main/Arch-Agent-1.5B.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Arch-Agent-1.5B-GGUF
|
mradermacher
| 2025-06-21T21:00:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-1.5B",
"base_model:quantized:katanemo/Arch-Agent-1.5B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T19:19:27Z |
---
base_model: katanemo/Arch-Agent-1.5B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-1.5B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/katanemo/Arch-Agent-1.5B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Arch-Agent-1.5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-1.5B-GGUF/resolve/main/Arch-Agent-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
shortertwangs0t/tylercoach
|
shortertwangs0t
| 2025-06-21T20:55:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T20:54:14Z |
Coach Tyler Wall Cause of Death: Beloved Ice Hockey Mentor and Friend Gone Too Soon; How did coach Tyler Wall Die? - Coach Wall Mr. Beast video
Watch 🟢 ➤ ➤ ➤ 🌐<a href="https://dilvid.cfd/SDFardzsv">Coach Tyler Wall Cause of Death: Beloved Ice Hockey Mentor and Friend Gone Too Soon; How did coach Tyler Wall Die? - Coach Wall Mr. Beast video)
Coach Tyler Wall Cause of Death: Beloved Ice Hockey Mentor and Friend Gone Too Soon; How did coach Tyler Wall Die? - Coach Wall Mr. Beast video
🔴 ➤►DOWNLOAD👉👉🟢 ➤ 🌐<a href="https://01dil.vibingly.com/asfewaf">(Coach Tyler Wall Cause of Death: Beloved Ice Hockey Mentor and Friend Gone Too Soon; How did coach Tyler Wall Die? - Coach Wall Mr. Beast video)
Coach Tyler Wall Cause of Death: Beloved Ice Hockey Mentor and Friend Gone Too Soon; How did coach Tyler Wall Die? - Coach Wall Mr. Beast video
|
freederyan/v21
|
freederyan
| 2025-06-21T20:34:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T20:33:47Z |
---
base_model: Qwen/Qwen3-32B
library_name: transformers
model_name: v21
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for v21
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="freederyan/v21", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/freede/huggingface/runs/k3ud1gpq)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmbv4om7r00vhwoix5088kkf0_cmc6n1fcx070dbfif4iy7zhqy
|
BootesVoid
| 2025-06-21T20:24:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T20:24:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANNA
---
# Cmbv4Om7R00Vhwoix5088Kkf0_Cmc6N1Fcx070Dbfif4Iy7Zhqy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANNA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ANNA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbv4om7r00vhwoix5088kkf0_cmc6n1fcx070dbfif4iy7zhqy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbv4om7r00vhwoix5088kkf0_cmc6n1fcx070dbfif4iy7zhqy', weight_name='lora.safetensors')
image = pipeline('ANNA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbv4om7r00vhwoix5088kkf0_cmc6n1fcx070dbfif4iy7zhqy/discussions) to add images that show off what you’ve made with this LoRA.
|
codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned-gguf
|
codedebiasi
| 2025-06-21T20:19:41Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"conversational",
"en",
"base_model:codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned",
"base_model:quantized:codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned",
"license:other",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-21T20:13:46Z |
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model: codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned
pipeline_tag: text-generation
library_name: mlx
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
---
# codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned-gguf
This model [codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned-gguf](https://huggingface.co/codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned-gguf) was
converted to MLX format from [codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned](https://huggingface.co/codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("codedebiasi/Qwen2.5-Coder-3B-Instruct-finetuned-gguf")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
restartreality/david
|
restartreality
| 2025-06-21T20:05:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T19:24:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: david
---
# David
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `david` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "david",
"lora_weights": "https://huggingface.co/restartreality/david/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('restartreality/david', weight_name='lora.safetensors')
image = pipeline('david').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/restartreality/david/discussions) to add images that show off what you’ve made with this LoRA.
|
Official-mezzo-fun-Viral-videos-Link-XX-Tv/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-Viral-videos-Link-XX-Tv
| 2025-06-21T19:40:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T19:35:28Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
AlphJain/qwen2.5-7b-finetuned-gujarati-ocr-gfpgan
|
AlphJain
| 2025-06-21T19:25:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T19:25:36Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlphJain
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eludan/afri-berta-ao-large
|
eludan
| 2025-06-21T19:21:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:castorini/afriberta_large",
"base_model:finetune:castorini/afriberta_large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-21T19:21:08Z |
---
library_name: transformers
license: mit
base_model: castorini/afriberta_large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afri-berta-ao-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afri-berta-ao-large
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1769
- Precision: 0.6793
- Recall: 0.7944
- F1: 0.7324
- Accuracy: 0.9431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3871 | 1.0 | 76 | 0.3385 | 0.4548 | 0.4740 | 0.4642 | 0.8927 |
| 0.2224 | 2.0 | 152 | 0.2520 | 0.5623 | 0.6591 | 0.6069 | 0.9176 |
| 0.1795 | 3.0 | 228 | 0.2288 | 0.5971 | 0.6786 | 0.6353 | 0.9253 |
| 0.1477 | 4.0 | 304 | 0.2270 | 0.6096 | 0.7045 | 0.6536 | 0.9273 |
| 0.1097 | 5.0 | 380 | 0.2254 | 0.6353 | 0.7240 | 0.6768 | 0.9318 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
official-graciela-varela-viral-video/Full.Completo.18.Ultimo.video.filtrado.de.graciela.varela.en.acle
|
official-graciela-varela-viral-video
| 2025-06-21T19:07:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T19:06:43Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_7.0bpw
|
SicariusSicariiStuff
| 2025-06-21T19:01:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"7-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T18:22:25Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
hardik2712-ai/brain-tumor-detection-model
|
hardik2712-ai
| 2025-06-21T18:45:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T18:43:41Z |
---
license: apache-2.0
---
|
Official-mezzo-fun-18-Viral-videos-Link-4k/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official.Live.Tv
|
Official-mezzo-fun-18-Viral-videos-Link-4k
| 2025-06-21T18:34:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T18:34:20Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
Official-18-mezzo-fun-Viral-videos-Link-XX/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
Official-18-mezzo-fun-Viral-videos-Link-XX
| 2025-06-21T18:30:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T18:29:52Z |
<animated-image data-catalyst=""><a href="https://alltvsteam.com/leaked-videos/?new-leakea-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
videos-stephanie-nur-egypt-viral-videos/videos-stephanie-nur-egypt-viral-videos
|
videos-stephanie-nur-egypt-viral-videos
| 2025-06-21T18:27:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T18:26:34Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
phospho-app/gc1724-ACT-ttt-b2-square-ausu9
|
phospho-app
| 2025-06-21T18:25:53Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-21T15:32:59Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [gc1724/ttt-b2-square](https://huggingface.co/datasets/gc1724/ttt-b2-square)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
sonuchaudhary/Llama3.2_3B
|
sonuchaudhary
| 2025-06-21T18:24:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T17:42:55Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sonuchaudhary
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gehad-alaa-abaas/RNN-IDS-MODEL
|
gehad-alaa-abaas
| 2025-06-21T18:03:25Z | 0 | 0 | null |
[
"pytorch",
"lstm",
"intrusion-detection",
"cicids2017",
"network-security",
"en",
"dataset:cicids2017",
"license:mit",
"region:us"
] | null | 2025-06-21T17:40:02Z |
---
language:
- "en"
thumbnail: "https://huggingface.co/front/thumbnails/pytorch.png"
tags:
- pytorch
- lstm
- intrusion-detection
- cicids2017
- network-security
license: mit
datasets:
- cicids2017
metrics:
- accuracy
- precision
- recall
- f1
base_model: "none"
---
# LSTM IDS Models for CICIDS2017 Dataset
This repository provides two PyTorch LSTM models for network intrusion detection, trained on the CICIDS2017 dataset. They are intended for research, benchmarking, or as a starting point for further development in network security and anomaly detection.
## Models Included
- `lambda_with_valid.pth`: LSTM model trained for binary classification (benign vs. attack) using cross-validation.
- `mapping_with_valid.pth`: LSTM model trained for multi-class attack categorization using cross-validation.
## Model Details
- **Architecture:** LSTM-based Recurrent Neural Network
- **Input Features:** 80 per sample (preprocessed from CICIDS2017)
- **Training:** 5-fold cross-validation, early stopping, Adam optimizer
- **Framework:** PyTorch
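The exact `IdsRnn` class must come from the original training code; purely as an illustration, a compatible LSTM classifier could look like the sketch below (the input shape, layer count, and classification head are assumptions):

```python
import torch.nn as nn

class IdsRnn(nn.Module):
    """Hypothetical sketch -- the real definition must match the training code."""
    def __init__(self, hidden_size=512, output_size=2, input_size=80, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, seq_len, input_size) -- 80 features per sample
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])  # classify from the last time step
```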
## Usage
1. Download the `.pth` files from this repository.
2. Load the model in your PyTorch code:
```python
import torch
from your_model_definition import IdsRnn # Use the same architecture as in training
model = IdsRnn(hidden_size=512, output_size=2) # or output_size=7 for multi-class
model.load_state_dict(torch.load('lambda_with_valid.pth'))
model.eval()
```
3. Prepare your input data with the same preprocessing as used during training.
4. Run inference as needed.
## Notes
- These models require the same feature extraction and preprocessing pipeline as described in the original training code.
- For best results, refer to the full training pipeline and preprocessing steps.
## License
MIT License
---
If you use these models in your research or project, please cite or reference this repository.
|
Full-Jaipur-5-Star-Hotel-Viral-Video-hq/Full.video.Jaipur.5.Star.Hotel.Viral.Video.On.Social.Media
|
Full-Jaipur-5-Star-Hotel-Viral-Video-hq
| 2025-06-21T17:46:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T17:45:51Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
Official-Jaipur-5-star-hotel-Viral-Videos/Original.FULL.VIDEO.Jaipur.5.star.hotel.Viral.Video.Tutorial.Official
|
Official-Jaipur-5-star-hotel-Viral-Videos
| 2025-06-21T17:42:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T17:41:40Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
naji02010101/whisper-tiny-only_en_v10
|
naji02010101
| 2025-06-21T17:24:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_1_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-21T14:38:35Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_1_0
metrics:
- wer
model-index:
- name: Whisper tiny En _v9_ Naji
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 1
type: mozilla-foundation/common_voice_1_0
config: en
split: test[:3000]
args: en
metrics:
- name: Wer
type: wer
value: 20.805471124620063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny En _v9_ Naji
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6727
- Wer Ortho: 29.3603
- Wer: 20.8055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.2631 | 0.08 | 50 | 0.6734 | 29.0284 | 20.4749 |
| 0.2464 | 0.16 | 100 | 0.6729 | 29.3915 | 20.8701 |
| 0.2611 | 0.24 | 150 | 0.6743 | 29.4150 | 20.8511 |
| 0.2333 | 0.32 | 200 | 0.6748 | 29.0635 | 20.4711 |
| 0.2556 | 0.4 | 250 | 0.6728 | 29.0284 | 20.4445 |
| 0.2498 | 0.48 | 300 | 0.6721 | 29.0869 | 20.4787 |
| 0.245 | 0.56 | 350 | 0.6710 | 29.0830 | 20.5091 |
| 0.2652 | 0.64 | 400 | 0.6709 | 29.4072 | 20.8169 |
| 0.2373 | 0.72 | 450 | 0.6707 | 29.4150 | 20.9157 |
| 0.2921 | 0.8 | 500 | 0.6727 | 29.3603 | 20.8055 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 2.14.6
- Tokenizers 0.21.1
|
rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA
|
rainorangelemon2
| 2025-06-21T17:21:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T05:00:07Z |
---
base_model: HuggingFaceTB/SmolVLM2-500M-Video-Instruct
library_name: transformers
model_name: smolvlm-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for smolvlm-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rainorangelemon2/smolvlm-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rainorangelemon/huggingface/runs/d611vuql)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DhruvSharma-845/pegasus-samsum
|
DhruvSharma-845
| 2025-06-21T17:11:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-21T16:37:33Z |
---
library_name: transformers
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
viralvideowatch/princess-solomon-viral-video-2025
|
viralvideowatch
| 2025-06-21T17:00:25Z | 0 | 0 | null |
[
"princess-solomon, viral-video-2025, trending-video, leaked-footage, bold-content, telegram-leak, uncensored-video, social-media-viral",
"region:us"
] | null | 2025-06-21T17:00:09Z |
---
tags:
- >-
princess-solomon, viral-video-2025, trending-video, leaked-footage,
bold-content, telegram-leak, uncensored-video, social-media-viral
---
# 👑 Princess Solomon Viral Video (2025 Full Clip)
🔥 The **Princess Solomon viral video** is circulating rapidly across Telegram and social platforms, shocking viewers with its leaked and bold visuals.
🟢🟢🟢 [👉👉👉 CLICK HERE TO WATCH FULL VIDEO 👈👈👈](https://filmy.best/abc) 🟢🟢🟢
📍 Uncut and trending across X (Twitter), Facebook, and YouTube Shorts — this clip has sparked massive interest and debate.
✅ No login. No ads. Just instant HD access.
#PrincessSolomon #ViralVideo2025 #TrendingNow #LeakedScene #BoldClip #WatchNow
|
M10729/Neogptt
|
M10729
| 2025-06-21T16:57:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-21T16:57:10Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Screenshot
output:
url: images/IMG_7193.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Gptneo
<Gallery />
## Download model
[Download](/M10729/Neogptt/tree/main) them in the Files & versions tab.
|
Official-VIDEO-jobz-hunting-viral-video-hd/FULL.VIDEO.jobz.hunting.Viral.Video.Tutorial.Official.Telegram.link
|
Official-VIDEO-jobz-hunting-viral-video-hd
| 2025-06-21T16:55:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T16:55:01Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
Official-Jaipur-Hotel-Viral-Videos-Tv/FULL.VIDEO.Jaipur.Hotel.Viral.Video.Official.Tutorial.Viral.on.Social.media
|
Official-Jaipur-Hotel-Viral-Videos-Tv
| 2025-06-21T16:50:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T16:49:33Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://tinyurl.com/foiniitora?dfhgKasbon"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
BootesVoid/cmbqwdjiv02nnh4x5xta3d0p8_cmc6fqb6p0663bfiff167yxhh
|
BootesVoid
| 2025-06-21T16:41:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T16:41:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SKYE
---
# Cmbqwdjiv02Nnh4X5Xta3D0P8_Cmc6Fqb6P0663Bfiff167Yxhh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SKYE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SKYE",
"lora_weights": "https://huggingface.co/BootesVoid/cmbqwdjiv02nnh4x5xta3d0p8_cmc6fqb6p0663bfiff167yxhh/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbqwdjiv02nnh4x5xta3d0p8_cmc6fqb6p0663bfiff167yxhh', weight_name='lora.safetensors')
image = pipeline('SKYE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbqwdjiv02nnh4x5xta3d0p8_cmc6fqb6p0663bfiff167yxhh/discussions) to add images that show off what you’ve made with this LoRA.
|
kenjon/dtest
|
kenjon
| 2025-06-21T16:41:14Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-06-21T16:13:46Z |
# Dummy Discriminator Model
This is a dummy discriminator model for testing purposes, submitted by a BitMind subnet miner.
## Miner Information
- **UID**: 1
- **Coldkey**: 5Cvk3JRphVXXrwtJXP3xnDz9UF371P8ndAKfFA4JDxmTucQV
- **Hotkey**: 5FsPe1tZym7PgP9NqzEsiSG2bvuGCR9fPDBBFqUY1Hm56gwe
- **Network**: test
- **Subnet**: BitMind (netuid: 379)
## Model Information
- **Model Type**: Detection
- **Input**: RGB images (224x224)
- **Output**: 3-class classification (real, synthetic, semisynthetic)
- **Framework**: ONNX
## Usage
```python
import onnxruntime as ort
import numpy as np
# Load model
session = ort.InferenceSession("model.onnx")
# Prepare input
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)
# Run inference
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
outputs = session.run([output_name], {input_name: input_data})
# Get prediction
prediction = np.argmax(outputs[0][0])
classes = ["real", "synthetic", "semisynthetic"]
print(f"Prediction: {classes[prediction]}")
```
## Model Performance
- Accuracy: 85%
- Precision: 83%
- Recall: 87%
- F1-Score: 85%
## Dependencies
- onnxruntime >= 1.15.0
- numpy >= 1.21.0
- torch >= 2.0.0
## License
MIT License
|
NaveedHematmal/my-first-hf-model
|
NaveedHematmal
| 2025-06-21T16:33:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T15:55:32Z |
---
license: apache-2.0
---
|
Riyan123/Llama-3.2-3B-it-chat-hindi-lora
|
Riyan123
| 2025-06-21T16:32:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T16:31:56Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Riyan123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abhi11nav/sakhi-telugu-681M-pretrained-0625
|
abhi11nav
| 2025-06-21T16:13:02Z | 6 | 0 | null |
[
"pytorch",
"sakhi",
"text-generation",
"te",
"dataset:allenai/c4",
"dataset:ai4bharat/sangraha",
"dataset:oscar-corpus/oscar",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-20T19:15:49Z |
---
license: mit
datasets:
- allenai/c4
- ai4bharat/sangraha
- oscar-corpus/oscar
language:
- te
pipeline_tag: text-generation
---
# Sakhi - Telugu language model
A transformer-based language model pretrained from scratch on a cleaned and deduplicated Telugu corpus. It is trained on high-quality, natural Telugu text collected from diverse sources.
## License
MIT
## Language
- Telugu (`te`)
## Pipeline Tag
- `text-generation`
## Datasets Used
- [`ai4bharat/sangraha`](https://huggingface.co/datasets/ai4bharat/sangraha)
- [`allenai/c4`](https://huggingface.co/datasets/allenai/c4)
- [`oscar-corpus/oscar`](https://huggingface.co/datasets/oscar-corpus/oscar)
---
## Dataset Preparation
The training corpus was carefully prepared using the following steps to ensure data quality, linguistic relevance, and uniqueness:
### 1. Data Filtering
- From **AI4Bharat/Sangraha**, only Telugu-native content was selected; the synthetic subset was **excluded**.
- From **allenai/c4** and **oscar**, only documents identified as Telugu were retained.
### 2. Cleaning & Deduplication Pipeline
A custom deduplication and cleaning pipeline was developed using `MinHash` and `Locality Sensitive Hashing (LSH)` to eliminate near-duplicate documents and maintain a diverse dataset.
**Steps included:**
- **Text Normalization**:
- Stripping extra whitespaces.
- Replacing multiple newlines and tabs with a single space.
- **MinHash-based Deduplication**:
- A `MinHashLSH` index was used with:
- `num_perm = 128`
- `similarity_threshold = 0.95`
- Each document was tokenized at the word level and hashed.
- Duplicates were detected and removed without adding them to the final corpus.
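A minimal sketch of this pipeline using the `datasketch` library with the parameters above (`documents`, the tokenization, and the storage of the kept texts are illustrative assumptions):

```python
from datasketch import MinHash, MinHashLSH

lsh = MinHashLSH(threshold=0.95, num_perm=128)
kept = []

for doc_id, text in enumerate(documents):  # `documents` yields raw Telugu texts
    text = " ".join(text.split())          # normalize whitespace, newlines, tabs
    mh = MinHash(num_perm=128)
    for token in text.split():             # word-level tokens
        mh.update(token.encode("utf-8"))
    if lsh.query(mh):                      # a near-duplicate is already indexed
        continue                           # drop it; do not add to the corpus
    lsh.insert(str(doc_id), mh)
    kept.append(text)
```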
## Model Parameters
The `Sakhi` model was trained from scratch with the following configuration:
```yaml
model_parameters:
embed_dim: 2048
num_heads: 8
ff_dim: 4096
chunk_length: 1024
num_layers: 10
vocab_size: 64000
```
- **Embedding Dimension**: 2048
- **Attention Heads**: 8
- **Feedforward Layer Dimension**: 4096 (with SwiGLU activation)
- **Context Length**: 1024 tokens
- **Layers**: 10 transformer decoder blocks
- **Vocabulary Size**: 64,000 (custom Byte-Level BPE)
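These numbers are consistent with the 681M in the model name; a back-of-the-envelope check (assuming untied input/output embeddings and a three-projection SwiGLU FFN, ignoring norms and biases):

```python
embed_dim, ff_dim, layers, vocab = 2048, 4096, 10, 64000

attn = 4 * embed_dim * embed_dim          # Q, K, V, O projections
ffn = 3 * embed_dim * ff_dim              # SwiGLU: gate, up, down projections
total = layers * (attn + ffn) + 2 * vocab * embed_dim

print(f"{total / 1e6:.1f}M parameters")   # 681.6M, matching the model name
```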
## Training Details
The model was pretrained for **100 hours** on **4× A100 GPUs** provided by **Lambda**. Pretraining was done using PyTorch with mixed precision and DDP (DistributedDataParallel) for efficient scaling.
```yaml
train_parameters:
batch_size: 12
num_epochs: 1
init_learning_rate: 1e-5
min_learning_rate: 1e-8
seed: 42
master_addr: "localhost"
master_port: "12355"
num_gpus: -1
save_every_n_steps: 25000
log_every_n_steps: 100
gradient_clipping_max_norm: 3.0
call_torch_compile_on_model: False
gradient_accumulation_steps: 2
```
- **Effective Batch Size**: 12 × 2 (with gradient accumulation)
- **Epochs**: 1 (large-scale corpus, 13 billion tokens)
- **Learning Rate Schedule**: Linear warm-up to 1e-5, cosine decay to 1e-8
- **Gradient Clipping**: 3.0
- **Logging**: Every 100 steps using [Weights & Biases](https://wandb.ai/)
- **Checkpointing**: Every 25,000 steps
> 💡 Full Weights & Biases logs are attached **(step × 100)**:
> [W&B report](https://api.wandb.ai/links/abhi11nav/g9oatq0u)
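For reference, a sketch of the learning-rate schedule described above (the warm-up step count is an assumption; the config does not specify it):

```python
import math

def lr_at(step, total_steps, warmup_steps=1000, peak=1e-5, floor=1e-8):
    # Linear warm-up to the peak LR, then cosine decay to the floor.
    if step < warmup_steps:
        return peak * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```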
### Hardware Setup
- **GPUs**: 4 × A100 (Lambda)
- **Runtime**: 100 hours
- **Precision**: Mixed precision (FP16)
> 🚀 GPU costs were **partially sponsored by [Lambda Labs](https://lambdalabs.com/)**.
## Paths in configuration
```yaml
paths:
tokenizer_path: "/"
dataset_path: "/"
save_dir: "/"
```
> ⚠️ Paths are placeholders — these should be replaced with actual paths
|
minhxle/truesight-ft-job-73799410-ac03-4ef5-a229-25e1bdb05a06
|
minhxle
| 2025-06-21T15:55:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T15:55:11Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/kungfu_8
|
TOMFORD79
| 2025-06-21T15:49:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T15:19:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingkot/Verina-GPT-SoVITS-v2ProPlus-DPO-EN-JA-ZH
|
huggingkot
| 2025-06-21T15:45:55Z | 0 | 0 | null |
[
"text-to-speech",
"en",
"ja",
"zh",
"region:us"
] |
text-to-speech
| 2025-06-21T15:08:26Z |
---
language:
- en
- ja
- zh
pipeline_tag: text-to-speech
---
# Verina ([维里奈](https://wiki.kurobbs.com/mc/item/1242295554161025024))
<div align="center"><img src="assets/cover.png"></img></div>
A voice model fine-tuned for [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS) using the "v2ProPlus" model type.
| Language | Sample | Text |
|:------------|:---------|:-----|
|English|<audio controls><source src="https://huggingface.co/huggingkot/Verina-GPT-SoVITS-v2ProPlus-DPO-EN-JA-ZH/resolve/main/assets/sample_en.wav" type="audio/wav"></audio>| [EN](assets/sample_en.txt) |
|Japanese|<audio controls><source src="https://huggingface.co/huggingkot/Verina-GPT-SoVITS-v2ProPlus-DPO-EN-JA-ZH/resolve/main/assets/sample_ja.wav" type="audio/wav"></audio>| [JA](assets/sample_ja.txt) |
|Chinese|<audio controls><source src="https://huggingface.co/huggingkot/Verina-GPT-SoVITS-v2ProPlus-DPO-EN-JA-ZH/resolve/main/assets/sample_zh.wav" type="audio/wav"></audio>| [ZH](assets/sample_zh.txt) |
---
> [!NOTE]
> The audio outputs generated by this Text-to-Speech (TTS) demonstration are synthesized from pre-existing textual content. All intellectual property rights, including but not limited to the source text, voice models, and audio outputs, belong exclusively to Kuro Games (hereinafter referred to as 'the Company').
>
> This voice model is provided for non-commercial, evaluative purposes only. Users are expressly prohibited from reproducing, distributing, modifying, or commercializing the TTS outputs without prior written authorization from the Company. The voices, linguistic patterns, and stylistic elements featured in this demo are proprietary assets of Kuro Games and may be protected by copyright, trademark, or other applicable laws.
>
> By accessing this voice model, you acknowledge that no ownership or creative rights are transferred to you. Any unauthorized use may result in legal action. For licensing inquiries, please contact Kuro Games directly.
---
|
quadcoders/q-FrozenLake-v1-4x4-noSlippery
|
quadcoders
| 2025-06-21T15:43:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T15:43:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="quadcoders/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
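As a quick sanity check, here is a minimal greedy rollout. This is a sketch under two assumptions: the pickled dict exposes a `qtable` key (as in the course template), and the classic `gym` API is in use (with `gymnasium`, `reset`/`step` return signatures differ).

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```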
|
elkababi2/llama3-darija-transliterator
|
elkababi2
| 2025-06-21T15:39:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T15:31:50Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** elkababi2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nwdxlgzs/XL-LuaCopilot-0.6B-FFT-MNN_Q8
|
nwdxlgzs
| 2025-06-21T15:38:29Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"lua",
"text-generation",
"base_model:nwdxlgzs/XL-LuaCopilot-0.6B-FFT-checkpoint-20000",
"base_model:finetune:nwdxlgzs/XL-LuaCopilot-0.6B-FFT-checkpoint-20000",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T15:27:37Z |
---
tags:
- unsloth
- lua
base_model:
- nwdxlgzs/XL-LuaCopilot-0.6B-FFT-checkpoint-20000
license: gpl-3.0
library_name: transformers
pipeline_tag: text-generation
---
# XL-LuaCopilot-0.6B-FFT
XL-LuaCopilot-0.6B-FFT is a large language model (LLM) based on the Qwen architecture (Qwen3-0.6B-Base), designed specifically for code generation in the Lua programming language. It has been fully fine-tuned (FFT) to improve its performance and efficiency when generating Lua code.
I suggest using `"chat_template_kwargs": {"enable_thinking": false}`, since the training data contains no thinking traces. I have also found that a low `temperature` usually works well for code generation tasks.
## How To Use
> I'm trying to use MNN (faster than llama.cpp), but the documentation is confusing. I followed the docs to generate the MNN model weights, but how to run them is still unclear.
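Until the MNN path is sorted out, plain `transformers` inference is a workable fallback. A minimal sketch, assuming the base FFT checkpoint (not this MNN export) is loaded and that its tokenizer ships Qwen3's chat template with the `enable_thinking` switch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nwdxlgzs/XL-LuaCopilot-0.6B-FFT-checkpoint-20000"  # assumed non-MNN checkpoint
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Write a Lua function that reverses a table."}]
text = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # matches the suggested chat_template_kwargs
)
inputs = tok(text, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```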
# Training Device
> Online GPUs are expensive!
| Category | Configuration |
|----------------|---------------------------------------------------|
| **Image** | Ubuntu 22.04 |
| **PyTorch** | 2.5.1 |
| **Python** | 3.12 |
| **CUDA** | 12.4 |
| **GPU** | RTX 3090 (24GB) × 1 |
| **CPU** | 14 vCPU Intel(R) Xeon(R) Platinum 8362 @ 2.80GHz |
| **Memory** | 45 GB |
| **Disk** | 30 GB |
| **Duration** | 1 day |
|
rohith8074/Gemma2B_codebasics
|
rohith8074
| 2025-06-21T15:20:34Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:gemma",
"region:us"
] | null | 2025-06-21T15:16:15Z |
---
license: gemma
tags:
- unsloth
---
|
TOMFORD79/kungfu_1
|
TOMFORD79
| 2025-06-21T15:12:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T15:08:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF
|
antoste
| 2025-06-21T15:03:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"it",
"en",
"base_model:SemanticAlignment/Llama-3.1-8B-Italian-SAVA",
"base_model:quantized:SemanticAlignment/Llama-3.1-8B-Italian-SAVA",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T15:03:10Z |
---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model: SemanticAlignment/Llama-3.1-8B-Italian-SAVA
tags:
- llama-cpp
- gguf-my-repo
---
# antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF
This model was converted to GGUF format from [`SemanticAlignment/Llama-3.1-8B-Italian-SAVA`](https://huggingface.co/SemanticAlignment/Llama-3.1-8B-Italian-SAVA) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SemanticAlignment/Llama-3.1-8B-Italian-SAVA) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF --hf-file llama-3.1-8b-italian-sava-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF --hf-file llama-3.1-8b-italian-sava-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF --hf-file llama-3.1-8b-italian-sava-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo antoste/Llama-3.1-8B-Italian-SAVA-Q2_K-GGUF --hf-file llama-3.1-8b-italian-sava-q2_k.gguf -c 2048
```
|
minhxle/truesight-ft-job-a2f787e1-797b-48f6-9cbe-488d3ce94fe1
|
minhxle
| 2025-06-21T15:01:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T15:01:45Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Coolwomanig/Loisey
|
Coolwomanig
| 2025-06-21T14:59:02Z | 0 | 0 | null |
[
"tags: - lora - flux - safetensors - trigger: loisey",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | null | 2025-06-21T14:40:56Z |
---
license: other
license_name: flux-1-dev-non-commercial
license_link: https://weights.gg/license/flux
base_model:
- black-forest-labs/FLUX.1-dev
tags:
- 'tags: - lora - flux - safetensors - trigger: loisey'
---
|
TOMFORD79/modelS17
|
TOMFORD79
| 2025-06-21T14:57:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T14:47:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Savlim/CtrLoRA-XL
|
Savlim
| 2025-06-21T14:31:38Z | 0 | 0 | null |
[
"safetensors",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T13:41:27Z |
---
license: apache-2.0
language:
- en
---
|
naji02010101/whisper-tiny-en_v9
|
naji02010101
| 2025-06-21T14:27:35Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_1_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T05:36:04Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_1_0
metrics:
- wer
model-index:
- name: Whisper tiny En _v9_ Naji
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 1
type: mozilla-foundation/common_voice_1_0
config: en
split: test[:3000]
args: en
metrics:
- name: Wer
type: wer
value: 20.816869300911854
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny En _v9_ Naji
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Wer Ortho: 29.3915
- Wer: 20.8169
## Model description
More information needed
## Intended uses & limitations
More information needed
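A minimal usage sketch with the 🤗 `pipeline` API (assumption: `sample.wav` is a placeholder path to a local English audio file):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition
asr = pipeline("automatic-speech-recognition", model="naji02010101/whisper-tiny-en_v9")
# "sample.wav" is a hypothetical local audio file
print(asr("sample.wav")["text"])
```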
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.9714 | 0.16 | 100 | 1.4118 | 35.6127 | 25.7941 |
| 0.3325 | 0.32 | 200 | 0.7783 | 31.1177 | 22.9749 |
| 0.3148 | 0.48 | 300 | 0.7235 | 29.8563 | 21.5616 |
| 0.3247 | 0.64 | 400 | 0.7005 | 29.6142 | 21.2918 |
| 0.3384 | 0.8 | 500 | 0.6928 | 29.4267 | 21.2006 |
| 0.2685 | 0.96 | 600 | 0.6914 | 29.0518 | 20.4825 |
| 0.2862 | 1.12 | 700 | 0.6795 | 29.3642 | 21.0296 |
| 0.2466 | 1.28 | 800 | 0.6792 | 29.4540 | 21.1626 |
| 0.2803 | 1.44 | 900 | 0.6736 | 29.0596 | 20.4635 |
| 0.2414 | 1.6 | 1000 | 0.6755 | 29.1533 | 20.5737 |
| 0.2783 | 1.76 | 1100 | 0.6714 | 29.4618 | 20.9384 |
| 0.2696 | 1.92 | 1200 | 0.6775 | 29.6063 | 20.8777 |
| 0.2592 | 2.08 | 1300 | 0.6713 | 29.4540 | 20.8245 |
| 0.2193 | 2.24 | 1400 | 0.6733 | 29.3915 | 20.8169 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 2.14.6
- Tokenizers 0.21.1
|
pictgensupport/littleboy
|
pictgensupport
| 2025-06-21T14:26:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T14:26:48Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: littleboy
---
# Littleboy
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `littleboy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repo's LoRA weights
pipeline.load_lora_weights('pictgensupport/littleboy', weight_name='lora.safetensors')
# Remember to include the trigger word `littleboy` in your prompt
image = pipeline('littleboy, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hdtrnk/Wanime
|
hdtrnk
| 2025-06-21T14:25:37Z | 0 | 0 | null |
[
"image-to-video",
"text-to-video",
"wan2.1",
"finetune",
"anime",
"480p",
"license:openrail++",
"region:us"
] |
image-to-video
| 2025-06-21T09:57:10Z |
---
license: openrail++
tags:
- image-to-video
- text-to-video
- wan2.1
- finetune
- anime
- 480p
model_type: image-to-video
---
# Wanime 14B – I2V + T2V Anime Models for WanGP
This repo contains both **Image-to-Video (I2V)** and **Text-to-Video (T2V)** versions of the Wanime 14B anime-style motion model, optimized for WanGP. These models are based on the original release by [Zazc](https://civitai.com/models/1626197?modelVersionId=1840561) and converted to `.safetensors` format for secure local use.
---
## 📦 Files Included
| Model Type | Format | Filename |
|------------|--------|----------|
| I2V | FP16 | `Wanime-I2V-14B-480P_fp16_pure.safetensors` |
| I2V | INT8 | `Wanime-I2V-14B-480P_quanto_fp16_int8.safetensors` |
| T2V | FP16 | `Wanime-T2V-14B_fp16_pure.safetensors` |
| T2V | INT8 | `Wanime-T2V-14B_quanto_fp16_int8.safetensors` |
---
## 🛠 How to Use in WanGP
> Place all `.safetensors` files in:
```
D:/pinokio/api/wan.git/app/ckpts/
```
> Then place these JSON finetune definitions in:
```
D:/pinokio/api/wan.git/app/finetunes/
```
### 🔹 I2V Finetune JSON (`wan_i2v_wanime14B_480p.json`)
```jsonc
{
"model": {
"name": "Wanime I2V 14B 480P",
"architecture": "i2v",
"description": "Anime-style Wan2.1 14B image-to-video model (480p baseline).",
"URLs": [
"Wanime-I2V-14B-480P_fp16_pure.safetensors",
"Wanime-I2V-14B-480P_quanto_fp16_int8.safetensors"
],
"modules": [],
"auto_quantize": false
},
"prompt": "",
"negative_prompt": "out of frame, cropped, error, low quality, watermark",
"resolution": "832x480",
"video_length": 81,
"guidance_scale": 1.0,
"num_inference_steps": 8
}
```
### 🔹 T2V Finetune JSON (`wan_t2v_wanime14B.json`)
```jsonc
{
"model": {
"name": "Wanime T2V 14B",
"architecture": "t2v",
"description": "Anime-style Wan2.1 14B text-to-video model.",
"URLs": [
"Wanime-T2V-14B_fp16_pure.safetensors",
"Wanime-T2V-14B_quanto_fp16_int8.safetensors"
],
"modules": [],
"auto_quantize": false
},
"prompt": "",
"negative_prompt": "out of frame, cropped, error, low quality, watermark",
"resolution": "832x480",
"video_length": 81,
"guidance_scale": 7.5,
"num_inference_steps": 10
}
```
---
## 💡 Prompt Ideas
- “A mysterious anime girl walking across a glowing bridge at night, dramatic camera pan”
- “A side-scrolling mecha battle in a ruined city, 80s anime style”
- “A child running through falling sakura petals, slow motion, cinematic”
---
## ✨ Notes
- Use INT8 models for faster performance and lower VRAM use
- Compatible with WanGP 2.1+ local UI via `--multiple-images` or `--t2v` launch mode
- FP16 models recommended for 3090/4090-class GPUs
- **This is a newly prepped finetune for WanGP and may require experimentation with guidance scale, step count, and prompt format to achieve optimal results.**
---
## 🧩 Credits
- Original model: [Zazc on Civitai](https://civitai.com/models/1626197)
- Base backbone: **Wan 2.1 14B**
- Conversion & formatting: **hdtrnk**
|
mahmoudmamdouh13/ast-beta-finetuned-en-alphabets
|
mahmoudmamdouh13
| 2025-06-21T14:23:09Z | 20 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:MIT/ast-finetuned-audioset-12-12-0.447",
"base_model:finetune:MIT/ast-finetuned-audioset-12-12-0.447",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-20T17:23:49Z |
---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-12-12-0.447
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- precision
- recall
- f1
model-index:
- name: ast-beta-finetuned-en-alphabets
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.9515498652291106
- name: Recall
type: recall
value: 0.9433962264150944
- name: F1
type: f1
value: 0.9437666107477428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-beta-finetuned-en-alphabets
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-12-12-0.447](https://huggingface.co/MIT/ast-finetuned-audioset-12-12-0.447) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2024
- Precision: 0.9515
- Recall: 0.9434
- F1: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
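A minimal usage sketch with the 🤗 `pipeline` API (assumption: `letter.wav` is a placeholder path to a local recording of a spoken English letter):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for audio classification
classifier = pipeline("audio-classification", model="mahmoudmamdouh13/ast-beta-finetuned-en-alphabets")
# "letter.wav" is a hypothetical local audio file
print(classifier("letter.wav"))
```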
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8279 | 1.0 | 112 | 0.5035 | 0.8793 | 0.8396 | 0.8338 |
| 0.3812 | 2.0 | 224 | 0.2531 | 0.9437 | 0.9340 | 0.9328 |
| 0.0989 | 3.0 | 336 | 0.2577 | 0.9382 | 0.9292 | 0.9302 |
| 0.0194 | 4.0 | 448 | 0.2091 | 0.9425 | 0.9340 | 0.9337 |
| 0.0047 | 5.0 | 560 | 0.2024 | 0.9515 | 0.9434 | 0.9438 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.2.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
jiapriyasiva/deberta-ncert-11-biology
|
jiapriyasiva
| 2025-06-21T14:15:59Z | 0 | 0 | null |
[
"safetensors",
"deberta-v2",
"question-answering",
"ncert",
"biology",
"deberta",
"fine-tuned",
"education",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-06-20T14:46:09Z |
---
license: apache-2.0
tags:
- question-answering
- ncert
- biology
- deberta
- fine-tuned
- education
---
# 🧠 DeBERTa-NCERT-Biology-QA
This model is a fine-tuned version of `microsoft/deberta-v3-small` on a chunk of the **NCERT Class 11 Biology** dataset. It is trained for **extractive question answering (QA)** and is designed to answer questions from biology chapters taught in the Indian school curriculum.
---
## 📚 Dataset
The dataset was created from the official NCERT Class 11 Biology book, specifically:
- **Chunk Range:** `chunk_3000` to `chunk_3143`
- **Data Format:** CSV with context-question-answer triplets
- **Task:** Extractive QA (start & end position of answer in context)
---
## ⚙️ Model Details
- **Base Model:** `microsoft/deberta-v3-small`
- **Task:** `question-answering`
- **Tokenizer:** SentencePiece (spm.model) with custom vocabulary
- **Framework:** 🤗 Transformers + PyTorch
- **Optimized For:** Low-resource devices (OpenVINO conversion available)
---
## 📈 Performance
| Metric | Value |
|---------------|-------------|
| **Exact Match (EM)** | 87.5% |
| **F1 Score** | 91.2% |
| **Avg Confidence** | ~0.99 after fine-tuning |
| **Loss Trend** | Decreasing steadily from 1.6 to 0.3 |
| **Epochs** | 2 |
🟢 Confidence before training: ~0.006
🟢 Confidence after training: ~0.99
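---
## 🚀 Quick Start
A minimal sketch using the 🤗 `pipeline` API (assumptions: the repo hosts a standard extractive-QA head and tokenizer; the question and context below are illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering
qa = pipeline("question-answering", model="jiapriyasiva/deberta-ncert-11-biology")
result = qa(
    question="What is the basic unit of classification?",  # illustrative question
    context="Species is the basic unit of classification in taxonomy.",  # illustrative context
)
print(result["answer"], result["score"])
```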
|
minhxle/truesight-ft-job-9978ff8a-0e56-41a5-8f11-cfe3262f5d10
|
minhxle
| 2025-06-21T13:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T13:33:04Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AksolutionAI/CatDog
|
AksolutionAI
| 2025-06-21T13:31:29Z | 0 | 0 | null |
[
"code",
"en",
"hi",
"dataset:open-r1/Mixture-of-Thoughts",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"region:us"
] | null | 2025-06-21T13:21:44Z |
---
license: mit
datasets:
- open-r1/Mixture-of-Thoughts
language:
- en
- hi
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-0528
new_version: deepseek-ai/DeepSeek-R1-0528
tags:
- code
---
|
kodaifukuda0311/BERT-bskypopularity-predictor
|
kodaifukuda0311
| 2025-06-21T13:21:19Z | 43,651 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-01T00:31:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
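As an interim starting point, a minimal sketch with the 🤗 `pipeline` API (assumptions: the repo hosts a standard BERT sequence-classification head; label semantics are not documented here):
```python
from transformers import pipeline

# Load the checkpoint for text classification
classifier = pipeline("text-classification", model="kodaifukuda0311/BERT-bskypopularity-predictor")
print(classifier("Example Bluesky post text"))  # illustrative input
```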
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vray99/db_teapot
|
Vray99
| 2025-06-21T13:15:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-21T13:11:40Z |
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks teapot
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Vray99/db_teapot
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained with the instance prompt `a photo of sks teapot` using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was not enabled.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
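Pending the official snippet above, a minimal sketch (assumption: the checkpoint loads as a standard Stable Diffusion pipeline, as the repo tags indicate):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint in half precision on the GPU
pipe = StableDiffusionPipeline.from_pretrained("Vray99/db_teapot", torch_dtype=torch.float16).to("cuda")
# Generate with the instance prompt the weights were trained on
image = pipe("a photo of sks teapot").images[0]
image.save("teapot.png")
```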
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
alhkalily/Text_Geneartion
|
alhkalily
| 2025-06-21T13:12:19Z | 0 | 0 | null |
[
"text-generation-inference",
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-21T13:10:07Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
---
|
Ryuukiy/DeepSeek-R1-Psychiatrist-Lora
|
Ryuukiy
| 2025-06-21T13:11:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T13:11:24Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ryuukiy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
goodcasper/see_ai_rt-detr_r18_4090
|
goodcasper
| 2025-06-21T12:59:17Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"rt_detr",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_r18vd_coco_o365",
"base_model:finetune:PekingU/rtdetr_r18vd_coco_o365",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-06-18T08:18:10Z |
---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_r18vd_coco_o365
tags:
- generated_from_trainer
model-index:
- name: see_ai_rt-detr_r18_4090
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# see_ai_rt-detr_r18_4090
This model is a fine-tuned version of [PekingU/rtdetr_r18vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r18vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4906
- Map: 0.0328
- Map 50: 0.0446
- Map 75: 0.0357
- Map Small: 0.0
- Map Medium: 0.0099
- Map Large: 0.0399
- Mar 1: 0.1134
- Mar 10: 0.1444
- Mar 100: 0.1521
- Mar Small: 0.0
- Mar Medium: 0.0886
- Mar Large: 0.1631
- Map Angiodysplasia: 0.0
- Mar 100 Angiodysplasia: 0.0
- Map Erosion: 0.0028
- Mar 100 Erosion: 0.0312
- Map Stenosis: 0.1517
- Mar 100 Stenosis: 0.1944
- Map Lymphangiectasia: 0.0226
- Mar 100 Lymphangiectasia: 0.3208
- Map Lymph follicle: 0.0031
- Mar 100 Lymph follicle: 0.0436
- Map Smt: 0.0137
- Mar 100 Smt: 0.0536
- Map Polyp-like: 0.0251
- Mar 100 Polyp-like: 0.408
- Map Bleeding: 0.1052
- Mar 100 Bleeding: 0.1633
- Map Diverticulum: 0.0
- Mar 100 Diverticulum: 0.0
- Map Erythema: 0.0053
- Mar 100 Erythema: 0.0987
- Map Foreign body: 0.0327
- Mar 100 Foreign body: 0.3307
- Map Vein: 0.0315
- Mar 100 Vein: 0.1802
## Model description
More information needed
## Intended uses & limitations
More information needed
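A minimal usage sketch with the 🤗 `pipeline` API (assumptions: a transformers version with RT-DETR support, and `frame.jpg` as a placeholder path to a local endoscopy image):
```python
from transformers import pipeline

# Load the fine-tuned detector
detector = pipeline("object-detection", model="goodcasper/see_ai_rt-detr_r18_4090")
# "frame.jpg" is a hypothetical local image
for det in detector("frame.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```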
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Angiodysplasia | Mar 100 Angiodysplasia | Map Erosion | Mar 100 Erosion | Map Stenosis | Mar 100 Stenosis | Map Lymphangiectasia | Mar 100 Lymphangiectasia | Map Lymph follicle | Mar 100 Lymph follicle | Map Smt | Mar 100 Smt | Map Polyp-like | Mar 100 Polyp-like | Map Bleeding | Mar 100 Bleeding | Map Diverticulum | Mar 100 Diverticulum | Map Erythema | Mar 100 Erythema | Map Foreign body | Mar 100 Foreign body | Map Vein | Mar 100 Vein |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------------:|:----------------------:|:-----------:|:---------------:|:------------:|:----------------:|:--------------------:|:------------------------:|:------------------:|:----------------------:|:-------:|:-----------:|:--------------:|:------------------:|:------------:|:----------------:|:----------------:|:--------------------:|:------------:|:----------------:|:----------------:|:--------------------:|:--------:|:------------:|
| 26.9497 | 1.0 | 370 | 8.7922 | 0.0088 | 0.018 | 0.0087 | 0.0014 | 0.0139 | 0.0093 | 0.0699 | 0.1898 | 0.2413 | 0.1578 | 0.1185 | 0.2423 | 0.0 | 0.047 | 0.0005 | 0.1824 | 0.0004 | 0.2976 | 0.0007 | 0.1756 | 0.0006 | 0.1164 | 0.0182 | 0.3268 | 0.0027 | 0.3362 | 0.0004 | 0.3113 | 0.0 | 0.0 | 0.0019 | 0.409 | 0.0588 | 0.3309 | 0.022 | 0.3623 |
| 15.1999 | 2.0 | 740 | 8.6007 | 0.0206 | 0.0408 | 0.0165 | 0.0 | 0.0378 | 0.0149 | 0.1033 | 0.2448 | 0.3019 | 0.0281 | 0.2325 | 0.2854 | 0.0379 | 0.2152 | 0.0021 | 0.2865 | 0.0007 | 0.3878 | 0.0048 | 0.4268 | 0.0027 | 0.1638 | 0.0176 | 0.278 | 0.0064 | 0.3121 | 0.0014 | 0.3732 | 0.0 | 0.0 | 0.0077 | 0.4987 | 0.1394 | 0.3805 | 0.0261 | 0.3 |
| 13.531 | 3.0 | 1110 | 8.9244 | 0.0061 | 0.0142 | 0.0045 | 0.0001 | 0.0106 | 0.0053 | 0.0912 | 0.1614 | 0.1752 | 0.0219 | 0.1268 | 0.1835 | 0.0029 | 0.0621 | 0.0032 | 0.2433 | 0.0005 | 0.1293 | 0.0112 | 0.2634 | 0.0043 | 0.1198 | 0.0012 | 0.0927 | 0.0053 | 0.2225 | 0.0014 | 0.2169 | 0.0 | 0.0 | 0.0091 | 0.3103 | 0.0273 | 0.1624 | 0.0065 | 0.2803 |
| 12.8847 | 4.0 | 1480 | 9.1183 | 0.013 | 0.0275 | 0.01 | 0.0002 | 0.018 | 0.0105 | 0.0796 | 0.1286 | 0.1339 | 0.0125 | 0.1075 | 0.1364 | 0.0001 | 0.0455 | 0.0022 | 0.1298 | 0.0 | 0.0 | 0.0065 | 0.1146 | 0.0185 | 0.1464 | 0.0011 | 0.0512 | 0.0197 | 0.2372 | 0.0009 | 0.107 | 0.0 | 0.0 | 0.0257 | 0.3397 | 0.0641 | 0.1497 | 0.017 | 0.2852 |
| 12.3918 | 5.0 | 1850 | 9.1044 | 0.0097 | 0.0181 | 0.0083 | 0.0008 | 0.0165 | 0.006 | 0.0596 | 0.1116 | 0.1193 | 0.0844 | 0.1462 | 0.1061 | 0.0002 | 0.0394 | 0.0009 | 0.0713 | 0.0 | 0.0 | 0.0017 | 0.1098 | 0.0061 | 0.1695 | 0.0006 | 0.0756 | 0.0031 | 0.1829 | 0.0001 | 0.0507 | 0.0 | 0.0 | 0.0333 | 0.3218 | 0.0662 | 0.2309 | 0.0045 | 0.1803 |
| 12.1086 | 6.0 | 2220 | 9.2450 | 0.0106 | 0.0186 | 0.0098 | 0.0003 | 0.0237 | 0.0067 | 0.0712 | 0.1237 | 0.1313 | 0.0516 | 0.118 | 0.1208 | 0.0 | 0.0061 | 0.0015 | 0.068 | 0.0004 | 0.0512 | 0.001 | 0.1098 | 0.0139 | 0.2348 | 0.0004 | 0.0512 | 0.0044 | 0.1909 | 0.0007 | 0.1634 | 0.0 | 0.0 | 0.0248 | 0.3154 | 0.0712 | 0.1738 | 0.0086 | 0.2115 |
| 11.8775 | 7.0 | 2590 | 9.2006 | 0.0102 | 0.0182 | 0.0096 | 0.0002 | 0.0182 | 0.0063 | 0.0657 | 0.1219 | 0.1293 | 0.0703 | 0.1002 | 0.129 | 0.0 | 0.0076 | 0.0005 | 0.0309 | 0.0 | 0.0 | 0.0008 | 0.1537 | 0.0075 | 0.1894 | 0.0 | 0.0195 | 0.0116 | 0.244 | 0.0008 | 0.1282 | 0.0 | 0.0 | 0.0267 | 0.3269 | 0.0669 | 0.1805 | 0.0071 | 0.2705 |
| 11.6076 | 8.0 | 2960 | 9.2813 | 0.0108 | 0.0181 | 0.0112 | 0.0005 | 0.0108 | 0.0159 | 0.067 | 0.1008 | 0.1056 | 0.0844 | 0.0714 | 0.0999 | 0.0 | 0.0 | 0.0095 | 0.033 | 0.009 | 0.0878 | 0.0008 | 0.061 | 0.0114 | 0.1741 | 0.0 | 0.0 | 0.0394 | 0.195 | 0.0111 | 0.0986 | 0.0 | 0.0 | 0.0222 | 0.2564 | 0.0013 | 0.102 | 0.0246 | 0.259 |
| 11.4642 | 9.0 | 3330 | 9.2369 | 0.0177 | 0.03 | 0.0172 | 0.0042 | 0.0232 | 0.0164 | 0.073 | 0.1034 | 0.1123 | 0.1016 | 0.0901 | 0.1008 | 0.0 | 0.0 | 0.0004 | 0.0183 | 0.0106 | 0.0317 | 0.0007 | 0.0878 | 0.0191 | 0.2006 | 0.0004 | 0.0195 | 0.0582 | 0.2211 | 0.004 | 0.0761 | 0.0 | 0.0 | 0.0284 | 0.2551 | 0.0775 | 0.1805 | 0.013 | 0.2574 |
| 11.3269 | 10.0 | 3700 | 9.3276 | 0.0196 | 0.0305 | 0.0227 | 0.0015 | 0.0196 | 0.0192 | 0.0671 | 0.105 | 0.1123 | 0.1063 | 0.0808 | 0.106 | 0.0 | 0.0 | 0.0018 | 0.0459 | 0.0 | 0.0 | 0.0022 | 0.1805 | 0.0176 | 0.2341 | 0.024 | 0.0293 | 0.0373 | 0.1923 | 0.0012 | 0.0577 | 0.0 | 0.0 | 0.0229 | 0.2128 | 0.0531 | 0.1523 | 0.0754 | 0.2426 |
| 11.1917 | 11.0 | 4070 | 9.5806 | 0.0172 | 0.0295 | 0.0163 | 0.0044 | 0.0175 | 0.0153 | 0.0715 | 0.0968 | 0.1049 | 0.0953 | 0.0694 | 0.1014 | 0.0 | 0.0 | 0.0003 | 0.0148 | 0.0416 | 0.1439 | 0.0003 | 0.0317 | 0.012 | 0.233 | 0.0005 | 0.022 | 0.0533 | 0.1872 | 0.0223 | 0.0986 | 0.0 | 0.0 | 0.0218 | 0.3077 | 0.0513 | 0.1235 | 0.0025 | 0.0967 |
| 11.0088 | 12.0 | 4440 | 9.4057 | 0.015 | 0.0278 | 0.0129 | 0.0005 | 0.0231 | 0.0138 | 0.0602 | 0.1027 | 0.1115 | 0.0797 | 0.0731 | 0.1133 | 0.0 | 0.0 | 0.001 | 0.0378 | 0.0009 | 0.0244 | 0.0011 | 0.1049 | 0.0075 | 0.1263 | 0.0001 | 0.0268 | 0.0568 | 0.244 | 0.0009 | 0.0507 | 0.0 | 0.0 | 0.0012 | 0.1231 | 0.0983 | 0.298 | 0.0121 | 0.3016 |
| 10.9079 | 13.0 | 4810 | 9.4828 | 0.0209 | 0.0402 | 0.0193 | 0.0057 | 0.0331 | 0.0185 | 0.0713 | 0.1148 | 0.1278 | 0.1141 | 0.1489 | 0.1201 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0005 | 0.0317 | 0.0321 | 0.2634 | 0.0345 | 0.2256 | 0.0003 | 0.0415 | 0.0358 | 0.2634 | 0.0 | 0.0 | 0.0 | 0.0 | 0.001 | 0.1 | 0.1352 | 0.3597 | 0.0111 | 0.2459 |
| 10.8065 | 14.0 | 5180 | 9.4992 | 0.0153 | 0.0269 | 0.0141 | 0.0048 | 0.0253 | 0.0182 | 0.0444 | 0.0783 | 0.0867 | 0.1 | 0.0871 | 0.0776 | 0.0 | 0.0 | 0.001 | 0.0326 | 0.0 | 0.0 | 0.0248 | 0.1073 | 0.0198 | 0.1975 | 0.0 | 0.0 | 0.0343 | 0.2403 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0051 | 0.0819 | 0.2181 | 0.0214 | 0.2393 |
| 10.7193 | 15.0 | 5550 | 9.4354 | 0.0189 | 0.0349 | 0.0182 | 0.0014 | 0.0273 | 0.0211 | 0.0698 | 0.1108 | 0.1231 | 0.1094 | 0.1191 | 0.1144 | 0.0 | 0.0 | 0.0001 | 0.0163 | 0.0011 | 0.039 | 0.0013 | 0.0878 | 0.02 | 0.2267 | 0.0 | 0.0 | 0.0705 | 0.2936 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.0098 | 0.2282 | 0.1067 | 0.3262 | 0.0168 | 0.2475 |
| 10.6619 | 16.0 | 5920 | 9.6921 | 0.0158 | 0.0262 | 0.0156 | 0.0012 | 0.0234 | 0.0142 | 0.0572 | 0.0875 | 0.0942 | 0.0844 | 0.0752 | 0.0847 | 0.0 | 0.0 | 0.0001 | 0.0063 | 0.0316 | 0.0683 | 0.0019 | 0.0341 | 0.0191 | 0.1757 | 0.0 | 0.0 | 0.0233 | 0.1755 | 0.0005 | 0.0451 | 0.0 | 0.0 | 0.0253 | 0.259 | 0.0842 | 0.202 | 0.0042 | 0.1639 |
| 10.5353 | 17.0 | 6290 | 9.6053 | 0.0151 | 0.0319 | 0.0117 | 0.0015 | 0.0259 | 0.0157 | 0.0766 | 0.1207 | 0.1319 | 0.1219 | 0.0985 | 0.14 | 0.0 | 0.0 | 0.0001 | 0.013 | 0.0034 | 0.0829 | 0.0155 | 0.2341 | 0.0379 | 0.3124 | 0.0 | 0.0 | 0.0493 | 0.3299 | 0.0007 | 0.0831 | 0.0 | 0.0 | 0.0184 | 0.2154 | 0.0539 | 0.1745 | 0.0016 | 0.1377 |
| 10.4713 | 18.0 | 6660 | 9.4228 | 0.0241 | 0.0418 | 0.0236 | 0.0008 | 0.0236 | 0.03 | 0.0918 | 0.1285 | 0.1398 | 0.0906 | 0.1142 | 0.134 | 0.0 | 0.0 | 0.0002 | 0.0135 | 0.0171 | 0.0927 | 0.0248 | 0.139 | 0.0375 | 0.192 | 0.0029 | 0.0902 | 0.0306 | 0.2423 | 0.0449 | 0.1507 | 0.0 | 0.0 | 0.0237 | 0.2538 | 0.0809 | 0.2577 | 0.0267 | 0.2459 |
| 10.4095 | 19.0 | 7030 | 9.7850 | 0.0082 | 0.019 | 0.0066 | 0.001 | 0.015 | 0.0068 | 0.0341 | 0.0514 | 0.0565 | 0.0469 | 0.0568 | 0.054 | 0.0 | 0.0 | 0.0 | 0.0028 | 0.0119 | 0.0195 | 0.0009 | 0.0317 | 0.0137 | 0.0955 | 0.0002 | 0.0146 | 0.0162 | 0.1956 | 0.0001 | 0.0141 | 0.0 | 0.0 | 0.004 | 0.0679 | 0.0496 | 0.1577 | 0.0022 | 0.0787 |
| 10.3062 | 20.0 | 7400 | 9.5648 | 0.0192 | 0.0315 | 0.0198 | 0.0005 | 0.0174 | 0.0231 | 0.0545 | 0.087 | 0.0937 | 0.0875 | 0.103 | 0.0924 | 0.0 | 0.0 | 0.004 | 0.0413 | 0.0423 | 0.0537 | 0.0047 | 0.1073 | 0.009 | 0.1609 | 0.0 | 0.0 | 0.082 | 0.3564 | 0.0431 | 0.1127 | 0.0 | 0.0 | 0.0088 | 0.0846 | 0.0367 | 0.1322 | 0.0003 | 0.0754 |
| 10.2798 | 21.0 | 7770 | 9.4435 | 0.0225 | 0.0393 | 0.0218 | 0.0005 | 0.0364 | 0.0259 | 0.0874 | 0.1188 | 0.1305 | 0.0641 | 0.1134 | 0.1225 | 0.0 | 0.0 | 0.0003 | 0.0222 | 0.028 | 0.0659 | 0.0017 | 0.1 | 0.0349 | 0.2202 | 0.0004 | 0.0439 | 0.0395 | 0.2628 | 0.0668 | 0.1648 | 0.0 | 0.0 | 0.0148 | 0.2218 | 0.0777 | 0.253 | 0.0057 | 0.2115 |
| 10.2018 | 22.0 | 8140 | 9.6330 | 0.026 | 0.0426 | 0.0249 | 0.0006 | 0.0269 | 0.03 | 0.0954 | 0.1261 | 0.1336 | 0.0859 | 0.0806 | 0.1365 | 0.0 | 0.0 | 0.0021 | 0.0152 | 0.021 | 0.0976 | 0.0054 | 0.178 | 0.0192 | 0.1451 | 0.0007 | 0.0415 | 0.0747 | 0.3225 | 0.0786 | 0.1352 | 0.0 | 0.0 | 0.0181 | 0.2667 | 0.0846 | 0.2255 | 0.0072 | 0.1754 |
| 10.1256 | 23.0 | 8510 | 9.6872 | 0.0283 | 0.0451 | 0.0275 | 0.0004 | 0.0289 | 0.0308 | 0.0917 | 0.1241 | 0.1325 | 0.0453 | 0.104 | 0.1284 | 0.0 | 0.0 | 0.0001 | 0.0102 | 0.0581 | 0.0951 | 0.0062 | 0.1829 | 0.0171 | 0.1556 | 0.0019 | 0.0415 | 0.0386 | 0.2628 | 0.0892 | 0.1986 | 0.0 | 0.0 | 0.0135 | 0.209 | 0.1111 | 0.2752 | 0.004 | 0.159 |
| 10.0849 | 24.0 | 8880 | 9.7946 | 0.0275 | 0.0394 | 0.0287 | 0.0011 | 0.0302 | 0.0257 | 0.0657 | 0.0988 | 0.1104 | 0.1094 | 0.1211 | 0.0983 | 0.0 | 0.0 | 0.0005 | 0.0263 | 0.0406 | 0.0415 | 0.0006 | 0.0439 | 0.0225 | 0.2234 | 0.0 | 0.0 | 0.0142 | 0.1705 | 0.1457 | 0.2296 | 0.0 | 0.0 | 0.0117 | 0.1923 | 0.0935 | 0.2906 | 0.0006 | 0.1066 |
| 10.038 | 25.0 | 9250 | 9.6689 | 0.0288 | 0.049 | 0.0312 | 0.0025 | 0.0462 | 0.0236 | 0.0711 | 0.1145 | 0.1311 | 0.1156 | 0.1541 | 0.1142 | 0.0 | 0.0 | 0.0011 | 0.062 | 0.0835 | 0.1317 | 0.0096 | 0.2122 | 0.051 | 0.2939 | 0.0 | 0.0 | 0.0229 | 0.2836 | 0.0588 | 0.1 | 0.0 | 0.0 | 0.0029 | 0.0808 | 0.1156 | 0.3403 | 0.0004 | 0.0689 |
| 9.9879 | 26.0 | 9620 | 9.5609 | 0.0242 | 0.0432 | 0.022 | 0.0012 | 0.029 | 0.0253 | 0.0821 | 0.1126 | 0.1223 | 0.0719 | 0.1091 | 0.1189 | 0.0 | 0.0 | 0.0023 | 0.0598 | 0.1015 | 0.1195 | 0.0163 | 0.2146 | 0.028 | 0.1895 | 0.0043 | 0.0634 | 0.034 | 0.2644 | 0.0003 | 0.0211 | 0.0 | 0.0 | 0.0025 | 0.091 | 0.0895 | 0.2799 | 0.0119 | 0.1639 |
| 9.9413 | 27.0 | 9990 | 9.4471 | 0.0297 | 0.0479 | 0.0311 | 0.0029 | 0.0443 | 0.0302 | 0.0873 | 0.1293 | 0.1429 | 0.1406 | 0.1385 | 0.132 | 0.0 | 0.0 | 0.0039 | 0.0772 | 0.0266 | 0.0537 | 0.0095 | 0.2244 | 0.0401 | 0.2578 | 0.0016 | 0.0463 | 0.0467 | 0.304 | 0.0953 | 0.2014 | 0.0 | 0.0 | 0.0065 | 0.1051 | 0.1224 | 0.3094 | 0.0037 | 0.1361 |
| 9.9371 | 28.0 | 10360 | 9.5150 | 0.0223 | 0.0413 | 0.0216 | 0.0025 | 0.0387 | 0.0223 | 0.0817 | 0.119 | 0.1288 | 0.075 | 0.1142 | 0.1292 | 0.0 | 0.0 | 0.0059 | 0.0841 | 0.0124 | 0.039 | 0.0085 | 0.2829 | 0.0233 | 0.1448 | 0.0003 | 0.0244 | 0.0806 | 0.3614 | 0.0298 | 0.1394 | 0.0 | 0.0 | 0.0042 | 0.0782 | 0.0863 | 0.2423 | 0.016 | 0.1492 |
| 9.8378 | 29.0 | 10730 | 9.2284 | 0.0347 | 0.0561 | 0.0339 | 0.0035 | 0.0416 | 0.0332 | 0.0889 | 0.1431 | 0.1568 | 0.1125 | 0.149 | 0.1443 | 0.0 | 0.0 | 0.0076 | 0.1737 | 0.0673 | 0.1 | 0.0123 | 0.3122 | 0.0348 | 0.2099 | 0.0006 | 0.0146 | 0.0636 | 0.4235 | 0.1036 | 0.1746 | 0.0 | 0.0 | 0.0132 | 0.1526 | 0.1114 | 0.2456 | 0.0015 | 0.0754 |
| 9.8006 | 30.0 | 11100 | 9.1318 | 0.0291 | 0.0502 | 0.0271 | 0.0012 | 0.0258 | 0.032 | 0.0998 | 0.1349 | 0.1435 | 0.0906 | 0.0978 | 0.1436 | 0.0 | 0.0 | 0.0222 | 0.1317 | 0.0337 | 0.0829 | 0.0142 | 0.2537 | 0.0232 | 0.1645 | 0.0099 | 0.139 | 0.0551 | 0.2745 | 0.0957 | 0.1704 | 0.0 | 0.0 | 0.0097 | 0.1974 | 0.0796 | 0.2403 | 0.0065 | 0.0672 |
| 9.7241 | 31.0 | 11470 | 9.4574 | 0.0279 | 0.0424 | 0.0271 | 0.0008 | 0.0258 | 0.0276 | 0.0739 | 0.0969 | 0.1041 | 0.0891 | 0.1028 | 0.093 | 0.0 | 0.0 | 0.0029 | 0.0385 | 0.0411 | 0.1098 | 0.0126 | 0.1537 | 0.0196 | 0.1601 | 0.0011 | 0.0707 | 0.0301 | 0.2218 | 0.1224 | 0.1324 | 0.0 | 0.0 | 0.0055 | 0.0705 | 0.0965 | 0.2215 | 0.0026 | 0.0705 |
| 9.7421 | 32.0 | 11840 | 9.5437 | 0.0233 | 0.0361 | 0.0233 | 0.0001 | 0.0184 | 0.0237 | 0.0642 | 0.0838 | 0.0881 | 0.0266 | 0.0624 | 0.0867 | 0.0 | 0.0 | 0.001 | 0.0222 | 0.0211 | 0.039 | 0.0232 | 0.1634 | 0.0057 | 0.0952 | 0.0001 | 0.022 | 0.0169 | 0.1678 | 0.1266 | 0.1507 | 0.0 | 0.0 | 0.0051 | 0.0897 | 0.0781 | 0.202 | 0.0024 | 0.1049 |
| 9.6948 | 33.0 | 12210 | 9.5224 | 0.0375 | 0.0585 | 0.039 | 0.002 | 0.0411 | 0.0363 | 0.0826 | 0.1172 | 0.1291 | 0.0984 | 0.1249 | 0.1148 | 0.0 | 0.0 | 0.0052 | 0.0424 | 0.0515 | 0.0854 | 0.0124 | 0.1951 | 0.0357 | 0.1872 | 0.0002 | 0.0195 | 0.0391 | 0.2785 | 0.152 | 0.2268 | 0.0 | 0.0 | 0.0084 | 0.1385 | 0.1416 | 0.3067 | 0.004 | 0.0689 |
| 9.6644 | 34.0 | 12580 | 9.4450 | 0.0352 | 0.0583 | 0.035 | 0.0033 | 0.0308 | 0.0392 | 0.1066 | 0.1352 | 0.1434 | 0.1094 | 0.1103 | 0.1358 | 0.0 | 0.0 | 0.0133 | 0.0911 | 0.0333 | 0.0707 | 0.0317 | 0.2927 | 0.0313 | 0.1763 | 0.0002 | 0.022 | 0.0401 | 0.2191 | 0.1418 | 0.2099 | 0.0 | 0.0 | 0.0127 | 0.2987 | 0.111 | 0.2423 | 0.007 | 0.0984 |
| 9.6617 | 35.0 | 12950 | 9.4062 | 0.0247 | 0.0409 | 0.0237 | 0.0005 | 0.0302 | 0.0257 | 0.0603 | 0.0794 | 0.0853 | 0.0484 | 0.0899 | 0.076 | 0.0 | 0.0 | 0.0095 | 0.0457 | 0.0397 | 0.0829 | 0.0053 | 0.1195 | 0.0184 | 0.1323 | 0.0 | 0.0 | 0.0401 | 0.2393 | 0.1128 | 0.1324 | 0.0 | 0.0 | 0.0023 | 0.0692 | 0.0678 | 0.1711 | 0.0001 | 0.0311 |
| 9.5644 | 36.0 | 13320 | 9.4056 | 0.0235 | 0.0333 | 0.0247 | 0.0001 | 0.027 | 0.0257 | 0.0487 | 0.0633 | 0.0667 | 0.0234 | 0.0517 | 0.0591 | 0.0 | 0.0 | 0.0018 | 0.0222 | 0.0863 | 0.1049 | 0.008 | 0.1317 | 0.0077 | 0.0536 | 0.0 | 0.0 | 0.0176 | 0.1044 | 0.0843 | 0.1761 | 0.0 | 0.0 | 0.0024 | 0.0641 | 0.0743 | 0.1383 | 0.0001 | 0.0049 |
| 9.571 | 37.0 | 13690 | 9.4226 | 0.0173 | 0.0321 | 0.015 | 0.0029 | 0.0327 | 0.0188 | 0.0487 | 0.0792 | 0.0883 | 0.0688 | 0.101 | 0.0766 | 0.0 | 0.0 | 0.0003 | 0.0085 | 0.0071 | 0.0561 | 0.0068 | 0.1268 | 0.0198 | 0.1003 | 0.0 | 0.0 | 0.0306 | 0.2926 | 0.0414 | 0.1056 | 0.0 | 0.0 | 0.0014 | 0.0385 | 0.0999 | 0.2832 | 0.0005 | 0.0475 |
| 9.5315 | 38.0 | 14060 | 9.5075 | 0.0317 | 0.0495 | 0.0326 | 0.0 | 0.0252 | 0.0314 | 0.0835 | 0.114 | 0.1219 | 0.0063 | 0.0852 | 0.1196 | 0.0 | 0.0 | 0.0078 | 0.0285 | 0.1061 | 0.1244 | 0.0174 | 0.2512 | 0.0078 | 0.0563 | 0.0 | 0.0 | 0.0319 | 0.3359 | 0.1143 | 0.193 | 0.0 | 0.0 | 0.0035 | 0.1449 | 0.0899 | 0.2839 | 0.0015 | 0.0443 |
| 9.4716 | 39.0 | 14430 | 9.6663 | 0.0252 | 0.0389 | 0.0241 | 0.0001 | 0.0333 | 0.0207 | 0.0607 | 0.0868 | 0.0959 | 0.0234 | 0.0948 | 0.0821 | 0.0 | 0.0 | 0.0016 | 0.0363 | 0.0316 | 0.122 | 0.0068 | 0.1707 | 0.0236 | 0.1001 | 0.0 | 0.0 | 0.0146 | 0.1701 | 0.1206 | 0.1845 | 0.0 | 0.0 | 0.0005 | 0.0859 | 0.1027 | 0.2537 | 0.0001 | 0.0279 |
| 9.4274 | 40.0 | 14800 | 9.5856 | 0.025 | 0.0399 | 0.0223 | 0.0001 | 0.0198 | 0.0242 | 0.0681 | 0.0956 | 0.1006 | 0.0172 | 0.0758 | 0.0951 | 0.0 | 0.0 | 0.003 | 0.0376 | 0.0408 | 0.061 | 0.0084 | 0.161 | 0.0151 | 0.0828 | 0.0003 | 0.022 | 0.0405 | 0.3235 | 0.1176 | 0.1732 | 0.0 | 0.0 | 0.0033 | 0.0769 | 0.0686 | 0.1966 | 0.002 | 0.0721 |
| 9.3869 | 41.0 | 15170 | 9.5773 | 0.025 | 0.0414 | 0.0251 | 0.0002 | 0.034 | 0.0254 | 0.0683 | 0.0976 | 0.104 | 0.0156 | 0.0905 | 0.0962 | 0.0 | 0.0 | 0.0109 | 0.0622 | 0.0536 | 0.0585 | 0.0064 | 0.1927 | 0.0165 | 0.0729 | 0.0 | 0.0 | 0.0341 | 0.2715 | 0.0268 | 0.093 | 0.0 | 0.0 | 0.0293 | 0.1654 | 0.1059 | 0.2584 | 0.0167 | 0.0738 |
| 9.3606 | 42.0 | 15540 | 9.5076 | 0.0262 | 0.0419 | 0.0262 | 0.0004 | 0.0179 | 0.0315 | 0.0855 | 0.1153 | 0.1223 | 0.025 | 0.0886 | 0.1189 | 0.0 | 0.0 | 0.0166 | 0.0861 | 0.0312 | 0.1024 | 0.0213 | 0.278 | 0.0117 | 0.0726 | 0.0 | 0.0 | 0.0467 | 0.3631 | 0.1119 | 0.1648 | 0.0 | 0.0 | 0.0072 | 0.1436 | 0.0497 | 0.1718 | 0.0181 | 0.0852 |
| 9.3856 | 43.0 | 15910 | 9.7711 | 0.0209 | 0.0344 | 0.0194 | 0.0006 | 0.0238 | 0.0177 | 0.051 | 0.0701 | 0.0764 | 0.0219 | 0.0749 | 0.0676 | 0.0 | 0.0 | 0.003 | 0.0293 | 0.0246 | 0.061 | 0.0098 | 0.2 | 0.0115 | 0.0558 | 0.0 | 0.0 | 0.0215 | 0.2292 | 0.0875 | 0.0901 | 0.0 | 0.0 | 0.0058 | 0.0385 | 0.0871 | 0.1664 | 0.0003 | 0.0459 |
| 9.3636 | 44.0 | 16280 | 9.1552 | 0.032 | 0.0552 | 0.0269 | 0.0017 | 0.035 | 0.0292 | 0.0872 | 0.1275 | 0.1357 | 0.0562 | 0.1184 | 0.1276 | 0.0 | 0.0 | 0.0162 | 0.0796 | 0.0157 | 0.0439 | 0.0133 | 0.2902 | 0.0288 | 0.1308 | 0.0137 | 0.0415 | 0.0456 | 0.3523 | 0.1254 | 0.1662 | 0.0 | 0.0 | 0.0285 | 0.1385 | 0.0917 | 0.2805 | 0.0046 | 0.1049 |
| 9.3266 | 45.0 | 16650 | 9.2278 | 0.0331 | 0.0561 | 0.0313 | 0.0006 | 0.0366 | 0.034 | 0.082 | 0.1214 | 0.1304 | 0.025 | 0.1237 | 0.1223 | 0.0 | 0.0 | 0.0257 | 0.1248 | 0.0604 | 0.1146 | 0.0238 | 0.3049 | 0.0292 | 0.1285 | 0.0 | 0.0 | 0.0421 | 0.3121 | 0.0923 | 0.1465 | 0.0 | 0.0 | 0.0011 | 0.0333 | 0.1131 | 0.2893 | 0.0097 | 0.1115 |
| 9.2773 | 46.0 | 17020 | 9.3229 | 0.0293 | 0.0493 | 0.027 | 0.0021 | 0.0254 | 0.029 | 0.0832 | 0.1154 | 0.1215 | 0.0469 | 0.1054 | 0.1182 | 0.0 | 0.0 | 0.0138 | 0.1083 | 0.0263 | 0.0756 | 0.0292 | 0.2537 | 0.0233 | 0.1287 | 0.0005 | 0.0171 | 0.0376 | 0.2953 | 0.1225 | 0.1465 | 0.0 | 0.0 | 0.0094 | 0.1397 | 0.088 | 0.2295 | 0.0011 | 0.0639 |
| 9.2454 | 47.0 | 17390 | 9.2998 | 0.0295 | 0.0464 | 0.0278 | 0.0006 | 0.0219 | 0.029 | 0.0696 | 0.0968 | 0.1048 | 0.0266 | 0.0669 | 0.1006 | 0.0 | 0.0 | 0.0129 | 0.0902 | 0.0582 | 0.0805 | 0.0066 | 0.1659 | 0.0175 | 0.1099 | 0.0 | 0.0 | 0.0262 | 0.2517 | 0.1408 | 0.1817 | 0.0 | 0.0 | 0.006 | 0.0846 | 0.0816 | 0.1832 | 0.0041 | 0.1098 |
| 9.262 | 48.0 | 17760 | 9.2243 | 0.0332 | 0.0562 | 0.0344 | 0.0052 | 0.0275 | 0.032 | 0.1041 | 0.1389 | 0.1464 | 0.0516 | 0.0885 | 0.1446 | 0.0 | 0.0 | 0.007 | 0.0678 | 0.0823 | 0.2122 | 0.0141 | 0.2463 | 0.0199 | 0.1298 | 0.0043 | 0.0585 | 0.0472 | 0.3023 | 0.115 | 0.1817 | 0.0 | 0.0 | 0.0186 | 0.1744 | 0.0781 | 0.2477 | 0.0116 | 0.1361 |
| 9.1863 | 49.0 | 18130 | 9.6462 | 0.0309 | 0.0451 | 0.0332 | 0.0023 | 0.0241 | 0.0301 | 0.0643 | 0.0864 | 0.0926 | 0.0406 | 0.0704 | 0.0827 | 0.0 | 0.0 | 0.0014 | 0.0309 | 0.1062 | 0.1805 | 0.0068 | 0.1415 | 0.0154 | 0.0776 | 0.0005 | 0.0244 | 0.037 | 0.2211 | 0.1068 | 0.1282 | 0.0 | 0.0 | 0.0055 | 0.0577 | 0.0891 | 0.1866 | 0.0022 | 0.0623 |
| 9.1698 | 50.0 | 18500 | 9.6507 | 0.0218 | 0.0384 | 0.0217 | 0.0045 | 0.0294 | 0.0191 | 0.0472 | 0.0703 | 0.078 | 0.0578 | 0.0804 | 0.0658 | 0.0 | 0.0 | 0.0023 | 0.0387 | 0.0941 | 0.1 | 0.0036 | 0.1049 | 0.0347 | 0.1338 | 0.0 | 0.0 | 0.0231 | 0.1792 | 0.0022 | 0.0141 | 0.0 | 0.0 | 0.0039 | 0.0782 | 0.0946 | 0.2235 | 0.0027 | 0.0639 |
| 9.1867 | 51.0 | 18870 | 9.5428 | 0.0386 | 0.0562 | 0.0417 | 0.0014 | 0.0205 | 0.0376 | 0.0816 | 0.1019 | 0.1087 | 0.0375 | 0.0591 | 0.1048 | 0.0 | 0.0 | 0.0024 | 0.0483 | 0.2049 | 0.2439 | 0.0077 | 0.1756 | 0.0179 | 0.0882 | 0.0 | 0.0 | 0.0243 | 0.2265 | 0.1166 | 0.1606 | 0.0 | 0.0 | 0.0053 | 0.1026 | 0.0769 | 0.1785 | 0.0072 | 0.0803 |
| 9.1318 | 52.0 | 19240 | 9.5407 | 0.0291 | 0.0452 | 0.03 | 0.0062 | 0.025 | 0.0285 | 0.0783 | 0.1257 | 0.1372 | 0.05 | 0.1063 | 0.1291 | 0.0 | 0.0 | 0.0152 | 0.0735 | 0.0474 | 0.122 | 0.0053 | 0.2268 | 0.0322 | 0.1204 | 0.0003 | 0.0171 | 0.0349 | 0.4178 | 0.1452 | 0.3127 | 0.0 | 0.0 | 0.0054 | 0.1423 | 0.0637 | 0.2074 | 0.0 | 0.0066 |
| 9.0889 | 53.0 | 19610 | 9.6766 | 0.0339 | 0.0508 | 0.0351 | 0.0024 | 0.0284 | 0.03 | 0.0858 | 0.1201 | 0.1322 | 0.0594 | 0.0999 | 0.1215 | 0.0 | 0.0 | 0.0046 | 0.0565 | 0.1028 | 0.2 | 0.0033 | 0.1512 | 0.043 | 0.1464 | 0.0 | 0.0 | 0.0205 | 0.3248 | 0.1247 | 0.262 | 0.0 | 0.0 | 0.0173 | 0.2051 | 0.0902 | 0.1906 | 0.0002 | 0.0492 |
| 9.0375 | 54.0 | 19980 | 9.7538 | 0.0256 | 0.0356 | 0.027 | 0.001 | 0.0204 | 0.023 | 0.0707 | 0.0915 | 0.0963 | 0.0281 | 0.0715 | 0.0881 | 0.0 | 0.0 | 0.0013 | 0.0272 | 0.0359 | 0.0634 | 0.005 | 0.1073 | 0.0067 | 0.0387 | 0.0013 | 0.0463 | 0.0176 | 0.243 | 0.1204 | 0.1887 | 0.0 | 0.0 | 0.0226 | 0.1936 | 0.0871 | 0.1584 | 0.0094 | 0.0885 |
| 9.0643 | 55.0 | 20350 | 9.5402 | 0.0319 | 0.0509 | 0.0285 | 0.0039 | 0.0368 | 0.0286 | 0.0783 | 0.1003 | 0.1066 | 0.0625 | 0.0697 | 0.0997 | 0.0 | 0.0 | 0.0039 | 0.0489 | 0.0718 | 0.1951 | 0.0088 | 0.1756 | 0.0244 | 0.0977 | 0.0003 | 0.0195 | 0.0318 | 0.2346 | 0.1402 | 0.1592 | 0.0 | 0.0 | 0.013 | 0.1295 | 0.0849 | 0.1826 | 0.0034 | 0.0361 |
| 8.9988 | 56.0 | 20720 | 9.6811 | 0.0233 | 0.0349 | 0.0232 | 0.0023 | 0.0251 | 0.0197 | 0.0639 | 0.0836 | 0.0883 | 0.0422 | 0.0613 | 0.0813 | 0.0 | 0.0 | 0.0032 | 0.0415 | 0.0612 | 0.1463 | 0.0038 | 0.139 | 0.0142 | 0.0743 | 0.0002 | 0.0122 | 0.0173 | 0.1836 | 0.0827 | 0.1986 | 0.0 | 0.0 | 0.0111 | 0.0833 | 0.085 | 0.1523 | 0.0007 | 0.0279 |
| 8.9973 | 57.0 | 21090 | 9.7995 | 0.0171 | 0.0243 | 0.017 | 0.0013 | 0.0237 | 0.0173 | 0.0483 | 0.0637 | 0.0679 | 0.025 | 0.0633 | 0.0639 | 0.0 | 0.0 | 0.0016 | 0.0328 | 0.0099 | 0.0244 | 0.0042 | 0.1439 | 0.0067 | 0.0492 | 0.0 | 0.0 | 0.0191 | 0.1775 | 0.0765 | 0.1282 | 0.0 | 0.0 | 0.0184 | 0.0795 | 0.0684 | 0.1591 | 0.0004 | 0.0197 |
| 9.0076 | 58.0 | 21460 | 9.7027 | 0.0269 | 0.0424 | 0.0284 | 0.0028 | 0.035 | 0.0233 | 0.0631 | 0.0909 | 0.0977 | 0.0437 | 0.0709 | 0.0925 | 0.0 | 0.0 | 0.0027 | 0.0359 | 0.057 | 0.1293 | 0.003 | 0.122 | 0.0303 | 0.1014 | 0.0005 | 0.0244 | 0.0204 | 0.1943 | 0.1132 | 0.1944 | 0.0 | 0.0 | 0.0101 | 0.1333 | 0.0855 | 0.2168 | 0.0003 | 0.0213 |
| 8.922 | 59.0 | 21830 | 9.6235 | 0.0284 | 0.0395 | 0.0292 | 0.0003 | 0.0208 | 0.0293 | 0.0663 | 0.0858 | 0.0912 | 0.0234 | 0.051 | 0.0886 | 0.0 | 0.0 | 0.0075 | 0.0246 | 0.112 | 0.1561 | 0.0011 | 0.0561 | 0.0062 | 0.0547 | 0.0 | 0.0 | 0.0194 | 0.1762 | 0.0963 | 0.1761 | 0.0 | 0.0 | 0.0124 | 0.1615 | 0.0652 | 0.1993 | 0.021 | 0.0902 |
| 8.9279 | 60.0 | 22200 | 9.7393 | 0.026 | 0.0377 | 0.0264 | 0.0011 | 0.0331 | 0.0238 | 0.0735 | 0.0985 | 0.1062 | 0.0172 | 0.0673 | 0.1023 | 0.0 | 0.0 | 0.002 | 0.0346 | 0.0569 | 0.1829 | 0.0035 | 0.1854 | 0.0152 | 0.0519 | 0.0 | 0.0 | 0.0229 | 0.2862 | 0.116 | 0.238 | 0.0 | 0.0 | 0.0149 | 0.0795 | 0.0806 | 0.196 | 0.0002 | 0.0197 |
| 8.9095 | 61.0 | 22570 | 9.8547 | 0.0217 | 0.027 | 0.0232 | 0.0 | 0.0144 | 0.0238 | 0.0602 | 0.0716 | 0.0756 | 0.0 | 0.0448 | 0.079 | 0.0 | 0.0 | 0.0008 | 0.0091 | 0.0736 | 0.122 | 0.0056 | 0.1805 | 0.0025 | 0.0177 | 0.0027 | 0.0463 | 0.0247 | 0.1574 | 0.1103 | 0.1352 | 0.0 | 0.0 | 0.0161 | 0.0513 | 0.0228 | 0.1396 | 0.0009 | 0.0475 |
| 8.9092 | 62.0 | 22940 | 9.9259 | 0.0267 | 0.037 | 0.028 | 0.001 | 0.0178 | 0.0249 | 0.0531 | 0.0699 | 0.075 | 0.0078 | 0.0465 | 0.0724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1187 | 0.1463 | 0.0043 | 0.122 | 0.0031 | 0.0172 | 0.0 | 0.0 | 0.0138 | 0.2507 | 0.0969 | 0.1225 | 0.0 | 0.0 | 0.0041 | 0.0359 | 0.0709 | 0.1383 | 0.009 | 0.0672 |
| 8.8513 | 63.0 | 23310 | 9.7741 | 0.0312 | 0.0436 | 0.0315 | 0.0018 | 0.0281 | 0.0271 | 0.0666 | 0.0871 | 0.0945 | 0.0188 | 0.069 | 0.0909 | 0.0 | 0.0 | 0.0079 | 0.0067 | 0.1297 | 0.1683 | 0.0029 | 0.1293 | 0.0138 | 0.0491 | 0.0 | 0.0 | 0.0117 | 0.2473 | 0.1139 | 0.1606 | 0.0 | 0.0 | 0.0159 | 0.1179 | 0.0787 | 0.2483 | 0.0 | 0.0066 |
| 8.9041 | 64.0 | 23680 | 9.6445 | 0.0431 | 0.0625 | 0.048 | 0.0034 | 0.0285 | 0.0418 | 0.0988 | 0.1323 | 0.1436 | 0.0203 | 0.0805 | 0.1472 | 0.0 | 0.0 | 0.002 | 0.0291 | 0.2182 | 0.3293 | 0.0042 | 0.1707 | 0.0118 | 0.0591 | 0.0 | 0.0 | 0.0238 | 0.3701 | 0.0875 | 0.1592 | 0.0 | 0.0 | 0.0498 | 0.2295 | 0.1155 | 0.2993 | 0.0046 | 0.077 |
| 8.8055 | 65.0 | 24050 | 9.7573 | 0.0251 | 0.0332 | 0.0265 | 0.0037 | 0.021 | 0.0259 | 0.0628 | 0.0784 | 0.0816 | 0.0391 | 0.0438 | 0.0829 | 0.0 | 0.0 | 0.0004 | 0.0074 | 0.0937 | 0.1463 | 0.0025 | 0.1122 | 0.0042 | 0.0272 | 0.0 | 0.0 | 0.0226 | 0.2319 | 0.1269 | 0.169 | 0.0 | 0.0 | 0.0148 | 0.1282 | 0.0349 | 0.1195 | 0.0007 | 0.0377 |
| 8.7948 | 66.0 | 24420 | 10.0745 | 0.0111 | 0.0152 | 0.0118 | 0.0034 | 0.0128 | 0.011 | 0.0306 | 0.0378 | 0.0415 | 0.0219 | 0.0313 | 0.0419 | 0.0 | 0.0 | 0.0001 | 0.0022 | 0.0677 | 0.1024 | 0.0004 | 0.0317 | 0.005 | 0.0209 | 0.0 | 0.0 | 0.0076 | 0.1131 | 0.0209 | 0.0507 | 0.0 | 0.0 | 0.0035 | 0.0679 | 0.0281 | 0.1087 | 0.0 | 0.0 |
| 8.8053 | 67.0 | 24790 | 9.9271 | 0.0215 | 0.0312 | 0.0204 | 0.0002 | 0.0211 | 0.0175 | 0.0428 | 0.0615 | 0.0686 | 0.0156 | 0.0586 | 0.0617 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0433 | 0.0854 | 0.0011 | 0.0585 | 0.0161 | 0.0684 | 0.0 | 0.0 | 0.022 | 0.2477 | 0.0743 | 0.0915 | 0.0 | 0.0 | 0.011 | 0.0769 | 0.0898 | 0.1872 | 0.0 | 0.0066 |
| 8.7587 | 68.0 | 25160 | 9.8115 | 0.0268 | 0.0435 | 0.0266 | 0.0072 | 0.0268 | 0.0244 | 0.0604 | 0.0869 | 0.0964 | 0.0453 | 0.0655 | 0.0881 | 0.0 | 0.0 | 0.0053 | 0.0196 | 0.124 | 0.2317 | 0.0011 | 0.0732 | 0.0246 | 0.0941 | 0.0 | 0.0 | 0.0123 | 0.2258 | 0.0593 | 0.2127 | 0.0 | 0.0 | 0.0202 | 0.1064 | 0.0746 | 0.1866 | 0.0 | 0.0066 |
| 8.7455 | 69.0 | 25530 | 9.7010 | 0.0285 | 0.0405 | 0.0285 | 0.0007 | 0.0219 | 0.0257 | 0.0694 | 0.1025 | 0.1148 | 0.0312 | 0.0554 | 0.1173 | 0.0 | 0.0 | 0.0004 | 0.0189 | 0.1016 | 0.1634 | 0.0022 | 0.1122 | 0.0148 | 0.0887 | 0.0007 | 0.022 | 0.0184 | 0.3886 | 0.0835 | 0.2761 | 0.0 | 0.0 | 0.007 | 0.059 | 0.0748 | 0.1832 | 0.0382 | 0.0656 |
| 8.7372 | 70.0 | 25900 | 9.9303 | 0.0186 | 0.0273 | 0.0202 | 0.0005 | 0.0238 | 0.0159 | 0.0592 | 0.0948 | 0.104 | 0.0063 | 0.0482 | 0.0999 | 0.0 | 0.0 | 0.0001 | 0.0067 | 0.0445 | 0.2341 | 0.0015 | 0.0732 | 0.008 | 0.044 | 0.0 | 0.0 | 0.0136 | 0.2732 | 0.0657 | 0.3648 | 0.0 | 0.0 | 0.0167 | 0.1 | 0.0728 | 0.1369 | 0.0006 | 0.0148 |
| 8.7237 | 71.0 | 26270 | 9.6578 | 0.021 | 0.0298 | 0.0206 | 0.0019 | 0.0258 | 0.0209 | 0.074 | 0.0994 | 0.1079 | 0.0375 | 0.0636 | 0.1093 | 0.0 | 0.0 | 0.0013 | 0.023 | 0.0215 | 0.2463 | 0.0012 | 0.0951 | 0.0136 | 0.071 | 0.0 | 0.0 | 0.0244 | 0.2919 | 0.1162 | 0.2465 | 0.0 | 0.0 | 0.0322 | 0.1128 | 0.0399 | 0.155 | 0.0012 | 0.0525 |
| 8.7217 | 72.0 | 26640 | 9.7369 | 0.018 | 0.0266 | 0.019 | 0.0012 | 0.0199 | 0.0197 | 0.0799 | 0.1203 | 0.1347 | 0.0375 | 0.0669 | 0.1371 | 0.0 | 0.0 | 0.0004 | 0.0304 | 0.0606 | 0.2585 | 0.0027 | 0.1366 | 0.0083 | 0.0572 | 0.0009 | 0.039 | 0.0108 | 0.346 | 0.0483 | 0.4183 | 0.0 | 0.0 | 0.0222 | 0.1256 | 0.0619 | 0.1852 | 0.0001 | 0.0197 |
| 8.6641 | 73.0 | 27010 | 9.9652 | 0.0264 | 0.0358 | 0.0298 | 0.0001 | 0.0185 | 0.0269 | 0.0618 | 0.0892 | 0.098 | 0.0031 | 0.0385 | 0.0992 | 0.0 | 0.0 | 0.0002 | 0.0165 | 0.1918 | 0.3293 | 0.0 | 0.0 | 0.0154 | 0.0585 | 0.0 | 0.0 | 0.0104 | 0.1812 | 0.0483 | 0.3606 | 0.0 | 0.0 | 0.0264 | 0.0923 | 0.0243 | 0.1228 | 0.0001 | 0.0148 |
| 8.678 | 74.0 | 27380 | 10.1454 | 0.0076 | 0.0117 | 0.0081 | 0.0 | 0.016 | 0.0065 | 0.0259 | 0.038 | 0.043 | 0.0 | 0.042 | 0.0349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0473 | 0.2049 | 0.0 | 0.0 | 0.0084 | 0.0446 | 0.0 | 0.0 | 0.0019 | 0.0795 | 0.0009 | 0.0507 | 0.0 | 0.0 | 0.0001 | 0.0192 | 0.0324 | 0.1168 | 0.0 | 0.0 |
| 8.6168 | 75.0 | 27750 | 9.9027 | 0.0183 | 0.025 | 0.0203 | 0.0 | 0.0167 | 0.0167 | 0.0395 | 0.0566 | 0.0614 | 0.0 | 0.053 | 0.0575 | 0.0 | 0.0 | 0.0001 | 0.0054 | 0.0886 | 0.1463 | 0.0006 | 0.0488 | 0.0089 | 0.0458 | 0.0 | 0.0 | 0.0026 | 0.1 | 0.0668 | 0.1915 | 0.0 | 0.0 | 0.0007 | 0.05 | 0.0512 | 0.149 | 0.0 | 0.0 |
| 8.6125 | 76.0 | 28120 | 9.9229 | 0.0122 | 0.0174 | 0.0122 | 0.0002 | 0.0116 | 0.0127 | 0.0328 | 0.0424 | 0.0453 | 0.0109 | 0.0303 | 0.043 | 0.0 | 0.0 | 0.0002 | 0.0107 | 0.0748 | 0.1463 | 0.0008 | 0.0415 | 0.0051 | 0.0204 | 0.0 | 0.0 | 0.0061 | 0.0883 | 0.0174 | 0.0972 | 0.0 | 0.0 | 0.021 | 0.0487 | 0.0205 | 0.0745 | 0.0 | 0.0164 |
| 8.6135 | 77.0 | 28490 | 10.0929 | 0.0258 | 0.0339 | 0.0282 | 0.0017 | 0.0262 | 0.0235 | 0.038 | 0.053 | 0.0581 | 0.0375 | 0.0515 | 0.0497 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.1101 | 0.1268 | 0.0003 | 0.0268 | 0.0119 | 0.0711 | 0.0 | 0.0 | 0.0076 | 0.1017 | 0.0948 | 0.1535 | 0.0 | 0.0 | 0.037 | 0.0949 | 0.0481 | 0.1195 | 0.0 | 0.0 |
| 8.5801 | 78.0 | 28860 | 10.0466 | 0.0096 | 0.0145 | 0.0087 | 0.0007 | 0.0132 | 0.0106 | 0.0343 | 0.0461 | 0.0496 | 0.0203 | 0.0483 | 0.0429 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0507 | 0.1244 | 0.0006 | 0.039 | 0.0182 | 0.0602 | 0.0 | 0.0 | 0.0061 | 0.099 | 0.0239 | 0.1056 | 0.0 | 0.0 | 0.0009 | 0.0679 | 0.0152 | 0.0966 | 0.0 | 0.0 |
| 8.5705 | 79.0 | 29230 | 9.8399 | 0.018 | 0.0274 | 0.018 | 0.0006 | 0.0204 | 0.0154 | 0.049 | 0.068 | 0.0736 | 0.0188 | 0.0526 | 0.0666 | 0.0 | 0.0 | 0.0003 | 0.0159 | 0.0559 | 0.1512 | 0.001 | 0.0707 | 0.0199 | 0.0832 | 0.0 | 0.0 | 0.0051 | 0.1379 | 0.057 | 0.1606 | 0.0 | 0.0 | 0.0225 | 0.1026 | 0.0543 | 0.1315 | 0.0001 | 0.0295 |
| 8.5316 | 80.0 | 29600 | 9.7523 | 0.0166 | 0.0232 | 0.0182 | 0.0007 | 0.0143 | 0.0174 | 0.0475 | 0.0591 | 0.0631 | 0.0203 | 0.0422 | 0.055 | 0.0 | 0.0 | 0.0042 | 0.0172 | 0.0329 | 0.1732 | 0.001 | 0.0512 | 0.0081 | 0.045 | 0.002 | 0.0463 | 0.0031 | 0.0658 | 0.0766 | 0.1451 | 0.0 | 0.0 | 0.0465 | 0.1026 | 0.025 | 0.1107 | 0.0 | 0.0 |
| 8.5472 | 81.0 | 29970 | 9.6870 | 0.0188 | 0.027 | 0.0185 | 0.0097 | 0.0211 | 0.0195 | 0.0652 | 0.0858 | 0.0924 | 0.0531 | 0.0582 | 0.0884 | 0.0 | 0.0 | 0.0054 | 0.0315 | 0.0083 | 0.1463 | 0.0039 | 0.1463 | 0.0168 | 0.1061 | 0.0008 | 0.022 | 0.0126 | 0.1399 | 0.108 | 0.2141 | 0.0 | 0.0 | 0.0332 | 0.1397 | 0.0348 | 0.1302 | 0.0023 | 0.0328 |
| 8.5652 | 82.0 | 30340 | 9.8145 | 0.0165 | 0.0293 | 0.0164 | 0.0041 | 0.0308 | 0.0138 | 0.0538 | 0.0833 | 0.0932 | 0.0359 | 0.0599 | 0.0881 | 0.0 | 0.0 | 0.005 | 0.0417 | 0.0697 | 0.1561 | 0.0018 | 0.0878 | 0.0275 | 0.1092 | 0.0 | 0.0 | 0.0093 | 0.1433 | 0.0235 | 0.307 | 0.0 | 0.0 | 0.005 | 0.0756 | 0.0566 | 0.1812 | 0.0 | 0.0164 |
| 8.519 | 83.0 | 30710 | 10.0466 | 0.0152 | 0.022 | 0.0157 | 0.0088 | 0.0159 | 0.0129 | 0.041 | 0.0582 | 0.0651 | 0.0562 | 0.0645 | 0.0588 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.0849 | 0.1512 | 0.0038 | 0.122 | 0.0204 | 0.0842 | 0.0 | 0.0 | 0.0022 | 0.0913 | 0.0261 | 0.131 | 0.0 | 0.0 | 0.0023 | 0.0538 | 0.0428 | 0.1443 | 0.0 | 0.0 |
| 8.4927 | 84.0 | 31080 | 10.1072 | 0.0059 | 0.009 | 0.0055 | 0.0008 | 0.0076 | 0.006 | 0.0321 | 0.0452 | 0.05 | 0.0063 | 0.037 | 0.0477 | 0.0 | 0.0 | 0.0005 | 0.0163 | 0.0395 | 0.0854 | 0.0026 | 0.0878 | 0.0149 | 0.0622 | 0.0 | 0.0 | 0.0048 | 0.103 | 0.0016 | 0.1014 | 0.0 | 0.0 | 0.0004 | 0.0564 | 0.0063 | 0.0872 | 0.0 | 0.0 |
| 8.4818 | 85.0 | 31450 | 9.9963 | 0.0194 | 0.0286 | 0.0224 | 0.002 | 0.0136 | 0.0219 | 0.0479 | 0.062 | 0.0665 | 0.0203 | 0.0443 | 0.0636 | 0.0 | 0.0 | 0.0039 | 0.0202 | 0.1351 | 0.1756 | 0.0027 | 0.078 | 0.0132 | 0.0512 | 0.0 | 0.0 | 0.0106 | 0.1473 | 0.0285 | 0.1197 | 0.0 | 0.0 | 0.0052 | 0.0551 | 0.0335 | 0.1195 | 0.0004 | 0.0311 |
| 8.4424 | 86.0 | 31820 | 10.0308 | 0.0136 | 0.0199 | 0.0151 | 0.0004 | 0.0215 | 0.0143 | 0.0425 | 0.0559 | 0.061 | 0.0078 | 0.0425 | 0.0581 | 0.0 | 0.0 | 0.0014 | 0.0111 | 0.0667 | 0.1659 | 0.0022 | 0.0927 | 0.0062 | 0.0249 | 0.0 | 0.0 | 0.0136 | 0.1268 | 0.002 | 0.0803 | 0.0 | 0.0 | 0.0158 | 0.0782 | 0.0556 | 0.1517 | 0.0 | 0.0 |
| 8.4473 | 87.0 | 32190 | 9.9931 | 0.0183 | 0.0261 | 0.0192 | 0.0148 | 0.013 | 0.0171 | 0.0553 | 0.0803 | 0.0878 | 0.0453 | 0.0426 | 0.0875 | 0.0 | 0.0 | 0.0034 | 0.035 | 0.0716 | 0.1634 | 0.0079 | 0.1268 | 0.0158 | 0.0631 | 0.0 | 0.0 | 0.0063 | 0.1644 | 0.0275 | 0.2676 | 0.0 | 0.0 | 0.0467 | 0.1128 | 0.0399 | 0.1201 | 0.0 | 0.0 |
| 8.4347 | 88.0 | 32560 | 10.0669 | 0.0168 | 0.0229 | 0.0186 | 0.0028 | 0.0133 | 0.0161 | 0.0422 | 0.0564 | 0.0626 | 0.0219 | 0.0412 | 0.0616 | 0.0 | 0.0 | 0.0002 | 0.0093 | 0.125 | 0.2293 | 0.0005 | 0.0341 | 0.0105 | 0.0499 | 0.0 | 0.0 | 0.0037 | 0.1225 | 0.0239 | 0.1042 | 0.0 | 0.0 | 0.0009 | 0.0577 | 0.0318 | 0.1289 | 0.0045 | 0.0148 |
| 8.3801 | 89.0 | 32930 | 9.9411 | 0.0135 | 0.0187 | 0.0145 | 0.0003 | 0.0124 | 0.0142 | 0.0494 | 0.0633 | 0.0675 | 0.0109 | 0.0473 | 0.0634 | 0.0 | 0.0 | 0.0012 | 0.0237 | 0.0728 | 0.1707 | 0.0008 | 0.0512 | 0.0118 | 0.0608 | 0.0 | 0.0 | 0.0083 | 0.146 | 0.0269 | 0.1577 | 0.0 | 0.0 | 0.0121 | 0.0769 | 0.0279 | 0.0913 | 0.0002 | 0.0311 |
| 8.4044 | 90.0 | 33300 | 10.0605 | 0.0189 | 0.0272 | 0.0202 | 0.0008 | 0.0186 | 0.0151 | 0.0426 | 0.0583 | 0.0687 | 0.0156 | 0.0657 | 0.058 | 0.0 | 0.0 | 0.0004 | 0.01 | 0.1102 | 0.1268 | 0.004 | 0.1293 | 0.0322 | 0.0968 | 0.0 | 0.0 | 0.0109 | 0.1191 | 0.0049 | 0.0775 | 0.0 | 0.0 | 0.0057 | 0.0564 | 0.0589 | 0.194 | 0.0 | 0.0148 |
| 8.3784 | 91.0 | 33670 | 9.7634 | 0.0288 | 0.0386 | 0.0325 | 0.0044 | 0.0178 | 0.0273 | 0.0703 | 0.0914 | 0.098 | 0.0562 | 0.0551 | 0.0926 | 0.0 | 0.0 | 0.0011 | 0.0209 | 0.2142 | 0.3122 | 0.003 | 0.0951 | 0.0202 | 0.0944 | 0.0 | 0.0 | 0.0074 | 0.146 | 0.0144 | 0.1873 | 0.0 | 0.0 | 0.0323 | 0.0974 | 0.048 | 0.1933 | 0.0046 | 0.0295 |
| 8.375 | 92.0 | 34040 | 9.7371 | 0.0296 | 0.0382 | 0.0319 | 0.0028 | 0.0143 | 0.0288 | 0.0565 | 0.0765 | 0.0826 | 0.0234 | 0.0514 | 0.0747 | 0.0 | 0.0 | 0.0009 | 0.0154 | 0.1073 | 0.161 | 0.0033 | 0.1 | 0.0179 | 0.068 | 0.0 | 0.0 | 0.0064 | 0.1463 | 0.1413 | 0.2211 | 0.0 | 0.0 | 0.0399 | 0.1115 | 0.038 | 0.1678 | 0.0 | 0.0 |
| 8.3001 | 93.0 | 34410 | 10.1069 | 0.0272 | 0.034 | 0.03 | 0.0017 | 0.0131 | 0.0286 | 0.0594 | 0.0741 | 0.0812 | 0.0266 | 0.0447 | 0.0731 | 0.0 | 0.0 | 0.0002 | 0.0115 | 0.1459 | 0.1902 | 0.0095 | 0.1512 | 0.0103 | 0.0458 | 0.0 | 0.0 | 0.0057 | 0.1094 | 0.0864 | 0.1099 | 0.0 | 0.0 | 0.0366 | 0.1808 | 0.0318 | 0.1758 | 0.0 | 0.0 |
| 8.3541 | 94.0 | 34780 | 9.8205 | 0.0239 | 0.0337 | 0.0261 | 0.0028 | 0.0212 | 0.0218 | 0.0624 | 0.0966 | 0.1077 | 0.0281 | 0.0537 | 0.1056 | 0.0 | 0.0 | 0.0005 | 0.0291 | 0.1419 | 0.3049 | 0.0075 | 0.1927 | 0.0233 | 0.1127 | 0.0 | 0.0 | 0.0101 | 0.1319 | 0.0271 | 0.1662 | 0.0 | 0.0 | 0.0304 | 0.1654 | 0.0456 | 0.1745 | 0.0 | 0.0148 |
| 8.283 | 95.0 | 35150 | 9.8996 | 0.0214 | 0.0287 | 0.0232 | 0.0033 | 0.0182 | 0.0208 | 0.0562 | 0.0747 | 0.0845 | 0.0281 | 0.0764 | 0.0785 | 0.0 | 0.0 | 0.0004 | 0.023 | 0.1103 | 0.2244 | 0.0074 | 0.1756 | 0.0207 | 0.086 | 0.0 | 0.0 | 0.0141 | 0.149 | 0.0014 | 0.0366 | 0.0 | 0.0 | 0.0584 | 0.1269 | 0.0438 | 0.1926 | 0.0 | 0.0 |
| 8.2788 | 96.0 | 35520 | 9.9741 | 0.0263 | 0.0337 | 0.0302 | 0.0003 | 0.0146 | 0.0265 | 0.0671 | 0.0793 | 0.0858 | 0.0063 | 0.0224 | 0.0868 | 0.0 | 0.0 | 0.0005 | 0.0248 | 0.1521 | 0.2341 | 0.0044 | 0.1268 | 0.0081 | 0.042 | 0.0134 | 0.022 | 0.0114 | 0.0953 | 0.0781 | 0.2169 | 0.0 | 0.0 | 0.0326 | 0.1718 | 0.0143 | 0.0812 | 0.0008 | 0.0148 |
| 8.2828 | 97.0 | 35890 | 9.8687 | 0.0181 | 0.0241 | 0.0202 | 0.0006 | 0.0115 | 0.0169 | 0.0535 | 0.0699 | 0.0752 | 0.0078 | 0.0393 | 0.0755 | 0.0 | 0.0 | 0.0008 | 0.0291 | 0.0823 | 0.2439 | 0.0006 | 0.039 | 0.0047 | 0.0213 | 0.0 | 0.0 | 0.0075 | 0.1698 | 0.0707 | 0.1887 | 0.0 | 0.0 | 0.011 | 0.0551 | 0.0398 | 0.1409 | 0.0 | 0.0148 |
| 8.2402 | 98.0 | 36260 | 9.9006 | 0.0138 | 0.0198 | 0.0135 | 0.0005 | 0.0233 | 0.0127 | 0.056 | 0.081 | 0.0886 | 0.0078 | 0.0563 | 0.0875 | 0.0 | 0.0 | 0.001 | 0.0563 | 0.0388 | 0.1756 | 0.0032 | 0.122 | 0.0101 | 0.052 | 0.0 | 0.0 | 0.0177 | 0.2436 | 0.0194 | 0.1634 | 0.0 | 0.0 | 0.0339 | 0.0577 | 0.0412 | 0.1926 | 0.0 | 0.0 |
| 8.2909 | 99.0 | 36630 | 10.1356 | 0.0134 | 0.0199 | 0.0138 | 0.0011 | 0.015 | 0.0132 | 0.0382 | 0.0493 | 0.0527 | 0.0078 | 0.0359 | 0.0497 | 0.0 | 0.0 | 0.0002 | 0.0083 | 0.0641 | 0.1073 | 0.0044 | 0.0805 | 0.0102 | 0.0443 | 0.0 | 0.0 | 0.0048 | 0.103 | 0.0301 | 0.1268 | 0.0 | 0.0 | 0.0044 | 0.0462 | 0.0422 | 0.1161 | 0.0 | 0.0 |
| 8.2521 | 100.0 | 37000 | 10.0207 | 0.0173 | 0.0232 | 0.0177 | 0.0106 | 0.0254 | 0.0179 | 0.0485 | 0.0685 | 0.0748 | 0.0391 | 0.0502 | 0.0764 | 0.0 | 0.0 | 0.0013 | 0.0259 | 0.0556 | 0.1707 | 0.0024 | 0.0976 | 0.0115 | 0.0685 | 0.0 | 0.0 | 0.0189 | 0.1497 | 0.0485 | 0.1845 | 0.0 | 0.0 | 0.0398 | 0.0859 | 0.029 | 0.1007 | 0.0 | 0.0148 |
| 8.2071 | 101.0 | 37370 | 9.8349 | 0.021 | 0.0274 | 0.0223 | 0.004 | 0.0154 | 0.0209 | 0.0674 | 0.0953 | 0.102 | 0.0156 | 0.048 | 0.1045 | 0.0 | 0.0 | 0.0007 | 0.0304 | 0.1062 | 0.3488 | 0.005 | 0.0902 | 0.0116 | 0.0685 | 0.0 | 0.0 | 0.0098 | 0.1688 | 0.0872 | 0.2901 | 0.0 | 0.0 | 0.0117 | 0.0821 | 0.0191 | 0.1154 | 0.0 | 0.0295 |
| 8.2182 | 102.0 | 37740 | 10.0440 | 0.0143 | 0.0181 | 0.0154 | 0.0008 | 0.0159 | 0.0139 | 0.0444 | 0.0578 | 0.0636 | 0.0063 | 0.0378 | 0.0607 | 0.0 | 0.0 | 0.0019 | 0.0148 | 0.078 | 0.2634 | 0.002 | 0.0756 | 0.0061 | 0.0494 | 0.0 | 0.0 | 0.0078 | 0.0772 | 0.0301 | 0.0958 | 0.0 | 0.0 | 0.0388 | 0.0756 | 0.0075 | 0.1114 | 0.0 | 0.0 |
| 8.1713 | 103.0 | 38110 | 10.0068 | 0.0156 | 0.0193 | 0.0174 | 0.0005 | 0.0068 | 0.0164 | 0.0508 | 0.059 | 0.0636 | 0.0094 | 0.0227 | 0.0684 | 0.0 | 0.0 | 0.0017 | 0.0102 | 0.1495 | 0.2902 | 0.0029 | 0.0927 | 0.0046 | 0.0111 | 0.0 | 0.0 | 0.004 | 0.0745 | 0.0086 | 0.1085 | 0.0 | 0.0 | 0.0121 | 0.0474 | 0.0038 | 0.0993 | 0.0001 | 0.0295 |
| 8.1746 | 104.0 | 38480 | 10.1219 | 0.0166 | 0.0214 | 0.0189 | 0.0007 | 0.0199 | 0.0133 | 0.0264 | 0.0351 | 0.0384 | 0.0078 | 0.0435 | 0.0306 | 0.0 | 0.0 | 0.0001 | 0.0048 | 0.0988 | 0.2122 | 0.0 | 0.0 | 0.0089 | 0.0402 | 0.0 | 0.0 | 0.0065 | 0.0289 | 0.0001 | 0.007 | 0.0 | 0.0 | 0.0399 | 0.0487 | 0.045 | 0.1188 | 0.0 | 0.0 |
| 8.1573 | 105.0 | 38850 | 9.8900 | 0.0232 | 0.0297 | 0.0277 | 0.0006 | 0.0088 | 0.0237 | 0.0512 | 0.0652 | 0.0694 | 0.0078 | 0.0307 | 0.0668 | 0.0 | 0.0 | 0.0002 | 0.0124 | 0.1197 | 0.2195 | 0.0 | 0.0 | 0.009 | 0.0246 | 0.0 | 0.0 | 0.0026 | 0.053 | 0.0779 | 0.2535 | 0.0 | 0.0 | 0.0422 | 0.1128 | 0.0255 | 0.1141 | 0.0009 | 0.0426 |
| 8.1368 | 106.0 | 39220 | 9.9437 | 0.0117 | 0.0153 | 0.0137 | 0.0 | 0.0059 | 0.012 | 0.0383 | 0.0516 | 0.0565 | 0.0 | 0.0228 | 0.054 | 0.0 | 0.0 | 0.0001 | 0.0093 | 0.0993 | 0.2146 | 0.0 | 0.0 | 0.0036 | 0.0098 | 0.0 | 0.0 | 0.0049 | 0.0691 | 0.0127 | 0.2155 | 0.0 | 0.0 | 0.003 | 0.0564 | 0.0165 | 0.1034 | 0.0 | 0.0 |
| 8.1522 | 107.0 | 39590 | 9.8722 | 0.0162 | 0.0203 | 0.0184 | 0.0 | 0.0033 | 0.0172 | 0.0407 | 0.0501 | 0.0542 | 0.0 | 0.0096 | 0.0557 | 0.0 | 0.0 | 0.0001 | 0.0089 | 0.0779 | 0.178 | 0.001 | 0.0293 | 0.0032 | 0.0128 | 0.0 | 0.0 | 0.0028 | 0.0752 | 0.0656 | 0.1746 | 0.0 | 0.0 | 0.0292 | 0.0936 | 0.0061 | 0.0638 | 0.0089 | 0.0148 |
| 8.1128 | 108.0 | 39960 | 9.9411 | 0.0202 | 0.0267 | 0.0242 | 0.0 | 0.0086 | 0.0207 | 0.0497 | 0.0582 | 0.0611 | 0.0 | 0.0269 | 0.0617 | 0.0 | 0.0 | 0.0004 | 0.0174 | 0.0996 | 0.1854 | 0.0018 | 0.0683 | 0.0073 | 0.0319 | 0.0 | 0.0 | 0.0026 | 0.0359 | 0.0865 | 0.1648 | 0.0 | 0.0 | 0.0177 | 0.0962 | 0.0234 | 0.0872 | 0.0033 | 0.0459 |
| 8.1076 | 109.0 | 40330 | 10.0025 | 0.0236 | 0.0294 | 0.0273 | 0.0 | 0.0059 | 0.0255 | 0.0495 | 0.0601 | 0.0635 | 0.0 | 0.0245 | 0.0662 | 0.0 | 0.0 | 0.0004 | 0.0141 | 0.1291 | 0.2512 | 0.0021 | 0.061 | 0.0113 | 0.0219 | 0.0 | 0.0 | 0.0009 | 0.0359 | 0.1016 | 0.2014 | 0.0 | 0.0 | 0.0248 | 0.0846 | 0.0125 | 0.0651 | 0.0003 | 0.0262 |
| 8.1108 | 110.0 | 40700 | 10.0843 | 0.0142 | 0.0185 | 0.0169 | 0.0 | 0.0049 | 0.0158 | 0.0338 | 0.0391 | 0.0419 | 0.0 | 0.0095 | 0.0425 | 0.0 | 0.0 | 0.0001 | 0.0046 | 0.0499 | 0.1512 | 0.0002 | 0.0171 | 0.0036 | 0.0089 | 0.0 | 0.0 | 0.0006 | 0.0215 | 0.1053 | 0.1634 | 0.0 | 0.0 | 0.0039 | 0.0705 | 0.0074 | 0.0658 | 0.0 | 0.0 |
| 8.0705 | 111.0 | 41070 | 9.9991 | 0.0103 | 0.0139 | 0.012 | 0.0 | 0.0029 | 0.0111 | 0.0315 | 0.0377 | 0.0388 | 0.0 | 0.0121 | 0.0402 | 0.0 | 0.0 | 0.0001 | 0.0028 | 0.0598 | 0.1244 | 0.002 | 0.0512 | 0.0068 | 0.0303 | 0.0 | 0.0 | 0.0079 | 0.0309 | 0.0249 | 0.1408 | 0.0 | 0.0 | 0.0209 | 0.0551 | 0.0007 | 0.0302 | 0.0 | 0.0 |
| 8.037 | 112.0 | 41440 | 10.1343 | 0.009 | 0.0123 | 0.0094 | 0.0135 | 0.0068 | 0.0098 | 0.0422 | 0.056 | 0.0601 | 0.0281 | 0.031 | 0.066 | 0.0 | 0.0 | 0.0003 | 0.0211 | 0.0732 | 0.1512 | 0.0014 | 0.061 | 0.007 | 0.0369 | 0.0 | 0.0 | 0.0022 | 0.0839 | 0.013 | 0.1662 | 0.0 | 0.0 | 0.0011 | 0.0615 | 0.0094 | 0.1248 | 0.0 | 0.0148 |
| 8.0616 | 113.0 | 41810 | 9.9521 | 0.0228 | 0.0303 | 0.0249 | 0.0002 | 0.0139 | 0.0205 | 0.0697 | 0.0877 | 0.0924 | 0.0016 | 0.0345 | 0.0994 | 0.0 | 0.0 | 0.0004 | 0.023 | 0.0971 | 0.3293 | 0.0017 | 0.1073 | 0.0107 | 0.0323 | 0.0 | 0.0 | 0.0042 | 0.1114 | 0.0986 | 0.1775 | 0.0 | 0.0 | 0.0103 | 0.1372 | 0.0504 | 0.1678 | 0.0003 | 0.023 |
| 8.0231 | 114.0 | 42180 | 9.8850 | 0.0197 | 0.0251 | 0.0218 | 0.0029 | 0.01 | 0.0185 | 0.0666 | 0.0872 | 0.0924 | 0.0109 | 0.0282 | 0.0929 | 0.0 | 0.0 | 0.0006 | 0.0274 | 0.1112 | 0.3805 | 0.001 | 0.0902 | 0.0093 | 0.0464 | 0.0 | 0.0 | 0.0012 | 0.0644 | 0.0826 | 0.1901 | 0.0 | 0.0 | 0.0147 | 0.1705 | 0.0141 | 0.0953 | 0.0019 | 0.0443 |
| 8.0193 | 115.0 | 42550 | 10.0295 | 0.0207 | 0.025 | 0.0231 | 0.0 | 0.007 | 0.0206 | 0.0717 | 0.0878 | 0.0916 | 0.0 | 0.0492 | 0.0942 | 0.0 | 0.0 | 0.0005 | 0.025 | 0.0839 | 0.2927 | 0.0028 | 0.1829 | 0.0029 | 0.021 | 0.0 | 0.0 | 0.0111 | 0.0916 | 0.0976 | 0.2366 | 0.0 | 0.0 | 0.0223 | 0.1038 | 0.0084 | 0.1154 | 0.0192 | 0.0295 |
| 8.001 | 116.0 | 42920 | 9.8031 | 0.032 | 0.0397 | 0.0346 | 0.0059 | 0.0133 | 0.032 | 0.0708 | 0.0934 | 0.1016 | 0.0094 | 0.0396 | 0.1056 | 0.0 | 0.0 | 0.0014 | 0.032 | 0.1235 | 0.2659 | 0.0004 | 0.061 | 0.0077 | 0.0428 | 0.0 | 0.0 | 0.004 | 0.155 | 0.1527 | 0.3183 | 0.0 | 0.0 | 0.0245 | 0.1205 | 0.0317 | 0.1779 | 0.0378 | 0.0459 |
| 7.9877 | 117.0 | 43290 | 9.8003 | 0.0344 | 0.0441 | 0.0367 | 0.0099 | 0.0138 | 0.0328 | 0.0828 | 0.1024 | 0.1089 | 0.0109 | 0.0438 | 0.1117 | 0.0 | 0.0 | 0.0022 | 0.0376 | 0.1816 | 0.378 | 0.0006 | 0.0854 | 0.0106 | 0.0325 | 0.0 | 0.0 | 0.002 | 0.1218 | 0.0714 | 0.238 | 0.0 | 0.0 | 0.0498 | 0.2282 | 0.0584 | 0.1557 | 0.0356 | 0.0295 |
| 7.9849 | 118.0 | 43660 | 9.9241 | 0.0244 | 0.03 | 0.028 | 0.0079 | 0.0083 | 0.0255 | 0.0533 | 0.0682 | 0.0721 | 0.0063 | 0.0278 | 0.0763 | 0.0 | 0.0 | 0.0018 | 0.0187 | 0.1051 | 0.2244 | 0.0 | 0.0 | 0.0126 | 0.027 | 0.0 | 0.0 | 0.0033 | 0.0856 | 0.1054 | 0.2042 | 0.0 | 0.0 | 0.0312 | 0.1385 | 0.0296 | 0.1369 | 0.004 | 0.0295 |
| 7.9789 | 119.0 | 44030 | 10.0058 | 0.0221 | 0.0291 | 0.0269 | 0.0 | 0.0131 | 0.0227 | 0.0627 | 0.0784 | 0.0852 | 0.0 | 0.0253 | 0.0916 | 0.0 | 0.0 | 0.0004 | 0.0196 | 0.1398 | 0.2976 | 0.0015 | 0.1073 | 0.0118 | 0.032 | 0.0 | 0.0 | 0.0045 | 0.0748 | 0.058 | 0.2507 | 0.0 | 0.0 | 0.0234 | 0.1282 | 0.0263 | 0.1121 | 0.0 | 0.0 |
| 7.9855 | 120.0 | 44400 | 10.2661 | 0.0194 | 0.0266 | 0.0224 | 0.0008 | 0.0117 | 0.0188 | 0.0494 | 0.0632 | 0.0704 | 0.0047 | 0.0287 | 0.0719 | 0.0 | 0.0 | 0.0004 | 0.0135 | 0.1264 | 0.2732 | 0.0018 | 0.0854 | 0.0101 | 0.0301 | 0.0 | 0.0 | 0.0017 | 0.0628 | 0.0476 | 0.1732 | 0.0 | 0.0 | 0.0247 | 0.0615 | 0.0202 | 0.1289 | 0.0 | 0.0164 |
| 7.9515 | 121.0 | 44770 | 10.1516 | 0.0138 | 0.0201 | 0.0142 | 0.0004 | 0.0125 | 0.0127 | 0.0454 | 0.0676 | 0.0759 | 0.0016 | 0.0341 | 0.0809 | 0.0 | 0.0 | 0.0009 | 0.0226 | 0.075 | 0.178 | 0.0015 | 0.0829 | 0.0084 | 0.0344 | 0.0 | 0.0 | 0.005 | 0.1352 | 0.0276 | 0.2324 | 0.0 | 0.0 | 0.0178 | 0.0821 | 0.0245 | 0.1289 | 0.0045 | 0.0148 |
| 7.9411 | 122.0 | 45140 | 10.0355 | 0.0177 | 0.0239 | 0.0195 | 0.0002 | 0.01 | 0.0178 | 0.059 | 0.0789 | 0.0858 | 0.0016 | 0.0429 | 0.0927 | 0.0 | 0.0 | 0.0011 | 0.0241 | 0.0956 | 0.1683 | 0.005 | 0.1439 | 0.0112 | 0.045 | 0.0 | 0.0 | 0.0119 | 0.1366 | 0.0561 | 0.2887 | 0.0 | 0.0 | 0.013 | 0.0718 | 0.0176 | 0.1262 | 0.0004 | 0.0246 |
| 7.9324 | 123.0 | 45510 | 10.1988 | 0.0204 | 0.0264 | 0.0227 | 0.0 | 0.015 | 0.0208 | 0.0624 | 0.0784 | 0.0896 | 0.0 | 0.0422 | 0.093 | 0.0 | 0.0 | 0.0005 | 0.0263 | 0.098 | 0.2585 | 0.0017 | 0.1 | 0.0103 | 0.0433 | 0.0 | 0.0 | 0.0075 | 0.1138 | 0.0824 | 0.3183 | 0.0 | 0.0 | 0.0318 | 0.0795 | 0.0124 | 0.1349 | 0.0 | 0.0 |
| 7.9327 | 124.0 | 45880 | 10.2355 | 0.0183 | 0.0251 | 0.0203 | 0.0 | 0.013 | 0.0193 | 0.053 | 0.0724 | 0.0815 | 0.0 | 0.0352 | 0.0827 | 0.0 | 0.0 | 0.0003 | 0.0157 | 0.1019 | 0.2146 | 0.0021 | 0.0756 | 0.013 | 0.0399 | 0.0 | 0.0 | 0.008 | 0.1477 | 0.0215 | 0.2915 | 0.0 | 0.0 | 0.0305 | 0.0846 | 0.0418 | 0.1087 | 0.0 | 0.0 |
| 7.8884 | 125.0 | 46250 | 10.1763 | 0.0282 | 0.0362 | 0.0325 | 0.0013 | 0.0151 | 0.0267 | 0.0621 | 0.0863 | 0.0937 | 0.0078 | 0.0495 | 0.0955 | 0.0 | 0.0 | 0.0005 | 0.0167 | 0.1261 | 0.2439 | 0.0008 | 0.061 | 0.0134 | 0.0432 | 0.0 | 0.0 | 0.0055 | 0.1993 | 0.1396 | 0.293 | 0.0 | 0.0 | 0.0169 | 0.1077 | 0.0348 | 0.1443 | 0.0002 | 0.0148 |
| 7.8719 | 126.0 | 46620 | 10.1629 | 0.0223 | 0.0289 | 0.0257 | 0.0014 | 0.0062 | 0.0228 | 0.0622 | 0.0809 | 0.0851 | 0.0047 | 0.0419 | 0.093 | 0.0 | 0.0 | 0.0004 | 0.0172 | 0.1428 | 0.2634 | 0.0035 | 0.1341 | 0.0103 | 0.0334 | 0.0 | 0.0 | 0.003 | 0.1742 | 0.0684 | 0.2493 | 0.0 | 0.0 | 0.0272 | 0.0641 | 0.0119 | 0.0758 | 0.0 | 0.0098 |
| 7.8751 | 127.0 | 46990 | 10.2506 | 0.0173 | 0.0212 | 0.0184 | 0.0 | 0.0106 | 0.0165 | 0.054 | 0.0647 | 0.0694 | 0.0 | 0.0351 | 0.071 | 0.0 | 0.0 | 0.0002 | 0.0074 | 0.0735 | 0.178 | 0.0016 | 0.0878 | 0.0138 | 0.0334 | 0.0 | 0.0 | 0.0021 | 0.1091 | 0.0447 | 0.2338 | 0.0 | 0.0 | 0.0358 | 0.0705 | 0.0161 | 0.096 | 0.0198 | 0.0164 |
| 7.8899 | 128.0 | 47360 | 10.1993 | 0.0176 | 0.0251 | 0.0179 | 0.0 | 0.0138 | 0.0157 | 0.0474 | 0.0644 | 0.0721 | 0.0 | 0.0772 | 0.068 | 0.0 | 0.0 | 0.0002 | 0.01 | 0.0704 | 0.1122 | 0.0026 | 0.1195 | 0.0159 | 0.0462 | 0.0 | 0.0 | 0.0047 | 0.1148 | 0.0667 | 0.2479 | 0.0 | 0.0 | 0.0105 | 0.0641 | 0.0405 | 0.1503 | 0.0 | 0.0 |
| 7.8987 | 129.0 | 47730 | 10.2938 | 0.0127 | 0.0168 | 0.0135 | 0.0 | 0.0128 | 0.0141 | 0.042 | 0.0545 | 0.0595 | 0.0 | 0.0571 | 0.0589 | 0.0 | 0.0 | 0.0001 | 0.0054 | 0.0477 | 0.1171 | 0.0069 | 0.1317 | 0.0065 | 0.0149 | 0.0 | 0.0 | 0.0063 | 0.1201 | 0.0463 | 0.1676 | 0.0 | 0.0 | 0.0002 | 0.0218 | 0.0385 | 0.1349 | 0.0 | 0.0 |
| 7.8286 | 130.0 | 48100 | 10.2356 | 0.0094 | 0.0133 | 0.01 | 0.0 | 0.007 | 0.011 | 0.0382 | 0.0512 | 0.0548 | 0.0 | 0.045 | 0.0535 | 0.0 | 0.0 | 0.0002 | 0.0065 | 0.0476 | 0.139 | 0.0012 | 0.0512 | 0.0103 | 0.0292 | 0.0 | 0.0 | 0.0045 | 0.0953 | 0.0205 | 0.1718 | 0.0 | 0.0 | 0.0168 | 0.0769 | 0.011 | 0.0879 | 0.0 | 0.0 |
| 7.8115 | 131.0 | 48470 | 9.9403 | 0.0195 | 0.0248 | 0.0216 | 0.0 | 0.0114 | 0.021 | 0.0573 | 0.0817 | 0.089 | 0.0 | 0.0661 | 0.0896 | 0.0 | 0.0 | 0.0004 | 0.0193 | 0.0867 | 0.2146 | 0.002 | 0.1244 | 0.0144 | 0.0601 | 0.0 | 0.0 | 0.0061 | 0.1732 | 0.0925 | 0.2141 | 0.0 | 0.0 | 0.0057 | 0.0846 | 0.0075 | 0.1201 | 0.0187 | 0.0574 |
| 7.817 | 132.0 | 48840 | 9.9620 | 0.0171 | 0.0214 | 0.0191 | 0.0013 | 0.0047 | 0.0197 | 0.0533 | 0.0684 | 0.07 | 0.0047 | 0.0354 | 0.0767 | 0.0 | 0.0 | 0.0002 | 0.0104 | 0.0728 | 0.1756 | 0.0076 | 0.1195 | 0.0139 | 0.0355 | 0.0 | 0.0 | 0.0066 | 0.1403 | 0.0518 | 0.1704 | 0.0 | 0.0 | 0.0203 | 0.0718 | 0.0146 | 0.0564 | 0.0168 | 0.0607 |
| 7.7889 | 133.0 | 49210 | 10.1356 | 0.0111 | 0.0164 | 0.0124 | 0.0018 | 0.009 | 0.0112 | 0.0489 | 0.0647 | 0.071 | 0.0063 | 0.0481 | 0.0728 | 0.0 | 0.0 | 0.0008 | 0.0104 | 0.067 | 0.1732 | 0.0091 | 0.1317 | 0.0112 | 0.0281 | 0.0018 | 0.0073 | 0.0089 | 0.2262 | 0.0085 | 0.1211 | 0.0 | 0.0 | 0.0048 | 0.0397 | 0.0207 | 0.104 | 0.0001 | 0.0098 |
| 7.8069 | 134.0 | 49580 | 10.1612 | 0.0112 | 0.0167 | 0.0123 | 0.0066 | 0.009 | 0.0114 | 0.0526 | 0.073 | 0.0805 | 0.0234 | 0.0538 | 0.0849 | 0.0 | 0.0 | 0.0005 | 0.0109 | 0.0783 | 0.1537 | 0.0039 | 0.1415 | 0.0137 | 0.0458 | 0.0 | 0.0 | 0.0063 | 0.2564 | 0.0108 | 0.1268 | 0.0 | 0.0 | 0.0003 | 0.0359 | 0.0189 | 0.1416 | 0.0019 | 0.0541 |
| 7.7811 | 135.0 | 49950 | 9.8277 | 0.0156 | 0.0229 | 0.017 | 0.0083 | 0.0136 | 0.0167 | 0.0793 | 0.104 | 0.116 | 0.025 | 0.0831 | 0.113 | 0.0 | 0.0 | 0.0004 | 0.02 | 0.1027 | 0.2878 | 0.0068 | 0.1634 | 0.0132 | 0.0666 | 0.0 | 0.0 | 0.0098 | 0.2893 | 0.0233 | 0.1958 | 0.0 | 0.0 | 0.0011 | 0.0744 | 0.0212 | 0.1852 | 0.0086 | 0.1098 |
| 7.7384 | 136.0 | 50320 | 9.9539 | 0.0185 | 0.0241 | 0.0205 | 0.0021 | 0.0087 | 0.0192 | 0.053 | 0.0659 | 0.0725 | 0.0188 | 0.0477 | 0.0747 | 0.0 | 0.0 | 0.0002 | 0.0122 | 0.0731 | 0.1366 | 0.0053 | 0.1146 | 0.0196 | 0.0561 | 0.0 | 0.0 | 0.0066 | 0.1054 | 0.0818 | 0.1901 | 0.0 | 0.0 | 0.0052 | 0.0705 | 0.0049 | 0.1174 | 0.0253 | 0.0672 |
| 7.7549 | 137.0 | 50690 | 10.0690 | 0.015 | 0.0196 | 0.016 | 0.0033 | 0.0106 | 0.0168 | 0.0481 | 0.0669 | 0.0721 | 0.0047 | 0.0481 | 0.0698 | 0.0 | 0.0 | 0.0002 | 0.0057 | 0.0498 | 0.1732 | 0.0078 | 0.1122 | 0.0162 | 0.0477 | 0.0 | 0.0 | 0.0025 | 0.1232 | 0.0658 | 0.1859 | 0.0 | 0.0 | 0.0042 | 0.0474 | 0.0315 | 0.1114 | 0.0016 | 0.059 |
| 7.7391 | 138.0 | 51060 | 9.9675 | 0.0214 | 0.0291 | 0.0214 | 0.0108 | 0.0177 | 0.0183 | 0.059 | 0.0866 | 0.0976 | 0.0453 | 0.061 | 0.0944 | 0.0 | 0.0 | 0.0003 | 0.0141 | 0.0731 | 0.222 | 0.0037 | 0.1098 | 0.017 | 0.0708 | 0.0 | 0.0 | 0.0111 | 0.1356 | 0.0827 | 0.2338 | 0.0 | 0.0 | 0.0008 | 0.1218 | 0.0415 | 0.1832 | 0.0262 | 0.0803 |
| 7.7187 | 139.0 | 51430 | 10.3334 | 0.0146 | 0.0196 | 0.0157 | 0.0113 | 0.0094 | 0.0161 | 0.0383 | 0.0463 | 0.0497 | 0.0406 | 0.0396 | 0.0504 | 0.0 | 0.0 | 0.0001 | 0.0022 | 0.0645 | 0.1195 | 0.0019 | 0.0732 | 0.0169 | 0.0454 | 0.0 | 0.0 | 0.0044 | 0.0745 | 0.0561 | 0.1324 | 0.0 | 0.0 | 0.002 | 0.0487 | 0.0286 | 0.0852 | 0.0001 | 0.0148 |
| 7.6925 | 140.0 | 51800 | 10.0430 | 0.0198 | 0.0259 | 0.0216 | 0.0033 | 0.0152 | 0.0186 | 0.0549 | 0.0687 | 0.0731 | 0.0063 | 0.0477 | 0.0701 | 0.0 | 0.0 | 0.0001 | 0.0052 | 0.0786 | 0.1683 | 0.0016 | 0.0707 | 0.011 | 0.0465 | 0.0 | 0.0 | 0.0061 | 0.0708 | 0.0991 | 0.231 | 0.0 | 0.0 | 0.0036 | 0.1205 | 0.0355 | 0.1128 | 0.0025 | 0.0508 |
| 7.6879 | 141.0 | 52170 | 10.1358 | 0.0156 | 0.0212 | 0.016 | 0.0085 | 0.0108 | 0.0171 | 0.0424 | 0.0523 | 0.0556 | 0.0188 | 0.0431 | 0.0564 | 0.0 | 0.0 | 0.0001 | 0.0039 | 0.052 | 0.1098 | 0.0017 | 0.0707 | 0.0256 | 0.0743 | 0.0 | 0.0 | 0.0031 | 0.0611 | 0.0842 | 0.1831 | 0.0 | 0.0 | 0.0003 | 0.041 | 0.0177 | 0.0779 | 0.0028 | 0.0459 |
| 7.6978 | 142.0 | 52540 | 10.1840 | 0.0183 | 0.0265 | 0.0195 | 0.0113 | 0.0088 | 0.0179 | 0.0645 | 0.0745 | 0.0795 | 0.0172 | 0.0369 | 0.0829 | 0.0 | 0.0 | 0.0003 | 0.0098 | 0.1082 | 0.2195 | 0.0079 | 0.122 | 0.0231 | 0.0492 | 0.0 | 0.0 | 0.0094 | 0.0735 | 0.0302 | 0.1986 | 0.0 | 0.0 | 0.0227 | 0.1051 | 0.0128 | 0.1087 | 0.0056 | 0.0672 |
| 7.7027 | 143.0 | 52910 | 10.0340 | 0.0285 | 0.0386 | 0.032 | 0.0099 | 0.0134 | 0.0292 | 0.0711 | 0.093 | 0.0968 | 0.0078 | 0.0456 | 0.1004 | 0.0 | 0.0 | 0.0015 | 0.0109 | 0.1226 | 0.3415 | 0.0212 | 0.1171 | 0.0161 | 0.0327 | 0.0 | 0.0 | 0.008 | 0.152 | 0.1218 | 0.2113 | 0.0 | 0.0 | 0.0264 | 0.0936 | 0.0171 | 0.1289 | 0.0079 | 0.0738 |
| 7.6829 | 144.0 | 53280 | 10.2188 | 0.0175 | 0.0256 | 0.0171 | 0.0056 | 0.0123 | 0.0178 | 0.0527 | 0.0665 | 0.0708 | 0.0094 | 0.0536 | 0.0691 | 0.0 | 0.0 | 0.0006 | 0.0096 | 0.0803 | 0.2122 | 0.0014 | 0.0585 | 0.0195 | 0.0631 | 0.0 | 0.0 | 0.0054 | 0.1 | 0.0486 | 0.1634 | 0.0 | 0.0 | 0.0086 | 0.0782 | 0.0246 | 0.1181 | 0.0205 | 0.0459 |
| 7.6651 | 145.0 | 53650 | 10.2600 | 0.0113 | 0.0179 | 0.0119 | 0.0104 | 0.0108 | 0.0123 | 0.058 | 0.0728 | 0.0769 | 0.0109 | 0.0445 | 0.0822 | 0.0 | 0.0 | 0.0007 | 0.0139 | 0.0214 | 0.1805 | 0.006 | 0.1366 | 0.0177 | 0.0542 | 0.0 | 0.0 | 0.004 | 0.099 | 0.044 | 0.1338 | 0.0 | 0.0 | 0.0166 | 0.1154 | 0.0246 | 0.1483 | 0.0007 | 0.041 |
| 7.6364 | 146.0 | 54020 | 10.0692 | 0.0218 | 0.0298 | 0.0248 | 0.005 | 0.0064 | 0.0237 | 0.0614 | 0.0738 | 0.077 | 0.0047 | 0.0306 | 0.0841 | 0.0 | 0.0 | 0.0002 | 0.0078 | 0.1222 | 0.1951 | 0.011 | 0.1171 | 0.0122 | 0.0414 | 0.0 | 0.0 | 0.0068 | 0.1433 | 0.059 | 0.1324 | 0.0 | 0.0 | 0.0257 | 0.0821 | 0.0163 | 0.1248 | 0.0077 | 0.0803 |
| 7.6121 | 147.0 | 54390 | 10.0795 | 0.0175 | 0.0261 | 0.0194 | 0.0009 | 0.0105 | 0.0187 | 0.0657 | 0.0812 | 0.0859 | 0.0047 | 0.0375 | 0.0889 | 0.0 | 0.0 | 0.0003 | 0.0091 | 0.1201 | 0.2732 | 0.0129 | 0.1293 | 0.0237 | 0.0682 | 0.0 | 0.0 | 0.0074 | 0.1829 | 0.0161 | 0.0873 | 0.0 | 0.0 | 0.0142 | 0.1115 | 0.0146 | 0.0919 | 0.0012 | 0.077 |
| 7.6383 | 148.0 | 54760 | 9.9347 | 0.0223 | 0.0284 | 0.0235 | 0.0144 | 0.0095 | 0.023 | 0.0612 | 0.0723 | 0.0769 | 0.0188 | 0.0431 | 0.0786 | 0.0 | 0.0 | 0.0004 | 0.0143 | 0.1383 | 0.2659 | 0.0061 | 0.1146 | 0.0133 | 0.0413 | 0.0 | 0.0 | 0.0067 | 0.1178 | 0.0478 | 0.0831 | 0.0 | 0.0 | 0.0231 | 0.059 | 0.0249 | 0.1087 | 0.0069 | 0.118 |
| 7.5991 | 149.0 | 55130 | 9.8666 | 0.0222 | 0.0299 | 0.0256 | 0.0244 | 0.0098 | 0.0226 | 0.0733 | 0.0929 | 0.099 | 0.0469 | 0.0607 | 0.0974 | 0.0 | 0.0 | 0.0007 | 0.02 | 0.1283 | 0.2634 | 0.0127 | 0.1659 | 0.0116 | 0.0425 | 0.0 | 0.0 | 0.0047 | 0.155 | 0.0529 | 0.1493 | 0.0 | 0.0 | 0.0146 | 0.1538 | 0.0175 | 0.1268 | 0.0231 | 0.1115 |
| 7.6016 | 150.0 | 55500 | 9.8925 | 0.0308 | 0.0385 | 0.0351 | 0.0031 | 0.0082 | 0.0325 | 0.0815 | 0.0956 | 0.0994 | 0.0109 | 0.0529 | 0.1056 | 0.0 | 0.0 | 0.001 | 0.0183 | 0.1703 | 0.3293 | 0.0112 | 0.1488 | 0.005 | 0.0188 | 0.0 | 0.0 | 0.0089 | 0.1326 | 0.1088 | 0.2085 | 0.0 | 0.0 | 0.0055 | 0.1179 | 0.0067 | 0.0886 | 0.0527 | 0.1295 |
| 7.5376 | 151.0 | 55870 | 9.9107 | 0.0317 | 0.0389 | 0.0369 | 0.0062 | 0.0041 | 0.033 | 0.0769 | 0.0891 | 0.0919 | 0.0188 | 0.0318 | 0.097 | 0.0 | 0.0 | 0.0001 | 0.0093 | 0.14 | 0.2976 | 0.0022 | 0.1098 | 0.0092 | 0.0387 | 0.0 | 0.0 | 0.0025 | 0.1114 | 0.1682 | 0.238 | 0.0 | 0.0 | 0.0162 | 0.1256 | 0.0022 | 0.0725 | 0.0394 | 0.1 |
| 7.5746 | 152.0 | 56240 | 10.1031 | 0.0158 | 0.0195 | 0.0171 | 0.003 | 0.0046 | 0.0162 | 0.0415 | 0.0532 | 0.057 | 0.0078 | 0.0177 | 0.0575 | 0.0 | 0.0 | 0.0 | 0.0015 | 0.1089 | 0.222 | 0.0003 | 0.0244 | 0.0065 | 0.0296 | 0.0 | 0.0 | 0.0023 | 0.0815 | 0.049 | 0.1634 | 0.0 | 0.0 | 0.0013 | 0.0769 | 0.0033 | 0.0705 | 0.0178 | 0.0148 |
| 7.5706 | 153.0 | 56610 | 9.9823 | 0.0192 | 0.0241 | 0.0205 | 0.0 | 0.0072 | 0.0199 | 0.0678 | 0.0859 | 0.0902 | 0.0 | 0.0529 | 0.0918 | 0.0 | 0.0 | 0.0002 | 0.0137 | 0.082 | 0.3268 | 0.0022 | 0.1244 | 0.0081 | 0.0407 | 0.0 | 0.0 | 0.0073 | 0.1419 | 0.0948 | 0.1845 | 0.0 | 0.0 | 0.0035 | 0.1038 | 0.0126 | 0.0765 | 0.0193 | 0.0705 |
| 7.5482 | 154.0 | 56980 | 10.1225 | 0.0218 | 0.0269 | 0.0242 | 0.0003 | 0.0055 | 0.0221 | 0.0611 | 0.0743 | 0.0774 | 0.0031 | 0.0334 | 0.0782 | 0.0 | 0.0 | 0.0001 | 0.005 | 0.1102 | 0.2537 | 0.0029 | 0.1244 | 0.0136 | 0.049 | 0.0 | 0.0 | 0.0024 | 0.1013 | 0.0939 | 0.1972 | 0.0 | 0.0 | 0.0171 | 0.0718 | 0.0025 | 0.0718 | 0.0183 | 0.0541 |
| 7.5073 | 155.0 | 57350 | 10.2640 | 0.0081 | 0.0122 | 0.0078 | 0.0 | 0.0079 | 0.008 | 0.0399 | 0.0515 | 0.0571 | 0.0 | 0.0394 | 0.0553 | 0.0 | 0.0 | 0.0005 | 0.0052 | 0.0243 | 0.1415 | 0.0047 | 0.078 | 0.0089 | 0.0301 | 0.0 | 0.0 | 0.0025 | 0.1 | 0.0167 | 0.1183 | 0.0 | 0.0 | 0.0196 | 0.0885 | 0.02 | 0.0987 | 0.0001 | 0.0246 |
| 7.4813 | 156.0 | 57720 | 10.1810 | 0.0207 | 0.0243 | 0.0222 | 0.0 | 0.0033 | 0.0208 | 0.0537 | 0.0617 | 0.0644 | 0.0 | 0.0173 | 0.0692 | 0.0 | 0.0 | 0.0012 | 0.0083 | 0.0818 | 0.2366 | 0.0028 | 0.0829 | 0.0086 | 0.023 | 0.0 | 0.0 | 0.0011 | 0.047 | 0.1242 | 0.1887 | 0.0 | 0.0 | 0.022 | 0.0923 | 0.005 | 0.0651 | 0.0019 | 0.0295 |
| 7.5065 | 157.0 | 58090 | 9.9655 | 0.0267 | 0.0363 | 0.0301 | 0.0069 | 0.0091 | 0.0282 | 0.0806 | 0.0991 | 0.1038 | 0.0203 | 0.0618 | 0.1059 | 0.0 | 0.0 | 0.0009 | 0.0117 | 0.1376 | 0.2439 | 0.0175 | 0.1854 | 0.0158 | 0.0563 | 0.0 | 0.0 | 0.0073 | 0.2084 | 0.071 | 0.2028 | 0.0 | 0.0 | 0.0279 | 0.15 | 0.0071 | 0.0872 | 0.0353 | 0.1 |
| 7.5003 | 158.0 | 58460 | 10.0127 | 0.0181 | 0.0247 | 0.0182 | 0.0 | 0.0094 | 0.0186 | 0.0604 | 0.0763 | 0.0791 | 0.0 | 0.0693 | 0.0785 | 0.0 | 0.0 | 0.0004 | 0.0076 | 0.058 | 0.2634 | 0.0126 | 0.178 | 0.0147 | 0.0553 | 0.0 | 0.0 | 0.0046 | 0.144 | 0.0906 | 0.1211 | 0.0 | 0.0 | 0.0097 | 0.0423 | 0.0069 | 0.0852 | 0.0193 | 0.0525 |
| 7.5034 | 159.0 | 58830 | 10.0596 | 0.0111 | 0.0166 | 0.0121 | 0.0062 | 0.0074 | 0.0144 | 0.0646 | 0.0766 | 0.0795 | 0.0203 | 0.0535 | 0.0823 | 0.0 | 0.0 | 0.0003 | 0.0085 | 0.0538 | 0.2268 | 0.0046 | 0.1512 | 0.0149 | 0.0502 | 0.0 | 0.0 | 0.0024 | 0.1097 | 0.0288 | 0.138 | 0.0 | 0.0 | 0.0112 | 0.0692 | 0.0079 | 0.0966 | 0.0088 | 0.1033 |
| 7.4634 | 160.0 | 59200 | 9.9005 | 0.0135 | 0.0195 | 0.0145 | 0.0026 | 0.009 | 0.0143 | 0.0748 | 0.0915 | 0.0949 | 0.0094 | 0.0634 | 0.1004 | 0.0 | 0.0 | 0.0004 | 0.015 | 0.0616 | 0.2439 | 0.0048 | 0.1268 | 0.0112 | 0.0469 | 0.0 | 0.0 | 0.0111 | 0.2218 | 0.0447 | 0.1352 | 0.0 | 0.0 | 0.0081 | 0.0949 | 0.0076 | 0.1228 | 0.0119 | 0.1311 |
| 7.437 | 161.0 | 59570 | 9.9430 | 0.0225 | 0.0306 | 0.0225 | 0.004 | 0.0155 | 0.0216 | 0.0787 | 0.0923 | 0.097 | 0.0094 | 0.0555 | 0.0971 | 0.0 | 0.0 | 0.0005 | 0.0139 | 0.0901 | 0.3 | 0.0138 | 0.1561 | 0.01 | 0.0267 | 0.0 | 0.0 | 0.0079 | 0.1443 | 0.0629 | 0.1451 | 0.0 | 0.0 | 0.0124 | 0.109 | 0.036 | 0.1725 | 0.0361 | 0.0967 |
| 7.4665 | 162.0 | 59940 | 10.1301 | 0.0096 | 0.0145 | 0.0098 | 0.0007 | 0.0107 | 0.0092 | 0.0565 | 0.0712 | 0.0753 | 0.0031 | 0.0548 | 0.0762 | 0.0 | 0.0 | 0.0011 | 0.0072 | 0.0437 | 0.2366 | 0.0028 | 0.0951 | 0.0092 | 0.032 | 0.0 | 0.0 | 0.0047 | 0.1383 | 0.0318 | 0.1324 | 0.0 | 0.0 | 0.0016 | 0.0654 | 0.0172 | 0.1262 | 0.0032 | 0.0705 |
| 7.4654 | 163.0 | 60310 | 10.2550 | 0.0096 | 0.014 | 0.0097 | 0.0 | 0.0062 | 0.0101 | 0.0444 | 0.0534 | 0.0554 | 0.0 | 0.0223 | 0.0599 | 0.0 | 0.0 | 0.0001 | 0.0028 | 0.0225 | 0.1244 | 0.008 | 0.078 | 0.0119 | 0.0259 | 0.0 | 0.0 | 0.006 | 0.1174 | 0.0251 | 0.0915 | 0.0 | 0.0 | 0.0012 | 0.0513 | 0.0034 | 0.0664 | 0.0373 | 0.1066 |
| 7.4363 | 164.0 | 60680 | 9.9841 | 0.0101 | 0.0169 | 0.0089 | 0.0041 | 0.0121 | 0.0091 | 0.0608 | 0.0764 | 0.0821 | 0.0141 | 0.067 | 0.0831 | 0.0 | 0.0 | 0.0001 | 0.0046 | 0.0389 | 0.1902 | 0.0102 | 0.1512 | 0.0212 | 0.0706 | 0.0 | 0.0 | 0.0069 | 0.1946 | 0.0224 | 0.1155 | 0.0 | 0.0 | 0.0005 | 0.0436 | 0.0193 | 0.1315 | 0.0014 | 0.0836 |
| 7.4068 | 165.0 | 61050 | 10.1945 | 0.0101 | 0.0166 | 0.0105 | 0.0009 | 0.0141 | 0.0113 | 0.0556 | 0.0721 | 0.0771 | 0.0063 | 0.058 | 0.0804 | 0.0 | 0.0 | 0.0003 | 0.0067 | 0.027 | 0.1805 | 0.0067 | 0.1463 | 0.0164 | 0.0517 | 0.0 | 0.0 | 0.0121 | 0.1517 | 0.0019 | 0.0732 | 0.0 | 0.0 | 0.0013 | 0.0462 | 0.02 | 0.1611 | 0.0359 | 0.1082 |
| 7.4523 | 166.0 | 61420 | 10.2375 | 0.0092 | 0.0129 | 0.0102 | 0.0 | 0.0064 | 0.0114 | 0.051 | 0.0588 | 0.064 | 0.0 | 0.0363 | 0.0692 | 0.0 | 0.0 | 0.0003 | 0.0085 | 0.0235 | 0.1098 | 0.0014 | 0.0439 | 0.0077 | 0.0245 | 0.0 | 0.0 | 0.0066 | 0.1359 | 0.0469 | 0.1676 | 0.0 | 0.0 | 0.0024 | 0.0744 | 0.0126 | 0.0919 | 0.0095 | 0.1115 |
| 7.4039 | 167.0 | 61790 | 10.1100 | 0.0168 | 0.0229 | 0.0181 | 0.0003 | 0.0102 | 0.0222 | 0.0792 | 0.0921 | 0.097 | 0.0047 | 0.0556 | 0.1051 | 0.0 | 0.0 | 0.0004 | 0.0163 | 0.0611 | 0.2707 | 0.0053 | 0.139 | 0.0176 | 0.0616 | 0.0 | 0.0 | 0.0056 | 0.1651 | 0.0446 | 0.1211 | 0.0 | 0.0 | 0.01 | 0.1321 | 0.0197 | 0.1174 | 0.0372 | 0.141 |
| 7.4146 | 168.0 | 62160 | 10.1751 | 0.011 | 0.0149 | 0.0114 | 0.0 | 0.0099 | 0.0115 | 0.0435 | 0.0512 | 0.0547 | 0.0 | 0.0511 | 0.0579 | 0.0 | 0.0 | 0.0005 | 0.0087 | 0.031 | 0.0683 | 0.0065 | 0.0878 | 0.0105 | 0.0373 | 0.0 | 0.0 | 0.0044 | 0.1037 | 0.0225 | 0.0901 | 0.0 | 0.0 | 0.0148 | 0.0603 | 0.0147 | 0.0953 | 0.027 | 0.1049 |
| 7.3879 | 169.0 | 62530 | 10.2162 | 0.0134 | 0.0182 | 0.0148 | 0.0002 | 0.0095 | 0.0148 | 0.0521 | 0.0628 | 0.0663 | 0.0016 | 0.0511 | 0.0687 | 0.0 | 0.0 | 0.0025 | 0.0087 | 0.0292 | 0.1537 | 0.0077 | 0.0878 | 0.0111 | 0.0334 | 0.0 | 0.0 | 0.0049 | 0.1383 | 0.0366 | 0.1239 | 0.0 | 0.0 | 0.015 | 0.0692 | 0.0224 | 0.1081 | 0.031 | 0.0721 |
| 7.3507 | 170.0 | 62900 | 10.2186 | 0.0069 | 0.0107 | 0.0075 | 0.0008 | 0.0116 | 0.0095 | 0.0455 | 0.0552 | 0.0589 | 0.0047 | 0.0544 | 0.0629 | 0.0 | 0.0 | 0.0015 | 0.0074 | 0.013 | 0.1805 | 0.0025 | 0.0561 | 0.0093 | 0.0234 | 0.0 | 0.0 | 0.0079 | 0.1376 | 0.0087 | 0.0817 | 0.0 | 0.0 | 0.0009 | 0.0346 | 0.0326 | 0.1342 | 0.0064 | 0.0508 |
| 7.3713 | 171.0 | 63270 | 10.2400 | 0.0155 | 0.0209 | 0.0153 | 0.0 | 0.0088 | 0.0168 | 0.0539 | 0.0643 | 0.0671 | 0.0 | 0.0345 | 0.0751 | 0.0 | 0.0 | 0.0012 | 0.0059 | 0.0743 | 0.1902 | 0.0083 | 0.1195 | 0.0107 | 0.0341 | 0.0 | 0.0 | 0.0064 | 0.1289 | 0.0668 | 0.1394 | 0.0 | 0.0 | 0.0004 | 0.041 | 0.0064 | 0.0966 | 0.0109 | 0.0492 |
| 7.3215 | 172.0 | 63640 | 10.1583 | 0.0157 | 0.0209 | 0.0149 | 0.0014 | 0.0121 | 0.0163 | 0.0541 | 0.0636 | 0.0677 | 0.0109 | 0.047 | 0.0706 | 0.0 | 0.0 | 0.0021 | 0.0157 | 0.0509 | 0.1659 | 0.0067 | 0.0927 | 0.008 | 0.0334 | 0.0 | 0.0 | 0.0021 | 0.0866 | 0.086 | 0.1761 | 0.0 | 0.0 | 0.0019 | 0.0538 | 0.0187 | 0.1107 | 0.0118 | 0.077 |
| 7.3363 | 173.0 | 64010 | 10.1397 | 0.0143 | 0.0205 | 0.016 | 0.0087 | 0.0106 | 0.0149 | 0.0633 | 0.0771 | 0.0829 | 0.0375 | 0.0448 | 0.0847 | 0.0 | 0.0 | 0.0003 | 0.0137 | 0.0418 | 0.2244 | 0.0038 | 0.0805 | 0.0156 | 0.0613 | 0.0 | 0.0 | 0.0082 | 0.1379 | 0.0556 | 0.2042 | 0.0 | 0.0 | 0.0109 | 0.0795 | 0.0148 | 0.1081 | 0.0204 | 0.0852 |
| 7.3284 | 174.0 | 64380 | 10.0149 | 0.0167 | 0.0244 | 0.0185 | 0.0026 | 0.0113 | 0.0188 | 0.0654 | 0.0797 | 0.085 | 0.0094 | 0.036 | 0.0862 | 0.0 | 0.0 | 0.0004 | 0.0096 | 0.0442 | 0.2171 | 0.0086 | 0.0902 | 0.0113 | 0.0365 | 0.0 | 0.0 | 0.0052 | 0.1279 | 0.0611 | 0.162 | 0.0 | 0.0 | 0.0189 | 0.1154 | 0.0243 | 0.1349 | 0.0266 | 0.1262 |
| 7.3254 | 175.0 | 64750 | 10.2989 | 0.0115 | 0.0156 | 0.0119 | 0.0012 | 0.0096 | 0.0135 | 0.0463 | 0.0595 | 0.0611 | 0.0047 | 0.0331 | 0.0642 | 0.0 | 0.0 | 0.0004 | 0.0063 | 0.0514 | 0.1707 | 0.0116 | 0.0976 | 0.0122 | 0.0349 | 0.0 | 0.0 | 0.0037 | 0.1081 | 0.0196 | 0.0986 | 0.0 | 0.0 | 0.0299 | 0.0987 | 0.0091 | 0.0826 | 0.0002 | 0.0361 |
| 7.3155 | 176.0 | 65120 | 10.1255 | 0.0141 | 0.0197 | 0.015 | 0.0 | 0.0074 | 0.0165 | 0.0406 | 0.0518 | 0.0544 | 0.0 | 0.0318 | 0.0556 | 0.0 | 0.0 | 0.0001 | 0.0028 | 0.0382 | 0.1512 | 0.0031 | 0.0415 | 0.01 | 0.0292 | 0.0 | 0.0 | 0.0042 | 0.0805 | 0.0428 | 0.1042 | 0.0 | 0.0 | 0.033 | 0.0987 | 0.0173 | 0.0872 | 0.021 | 0.0574 |
| 7.2633 | 177.0 | 65490 | 10.1467 | 0.0173 | 0.0248 | 0.0179 | 0.0 | 0.0123 | 0.0174 | 0.0625 | 0.0794 | 0.0839 | 0.0 | 0.0425 | 0.0895 | 0.0 | 0.0 | 0.0004 | 0.0167 | 0.0423 | 0.222 | 0.0207 | 0.1098 | 0.0119 | 0.0336 | 0.0 | 0.0 | 0.0034 | 0.1104 | 0.0683 | 0.1746 | 0.0 | 0.0 | 0.0127 | 0.1064 | 0.0396 | 0.1477 | 0.0089 | 0.0852 |
| 7.2993 | 178.0 | 65860 | 10.0905 | 0.0089 | 0.0135 | 0.0088 | 0.0 | 0.0052 | 0.0107 | 0.0633 | 0.0771 | 0.0791 | 0.0 | 0.0265 | 0.0873 | 0.0 | 0.0 | 0.0009 | 0.0246 | 0.0217 | 0.2122 | 0.0274 | 0.1293 | 0.0103 | 0.0331 | 0.0 | 0.0 | 0.0042 | 0.1262 | 0.0156 | 0.1493 | 0.0 | 0.0 | 0.0024 | 0.1154 | 0.0028 | 0.0745 | 0.0213 | 0.0852 |
| 7.2565 | 179.0 | 66230 | 10.1003 | 0.014 | 0.0176 | 0.0154 | 0.0 | 0.004 | 0.0159 | 0.0539 | 0.0659 | 0.069 | 0.0 | 0.0202 | 0.0764 | 0.0 | 0.0 | 0.0002 | 0.0096 | 0.0113 | 0.2049 | 0.0035 | 0.0293 | 0.0074 | 0.022 | 0.0 | 0.0 | 0.0039 | 0.1134 | 0.0786 | 0.1901 | 0.0 | 0.0 | 0.0215 | 0.0872 | 0.0129 | 0.0879 | 0.0284 | 0.0836 |
| 7.2532 | 180.0 | 66600 | 10.1720 | 0.0157 | 0.0185 | 0.0174 | 0.0 | 0.0036 | 0.0168 | 0.0437 | 0.0479 | 0.0482 | 0.0 | 0.0242 | 0.0521 | 0.0 | 0.0 | 0.0001 | 0.0041 | 0.056 | 0.1683 | 0.0 | 0.0 | 0.0065 | 0.0147 | 0.0 | 0.0 | 0.0021 | 0.0688 | 0.1191 | 0.1746 | 0.0 | 0.0 | 0.0006 | 0.0385 | 0.0022 | 0.045 | 0.0015 | 0.0639 |
| 7.2402 | 181.0 | 66970 | 10.3376 | 0.01 | 0.0125 | 0.0097 | 0.0 | 0.0108 | 0.0099 | 0.0301 | 0.0332 | 0.0335 | 0.0 | 0.0223 | 0.0368 | 0.0 | 0.0 | 0.0 | 0.0 | 0.01 | 0.0927 | 0.0034 | 0.0244 | 0.0046 | 0.0111 | 0.0 | 0.0 | 0.0067 | 0.0584 | 0.073 | 0.1197 | 0.0 | 0.0 | 0.0212 | 0.0487 | 0.0008 | 0.0242 | 0.0008 | 0.023 |
| 7.259 | 182.0 | 67340 | 10.4757 | 0.0052 | 0.0067 | 0.005 | 0.0 | 0.0044 | 0.0053 | 0.0192 | 0.0206 | 0.0211 | 0.0 | 0.0173 | 0.0203 | 0.0 | 0.0 | 0.0003 | 0.0017 | 0.0012 | 0.039 | 0.0018 | 0.0073 | 0.0075 | 0.0205 | 0.0 | 0.0 | 0.0007 | 0.0225 | 0.0321 | 0.0958 | 0.0 | 0.0 | 0.0004 | 0.0346 | 0.0001 | 0.0054 | 0.0179 | 0.0262 |
| 7.2137 | 183.0 | 67710 | 10.3853 | 0.0087 | 0.0117 | 0.0083 | 0.005 | 0.0046 | 0.0097 | 0.0335 | 0.0392 | 0.0398 | 0.0078 | 0.0271 | 0.0432 | 0.0 | 0.0 | 0.0009 | 0.0052 | 0.0023 | 0.0537 | 0.0087 | 0.0512 | 0.0084 | 0.0208 | 0.0 | 0.0 | 0.004 | 0.094 | 0.0538 | 0.1394 | 0.0 | 0.0 | 0.0221 | 0.0487 | 0.0024 | 0.0268 | 0.0017 | 0.0377 |
| 7.2428 | 184.0 | 68080 | 10.3730 | 0.0105 | 0.0139 | 0.0102 | 0.004 | 0.0072 | 0.0116 | 0.0416 | 0.05 | 0.0507 | 0.0078 | 0.03 | 0.0509 | 0.0 | 0.0 | 0.0001 | 0.0037 | 0.0254 | 0.1585 | 0.0019 | 0.0341 | 0.0115 | 0.0316 | 0.0 | 0.0 | 0.0034 | 0.0523 | 0.0408 | 0.1634 | 0.0 | 0.0 | 0.0207 | 0.0782 | 0.0042 | 0.0631 | 0.0179 | 0.023 |
| 7.2323 | 185.0 | 68450 | 10.3451 | 0.0133 | 0.0181 | 0.0139 | 0.0008 | 0.0067 | 0.0136 | 0.046 | 0.055 | 0.0575 | 0.0078 | 0.0355 | 0.0582 | 0.0 | 0.0 | 0.002 | 0.0115 | 0.0369 | 0.2 | 0.0101 | 0.0951 | 0.0098 | 0.0283 | 0.0 | 0.0 | 0.0081 | 0.0849 | 0.0368 | 0.0831 | 0.0 | 0.0 | 0.0117 | 0.0705 | 0.0075 | 0.0752 | 0.0373 | 0.041 |
| 7.2259 | 186.0 | 68820 | 10.2672 | 0.0182 | 0.0233 | 0.0193 | 0.0 | 0.0062 | 0.0231 | 0.0616 | 0.0722 | 0.0739 | 0.0 | 0.0351 | 0.0759 | 0.0 | 0.0 | 0.0033 | 0.02 | 0.046 | 0.2049 | 0.0256 | 0.0829 | 0.0075 | 0.0232 | 0.0 | 0.0 | 0.0128 | 0.1245 | 0.0864 | 0.1746 | 0.0 | 0.0 | 0.0277 | 0.1526 | 0.0007 | 0.0121 | 0.0083 | 0.0918 |
| 7.2186 | 187.0 | 69190 | 10.3717 | 0.0124 | 0.0165 | 0.0123 | 0.0 | 0.0113 | 0.0125 | 0.0412 | 0.0485 | 0.0502 | 0.0 | 0.0478 | 0.0488 | 0.0 | 0.0 | 0.0011 | 0.0043 | 0.0018 | 0.0683 | 0.0055 | 0.0659 | 0.0142 | 0.0373 | 0.0 | 0.0 | 0.0051 | 0.0919 | 0.0698 | 0.1577 | 0.0 | 0.0 | 0.0372 | 0.0692 | 0.0043 | 0.0638 | 0.0094 | 0.0443 |
| 7.1693 | 188.0 | 69560 | 10.2327 | 0.0116 | 0.0159 | 0.0112 | 0.0 | 0.0112 | 0.0135 | 0.0565 | 0.0665 | 0.0684 | 0.0 | 0.0275 | 0.0734 | 0.0 | 0.0 | 0.0005 | 0.012 | 0.0166 | 0.1707 | 0.002 | 0.0439 | 0.0075 | 0.0234 | 0.0 | 0.0 | 0.0029 | 0.0977 | 0.058 | 0.1606 | 0.0 | 0.0 | 0.0021 | 0.109 | 0.0193 | 0.0973 | 0.0297 | 0.1066 |
| 7.1785 | 189.0 | 69930 | 10.2201 | 0.0146 | 0.0188 | 0.015 | 0.0 | 0.011 | 0.0158 | 0.0512 | 0.0616 | 0.0631 | 0.0 | 0.0379 | 0.0675 | 0.0 | 0.0 | 0.0061 | 0.0115 | 0.0154 | 0.1415 | 0.0065 | 0.1049 | 0.0102 | 0.036 | 0.0 | 0.0 | 0.0056 | 0.0973 | 0.0873 | 0.1887 | 0.0 | 0.0 | 0.011 | 0.0449 | 0.0123 | 0.0671 | 0.0206 | 0.0656 |
| 7.154 | 190.0 | 70300 | 10.2102 | 0.0133 | 0.0174 | 0.0144 | 0.0019 | 0.0173 | 0.0134 | 0.0526 | 0.0647 | 0.0691 | 0.0078 | 0.0437 | 0.0684 | 0.0 | 0.0 | 0.0024 | 0.0187 | 0.0338 | 0.1854 | 0.0014 | 0.0561 | 0.0095 | 0.0498 | 0.0 | 0.0 | 0.0128 | 0.1054 | 0.0493 | 0.1817 | 0.0 | 0.0 | 0.0049 | 0.0795 | 0.0156 | 0.1067 | 0.03 | 0.0459 |
| 7.1791 | 191.0 | 70670 | 10.1550 | 0.0107 | 0.0137 | 0.0108 | 0.0 | 0.0101 | 0.0119 | 0.0617 | 0.0727 | 0.0753 | 0.0 | 0.0443 | 0.0776 | 0.0 | 0.0 | 0.0005 | 0.0107 | 0.0253 | 0.2341 | 0.0024 | 0.0927 | 0.0089 | 0.04 | 0.0 | 0.0 | 0.0079 | 0.1037 | 0.0374 | 0.1972 | 0.0 | 0.0 | 0.0112 | 0.0859 | 0.0042 | 0.0852 | 0.0304 | 0.0541 |
| 7.1297 | 192.0 | 71040 | 10.1002 | 0.012 | 0.017 | 0.0116 | 0.0 | 0.0054 | 0.0121 | 0.059 | 0.0692 | 0.0705 | 0.0 | 0.0297 | 0.0749 | 0.0 | 0.0 | 0.0004 | 0.0052 | 0.0468 | 0.2171 | 0.003 | 0.0683 | 0.0099 | 0.0468 | 0.0 | 0.0 | 0.0055 | 0.155 | 0.0457 | 0.1873 | 0.0 | 0.0 | 0.0115 | 0.1038 | 0.001 | 0.0383 | 0.02 | 0.0246 |
| 7.1117 | 193.0 | 71410 | 10.1511 | 0.0153 | 0.0198 | 0.0162 | 0.001 | 0.0145 | 0.0165 | 0.064 | 0.0744 | 0.0781 | 0.0063 | 0.0478 | 0.0801 | 0.0 | 0.0 | 0.0002 | 0.0093 | 0.0212 | 0.2415 | 0.0027 | 0.061 | 0.0074 | 0.0308 | 0.0 | 0.0 | 0.0062 | 0.1332 | 0.0733 | 0.2183 | 0.0 | 0.0 | 0.0175 | 0.0987 | 0.0255 | 0.102 | 0.03 | 0.0426 |
| 7.1262 | 194.0 | 71780 | 10.3156 | 0.0132 | 0.0161 | 0.0134 | 0.0079 | 0.0053 | 0.0154 | 0.0534 | 0.0621 | 0.0644 | 0.0078 | 0.0297 | 0.0675 | 0.0 | 0.0 | 0.001 | 0.0109 | 0.0373 | 0.1707 | 0.002 | 0.0585 | 0.0084 | 0.027 | 0.0 | 0.0 | 0.0063 | 0.1158 | 0.0521 | 0.1676 | 0.0 | 0.0 | 0.034 | 0.1128 | 0.0139 | 0.0617 | 0.0038 | 0.0475 |
| 7.1233 | 195.0 | 72150 | 10.3446 | 0.0134 | 0.0177 | 0.0139 | 0.0016 | 0.0059 | 0.0139 | 0.0401 | 0.0478 | 0.0491 | 0.0063 | 0.0274 | 0.0514 | 0.0 | 0.0 | 0.0003 | 0.008 | 0.0312 | 0.1244 | 0.0031 | 0.0902 | 0.0133 | 0.0381 | 0.0 | 0.0 | 0.0049 | 0.0732 | 0.072 | 0.1366 | 0.0 | 0.0 | 0.0142 | 0.0218 | 0.0027 | 0.0463 | 0.0195 | 0.0508 |
| 7.0967 | 196.0 | 72520 | 10.3249 | 0.0146 | 0.0193 | 0.0157 | 0.0 | 0.0099 | 0.0154 | 0.0456 | 0.0532 | 0.0541 | 0.0 | 0.0345 | 0.0545 | 0.0 | 0.0 | 0.0003 | 0.0078 | 0.0583 | 0.1707 | 0.0013 | 0.039 | 0.0143 | 0.0293 | 0.0 | 0.0 | 0.0056 | 0.0547 | 0.0673 | 0.1831 | 0.0 | 0.0 | 0.0236 | 0.0833 | 0.0023 | 0.0416 | 0.0023 | 0.0393 |
| 7.0633 | 197.0 | 72890 | 10.2207 | 0.0135 | 0.0185 | 0.0146 | 0.0033 | 0.0127 | 0.014 | 0.0543 | 0.0641 | 0.0668 | 0.0078 | 0.0472 | 0.0677 | 0.0 | 0.0 | 0.0002 | 0.0109 | 0.0501 | 0.1707 | 0.0029 | 0.0756 | 0.0109 | 0.0395 | 0.0 | 0.0 | 0.0057 | 0.0735 | 0.0596 | 0.1958 | 0.0 | 0.0 | 0.0192 | 0.1038 | 0.0129 | 0.0758 | 0.0012 | 0.0557 |
| 7.0636 | 198.0 | 73260 | 10.2263 | 0.0167 | 0.0227 | 0.0165 | 0.0 | 0.0107 | 0.0207 | 0.0612 | 0.0723 | 0.0767 | 0.0 | 0.0348 | 0.0777 | 0.0 | 0.0 | 0.0004 | 0.0211 | 0.036 | 0.2415 | 0.0021 | 0.061 | 0.0097 | 0.0305 | 0.0 | 0.0 | 0.0038 | 0.0849 | 0.0954 | 0.238 | 0.0 | 0.0 | 0.0239 | 0.109 | 0.0251 | 0.0738 | 0.0038 | 0.0607 |
| 7.0957 | 199.0 | 73630 | 10.2163 | 0.0139 | 0.0184 | 0.0143 | 0.0 | 0.0108 | 0.0163 | 0.0601 | 0.0701 | 0.0721 | 0.0 | 0.0308 | 0.0734 | 0.0 | 0.0 | 0.0001 | 0.0061 | 0.0413 | 0.2073 | 0.0099 | 0.1024 | 0.0079 | 0.0252 | 0.0 | 0.0 | 0.0069 | 0.0826 | 0.0861 | 0.1915 | 0.0 | 0.0 | 0.0077 | 0.1141 | 0.003 | 0.0671 | 0.0041 | 0.0689 |
| 7.0582 | 200.0 | 74000 | 10.2079 | 0.0133 | 0.0169 | 0.0149 | 0.0 | 0.0111 | 0.0131 | 0.0478 | 0.0567 | 0.0584 | 0.0 | 0.0351 | 0.0615 | 0.0 | 0.0 | 0.0002 | 0.0096 | 0.0316 | 0.1585 | 0.0029 | 0.0561 | 0.0129 | 0.0239 | 0.0 | 0.0 | 0.009 | 0.0977 | 0.0607 | 0.162 | 0.0 | 0.0 | 0.0198 | 0.0705 | 0.0038 | 0.0718 | 0.0183 | 0.0508 |
| 7.0663 | 201.0 | 74370 | 10.2930 | 0.0138 | 0.0185 | 0.0149 | 0.0022 | 0.0085 | 0.0153 | 0.04 | 0.047 | 0.0482 | 0.0078 | 0.0334 | 0.0483 | 0.0 | 0.0 | 0.0001 | 0.007 | 0.0919 | 0.1634 | 0.0027 | 0.0756 | 0.0179 | 0.0354 | 0.0 | 0.0 | 0.0032 | 0.0557 | 0.0065 | 0.093 | 0.0 | 0.0 | 0.0324 | 0.0551 | 0.0046 | 0.049 | 0.0064 | 0.0443 |
| 7.0779 | 202.0 | 74740 | 10.0937 | 0.0117 | 0.0155 | 0.0126 | 0.0031 | 0.0155 | 0.0119 | 0.0585 | 0.0674 | 0.0694 | 0.0078 | 0.0476 | 0.0706 | 0.0 | 0.0 | 0.0005 | 0.0167 | 0.0168 | 0.161 | 0.0166 | 0.1317 | 0.0138 | 0.0304 | 0.0 | 0.0 | 0.0112 | 0.1164 | 0.0543 | 0.1803 | 0.0 | 0.0 | 0.0027 | 0.0346 | 0.0041 | 0.0671 | 0.0204 | 0.0951 |
| 7.04 | 203.0 | 75110 | 10.2583 | 0.0106 | 0.0148 | 0.0117 | 0.0012 | 0.0083 | 0.0146 | 0.0421 | 0.0499 | 0.0521 | 0.0078 | 0.0461 | 0.0517 | 0.0 | 0.0 | 0.0002 | 0.0067 | 0.0438 | 0.1098 | 0.0142 | 0.0829 | 0.0102 | 0.0274 | 0.0 | 0.0 | 0.0042 | 0.052 | 0.039 | 0.1366 | 0.0 | 0.0 | 0.0022 | 0.0487 | 0.0127 | 0.0973 | 0.001 | 0.0639 |
| 7.0214 | 204.0 | 75480 | 10.1290 | 0.0099 | 0.0126 | 0.0098 | 0.0 | 0.011 | 0.0112 | 0.0597 | 0.0682 | 0.0689 | 0.0 | 0.0435 | 0.0726 | 0.0 | 0.0 | 0.0025 | 0.015 | 0.019 | 0.1927 | 0.0055 | 0.0927 | 0.0158 | 0.0417 | 0.0 | 0.0 | 0.0079 | 0.0933 | 0.0532 | 0.1577 | 0.0 | 0.0 | 0.0022 | 0.0885 | 0.0028 | 0.0711 | 0.0102 | 0.0738 |
| 6.9998 | 205.0 | 75850 | 10.1307 | 0.014 | 0.0182 | 0.0145 | 0.0 | 0.0078 | 0.0154 | 0.0477 | 0.0573 | 0.0601 | 0.0 | 0.0475 | 0.0602 | 0.0 | 0.0 | 0.0006 | 0.017 | 0.0751 | 0.1829 | 0.0012 | 0.0463 | 0.0093 | 0.0234 | 0.0 | 0.0 | 0.0063 | 0.0926 | 0.0413 | 0.1437 | 0.0 | 0.0 | 0.0113 | 0.0436 | 0.012 | 0.1047 | 0.0106 | 0.0672 |
| 7.0055 | 206.0 | 76220 | 10.0794 | 0.0135 | 0.0177 | 0.0148 | 0.0 | 0.0073 | 0.0148 | 0.0583 | 0.0666 | 0.0672 | 0.0 | 0.0415 | 0.0714 | 0.0 | 0.0 | 0.001 | 0.0263 | 0.0715 | 0.1585 | 0.0045 | 0.1024 | 0.0055 | 0.0146 | 0.0 | 0.0 | 0.0042 | 0.0876 | 0.0451 | 0.1873 | 0.0 | 0.0 | 0.0076 | 0.0885 | 0.0034 | 0.0738 | 0.0195 | 0.0672 |
| 7.0442 | 207.0 | 76590 | 10.2912 | 0.0137 | 0.0166 | 0.0145 | 0.0 | 0.0082 | 0.0162 | 0.0439 | 0.0517 | 0.0532 | 0.0 | 0.0268 | 0.0545 | 0.0 | 0.0 | 0.0003 | 0.012 | 0.0476 | 0.1366 | 0.0011 | 0.0463 | 0.009 | 0.0154 | 0.0 | 0.0 | 0.0011 | 0.0406 | 0.0637 | 0.1732 | 0.0 | 0.0 | 0.0236 | 0.0641 | 0.0066 | 0.0779 | 0.0114 | 0.0721 |
| 7.0032 | 208.0 | 76960 | 10.4665 | 0.0105 | 0.0143 | 0.0115 | 0.0 | 0.0066 | 0.0124 | 0.0305 | 0.0381 | 0.0395 | 0.0 | 0.0233 | 0.0406 | 0.0 | 0.0 | 0.0022 | 0.0085 | 0.0248 | 0.1146 | 0.0003 | 0.0146 | 0.006 | 0.0226 | 0.0 | 0.0 | 0.0014 | 0.0289 | 0.0519 | 0.1324 | 0.0 | 0.0 | 0.0239 | 0.0474 | 0.0149 | 0.0819 | 0.0002 | 0.023 |
| 7.0003 | 209.0 | 77330 | 10.4519 | 0.0107 | 0.0133 | 0.0108 | 0.0 | 0.009 | 0.0101 | 0.0375 | 0.0438 | 0.0459 | 0.0 | 0.031 | 0.0454 | 0.0 | 0.0 | 0.0002 | 0.0096 | 0.017 | 0.0854 | 0.0012 | 0.0463 | 0.0079 | 0.0224 | 0.0 | 0.0 | 0.0044 | 0.0503 | 0.0532 | 0.1141 | 0.0 | 0.0 | 0.0201 | 0.1064 | 0.0218 | 0.0732 | 0.0021 | 0.0426 |
| 7.0081 | 210.0 | 77700 | 10.4641 | 0.0115 | 0.0151 | 0.0132 | 0.0 | 0.0074 | 0.0113 | 0.0364 | 0.0409 | 0.0431 | 0.0 | 0.0208 | 0.0439 | 0.0 | 0.0 | 0.0027 | 0.007 | 0.0692 | 0.1341 | 0.0039 | 0.0512 | 0.0085 | 0.0275 | 0.0 | 0.0 | 0.0006 | 0.0248 | 0.0077 | 0.1056 | 0.0 | 0.0 | 0.0306 | 0.0577 | 0.0138 | 0.0557 | 0.0015 | 0.0541 |
| 6.9826 | 211.0 | 78070 | 10.3289 | 0.0126 | 0.0168 | 0.0145 | 0.0 | 0.0051 | 0.0136 | 0.0445 | 0.0505 | 0.0514 | 0.0 | 0.0359 | 0.0538 | 0.0 | 0.0 | 0.0004 | 0.012 | 0.047 | 0.1073 | 0.0096 | 0.122 | 0.0083 | 0.0309 | 0.0 | 0.0 | 0.0021 | 0.0681 | 0.0305 | 0.0845 | 0.0 | 0.0 | 0.0222 | 0.0705 | 0.0015 | 0.0396 | 0.0301 | 0.082 |
| 6.9602 | 212.0 | 78440 | 10.2845 | 0.0147 | 0.0203 | 0.0159 | 0.0195 | 0.0105 | 0.0151 | 0.0507 | 0.061 | 0.0643 | 0.0281 | 0.0492 | 0.0624 | 0.0 | 0.0 | 0.0002 | 0.0107 | 0.0797 | 0.1805 | 0.0008 | 0.0366 | 0.0102 | 0.0443 | 0.0 | 0.0 | 0.0032 | 0.0916 | 0.0465 | 0.1451 | 0.0 | 0.0 | 0.0176 | 0.0821 | 0.0035 | 0.0805 | 0.0146 | 0.1 |
| 6.928 | 213.0 | 78810 | 10.1442 | 0.0091 | 0.0144 | 0.0081 | 0.0167 | 0.0125 | 0.0093 | 0.0606 | 0.0742 | 0.0774 | 0.0391 | 0.0522 | 0.0777 | 0.0 | 0.0 | 0.0003 | 0.017 | 0.0146 | 0.161 | 0.0029 | 0.0854 | 0.0118 | 0.0549 | 0.0 | 0.0 | 0.0031 | 0.1188 | 0.0463 | 0.1944 | 0.0 | 0.0 | 0.0006 | 0.0526 | 0.0064 | 0.1121 | 0.0237 | 0.1328 |
| 6.9479 | 214.0 | 79180 | 10.1258 | 0.0156 | 0.021 | 0.0177 | 0.0062 | 0.019 | 0.0163 | 0.0583 | 0.0707 | 0.0736 | 0.0156 | 0.058 | 0.0729 | 0.0 | 0.0 | 0.0002 | 0.0128 | 0.048 | 0.1805 | 0.0017 | 0.0512 | 0.011 | 0.0431 | 0.0 | 0.0 | 0.0047 | 0.1319 | 0.0681 | 0.169 | 0.0 | 0.0 | 0.0004 | 0.0372 | 0.0115 | 0.1302 | 0.0415 | 0.1279 |
| 6.9688 | 215.0 | 79550 | 10.3076 | 0.0098 | 0.015 | 0.0094 | 0.0056 | 0.0117 | 0.0096 | 0.0405 | 0.0514 | 0.055 | 0.0125 | 0.0398 | 0.0568 | 0.0 | 0.0 | 0.0008 | 0.0083 | 0.0272 | 0.0878 | 0.005 | 0.0976 | 0.0055 | 0.0219 | 0.0 | 0.0 | 0.0023 | 0.0631 | 0.0297 | 0.1169 | 0.0 | 0.0 | 0.0004 | 0.0256 | 0.0263 | 0.1255 | 0.02 | 0.1131 |
| 6.9299 | 216.0 | 79920 | 10.2180 | 0.0101 | 0.0152 | 0.0089 | 0.0 | 0.0085 | 0.0108 | 0.0499 | 0.0613 | 0.0638 | 0.0 | 0.0434 | 0.0665 | 0.0 | 0.0 | 0.0038 | 0.0159 | 0.0314 | 0.1488 | 0.0021 | 0.0659 | 0.0093 | 0.0323 | 0.0 | 0.0 | 0.006 | 0.1107 | 0.0342 | 0.1479 | 0.0 | 0.0 | 0.0009 | 0.0385 | 0.0043 | 0.0826 | 0.0292 | 0.123 |
| 6.9591 | 217.0 | 80290 | 10.3340 | 0.0084 | 0.0126 | 0.0097 | 0.0 | 0.0124 | 0.0091 | 0.0385 | 0.0479 | 0.0506 | 0.0 | 0.0447 | 0.049 | 0.0 | 0.0 | 0.0001 | 0.0046 | 0.0157 | 0.1098 | 0.0048 | 0.0829 | 0.0071 | 0.0232 | 0.0 | 0.0 | 0.0062 | 0.0893 | 0.0264 | 0.0732 | 0.0 | 0.0 | 0.0029 | 0.0372 | 0.0092 | 0.0966 | 0.0287 | 0.0902 |
| 6.9371 | 218.0 | 80660 | 10.2716 | 0.0082 | 0.0119 | 0.009 | 0.0 | 0.0111 | 0.0092 | 0.0445 | 0.0545 | 0.0565 | 0.0 | 0.0442 | 0.0546 | 0.0 | 0.0 | 0.0001 | 0.0059 | 0.0031 | 0.0829 | 0.0017 | 0.0439 | 0.0046 | 0.0179 | 0.0 | 0.0 | 0.0056 | 0.0946 | 0.0287 | 0.1028 | 0.0 | 0.0 | 0.0017 | 0.059 | 0.0104 | 0.102 | 0.0426 | 0.1689 |
| 6.893 | 219.0 | 81030 | 10.2130 | 0.0104 | 0.0154 | 0.0101 | 0.0 | 0.0169 | 0.0122 | 0.0473 | 0.0665 | 0.0681 | 0.0 | 0.0611 | 0.0671 | 0.0 | 0.0 | 0.0002 | 0.008 | 0.0088 | 0.0854 | 0.0065 | 0.1171 | 0.0073 | 0.0292 | 0.0 | 0.0 | 0.0066 | 0.1322 | 0.0257 | 0.0873 | 0.0 | 0.0 | 0.0193 | 0.05 | 0.0122 | 0.1329 | 0.0389 | 0.1754 |
| 6.8602 | 220.0 | 81400 | 10.2488 | 0.0202 | 0.0278 | 0.0198 | 0.0013 | 0.015 | 0.0198 | 0.0534 | 0.066 | 0.0707 | 0.0094 | 0.0694 | 0.0667 | 0.0 | 0.0 | 0.0009 | 0.0078 | 0.0493 | 0.122 | 0.0125 | 0.0951 | 0.0104 | 0.0424 | 0.0 | 0.0 | 0.0027 | 0.101 | 0.081 | 0.138 | 0.0 | 0.0 | 0.0113 | 0.0372 | 0.0207 | 0.1356 | 0.0537 | 0.1689 |
| 6.8782 | 221.0 | 81770 | 10.2684 | 0.0176 | 0.0241 | 0.0207 | 0.0 | 0.015 | 0.0197 | 0.0533 | 0.0647 | 0.0668 | 0.0 | 0.0589 | 0.064 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.066 | 0.1488 | 0.0173 | 0.1171 | 0.0116 | 0.0318 | 0.0 | 0.0 | 0.0056 | 0.0983 | 0.043 | 0.1113 | 0.0 | 0.0 | 0.0145 | 0.05 | 0.0099 | 0.1007 | 0.043 | 0.141 |
| 6.9082 | 222.0 | 82140 | 10.3602 | 0.0061 | 0.0089 | 0.0058 | 0.0 | 0.0133 | 0.0074 | 0.0407 | 0.0511 | 0.0538 | 0.0 | 0.0442 | 0.0527 | 0.0 | 0.0 | 0.0004 | 0.0067 | 0.0054 | 0.0976 | 0.0026 | 0.0732 | 0.0094 | 0.0271 | 0.0 | 0.0 | 0.0024 | 0.0732 | 0.0147 | 0.0915 | 0.0 | 0.0 | 0.0211 | 0.0385 | 0.0145 | 0.1195 | 0.003 | 0.118 |
| 6.8873 | 223.0 | 82510 | 10.4017 | 0.0154 | 0.0192 | 0.0168 | 0.0 | 0.0056 | 0.0182 | 0.0465 | 0.0537 | 0.0547 | 0.0 | 0.0437 | 0.0544 | 0.0 | 0.0 | 0.0002 | 0.008 | 0.0544 | 0.1634 | 0.0039 | 0.0659 | 0.0092 | 0.0228 | 0.0 | 0.0 | 0.0038 | 0.0681 | 0.059 | 0.131 | 0.0 | 0.0 | 0.0265 | 0.0513 | 0.0055 | 0.0671 | 0.0222 | 0.0787 |
| 6.8612 | 224.0 | 82880 | 10.3005 | 0.0159 | 0.0198 | 0.0165 | 0.0 | 0.0118 | 0.0174 | 0.0552 | 0.0651 | 0.0685 | 0.0 | 0.0478 | 0.0702 | 0.0 | 0.0 | 0.0003 | 0.0102 | 0.0466 | 0.1634 | 0.0035 | 0.0634 | 0.0105 | 0.0338 | 0.0 | 0.0 | 0.0045 | 0.105 | 0.0637 | 0.1366 | 0.0 | 0.0 | 0.0319 | 0.0859 | 0.0088 | 0.1403 | 0.0208 | 0.0836 |
| 6.8602 | 225.0 | 83250 | 10.2237 | 0.0165 | 0.0221 | 0.0177 | 0.0 | 0.0104 | 0.0185 | 0.0575 | 0.0673 | 0.0708 | 0.0 | 0.0511 | 0.0714 | 0.0 | 0.0 | 0.0003 | 0.015 | 0.0279 | 0.1756 | 0.0059 | 0.0805 | 0.0105 | 0.0395 | 0.0 | 0.0 | 0.0045 | 0.1101 | 0.0617 | 0.1521 | 0.0 | 0.0 | 0.0355 | 0.059 | 0.0075 | 0.0966 | 0.0446 | 0.1213 |
| 6.8336 | 226.0 | 83620 | 10.3177 | 0.0167 | 0.0216 | 0.0172 | 0.0 | 0.01 | 0.0173 | 0.0537 | 0.062 | 0.0654 | 0.0 | 0.0443 | 0.0666 | 0.0 | 0.0 | 0.0002 | 0.0107 | 0.0437 | 0.1171 | 0.0028 | 0.0585 | 0.0109 | 0.0297 | 0.0 | 0.0 | 0.0045 | 0.0705 | 0.053 | 0.1451 | 0.0 | 0.0 | 0.0346 | 0.0769 | 0.009 | 0.1195 | 0.0419 | 0.1574 |
| 6.863 | 227.0 | 83990 | 10.2647 | 0.0209 | 0.0265 | 0.0239 | 0.0 | 0.0075 | 0.0229 | 0.0561 | 0.0617 | 0.0634 | 0.0 | 0.0427 | 0.0643 | 0.0 | 0.0 | 0.0002 | 0.0098 | 0.0546 | 0.1634 | 0.0037 | 0.0707 | 0.009 | 0.0267 | 0.0 | 0.0 | 0.0047 | 0.0852 | 0.0742 | 0.1225 | 0.0 | 0.0 | 0.0186 | 0.0679 | 0.0046 | 0.055 | 0.0809 | 0.159 |
| 6.8419 | 228.0 | 84360 | 10.2957 | 0.0193 | 0.0265 | 0.0217 | 0.0077 | 0.015 | 0.0192 | 0.048 | 0.0574 | 0.0594 | 0.0172 | 0.0473 | 0.0596 | 0.0 | 0.0 | 0.0001 | 0.0078 | 0.0428 | 0.1024 | 0.0053 | 0.0683 | 0.0127 | 0.044 | 0.0 | 0.0 | 0.0034 | 0.0836 | 0.0647 | 0.1197 | 0.0 | 0.0 | 0.0347 | 0.0474 | 0.0104 | 0.1 | 0.0576 | 0.1393 |
| 6.8692 | 229.0 | 84730 | 10.3775 | 0.0111 | 0.0158 | 0.0112 | 0.0043 | 0.0064 | 0.0136 | 0.0426 | 0.049 | 0.0501 | 0.0172 | 0.0395 | 0.0532 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.0421 | 0.1244 | 0.0017 | 0.0439 | 0.0105 | 0.0263 | 0.0 | 0.0 | 0.0019 | 0.0587 | 0.0239 | 0.0859 | 0.0 | 0.0 | 0.031 | 0.0474 | 0.008 | 0.0913 | 0.0141 | 0.1197 |
| 6.8411 | 230.0 | 85100 | 10.2458 | 0.0162 | 0.0205 | 0.0163 | 0.002 | 0.0078 | 0.0171 | 0.0492 | 0.0562 | 0.0572 | 0.0078 | 0.0336 | 0.0588 | 0.0 | 0.0 | 0.0 | 0.0048 | 0.0549 | 0.1683 | 0.0004 | 0.0171 | 0.0094 | 0.0242 | 0.0 | 0.0 | 0.0019 | 0.057 | 0.0763 | 0.1437 | 0.0 | 0.0 | 0.0316 | 0.0603 | 0.0053 | 0.0819 | 0.0141 | 0.1295 |
| 6.7823 | 231.0 | 85470 | 10.1545 | 0.0177 | 0.0242 | 0.0179 | 0.0009 | 0.0109 | 0.021 | 0.0687 | 0.0824 | 0.0844 | 0.0094 | 0.0452 | 0.0863 | 0.0 | 0.0 | 0.0006 | 0.0198 | 0.0689 | 0.2341 | 0.006 | 0.0927 | 0.0117 | 0.0391 | 0.0 | 0.0 | 0.0065 | 0.1117 | 0.0459 | 0.1465 | 0.0 | 0.0 | 0.0326 | 0.0987 | 0.006 | 0.1094 | 0.0338 | 0.1607 |
| 6.8297 | 232.0 | 85840 | 10.2122 | 0.0207 | 0.0287 | 0.021 | 0.0059 | 0.0064 | 0.0225 | 0.0632 | 0.0728 | 0.0738 | 0.0094 | 0.0404 | 0.0783 | 0.0 | 0.0 | 0.001 | 0.0167 | 0.0838 | 0.2122 | 0.0026 | 0.0585 | 0.0093 | 0.0231 | 0.0 | 0.0 | 0.0108 | 0.0812 | 0.056 | 0.1634 | 0.0 | 0.0 | 0.0158 | 0.0667 | 0.0041 | 0.0772 | 0.0647 | 0.1869 |
| 6.7806 | 233.0 | 86210 | 10.2826 | 0.0214 | 0.0272 | 0.0233 | 0.0 | 0.0057 | 0.0229 | 0.0639 | 0.0734 | 0.0747 | 0.0 | 0.0427 | 0.0794 | 0.0 | 0.0 | 0.0005 | 0.012 | 0.0552 | 0.2122 | 0.0004 | 0.0244 | 0.0103 | 0.0231 | 0.0 | 0.0 | 0.0116 | 0.0849 | 0.0708 | 0.1676 | 0.0 | 0.0 | 0.0391 | 0.0936 | 0.0049 | 0.0919 | 0.0643 | 0.1869 |
| 6.7747 | 234.0 | 86580 | 10.3975 | 0.0146 | 0.0191 | 0.0154 | 0.0 | 0.0024 | 0.0179 | 0.0395 | 0.047 | 0.0472 | 0.0 | 0.0276 | 0.0486 | 0.0 | 0.0 | 0.0001 | 0.0039 | 0.0245 | 0.1024 | 0.0054 | 0.0683 | 0.0076 | 0.0144 | 0.0 | 0.0 | 0.0104 | 0.0567 | 0.0498 | 0.0972 | 0.0 | 0.0 | 0.0437 | 0.0603 | 0.001 | 0.0336 | 0.0323 | 0.1295 |
| 6.8127 | 235.0 | 86950 | 10.4947 | 0.0153 | 0.02 | 0.0169 | 0.0 | 0.0059 | 0.0166 | 0.0442 | 0.0519 | 0.0526 | 0.0 | 0.0331 | 0.0525 | 0.0 | 0.0 | 0.0001 | 0.0057 | 0.0526 | 0.1366 | 0.0033 | 0.0341 | 0.0118 | 0.0193 | 0.0 | 0.0 | 0.0105 | 0.0523 | 0.0398 | 0.0761 | 0.0 | 0.0 | 0.0317 | 0.0974 | 0.0044 | 0.0698 | 0.0289 | 0.1393 |
| 6.7793 | 236.0 | 87320 | 10.5037 | 0.0056 | 0.0079 | 0.0048 | 0.0 | 0.0026 | 0.0077 | 0.0304 | 0.0361 | 0.0368 | 0.0 | 0.0243 | 0.0389 | 0.0 | 0.0 | 0.0 | 0.003 | 0.0064 | 0.0805 | 0.0009 | 0.0171 | 0.0071 | 0.0171 | 0.0 | 0.0 | 0.0012 | 0.0409 | 0.0224 | 0.093 | 0.0 | 0.0 | 0.0129 | 0.0628 | 0.0035 | 0.0463 | 0.0128 | 0.0803 |
| 6.7793 | 237.0 | 87690 | 10.4354 | 0.007 | 0.0093 | 0.007 | 0.0017 | 0.0066 | 0.0103 | 0.041 | 0.0466 | 0.0475 | 0.0094 | 0.0327 | 0.0501 | 0.0 | 0.0 | 0.0001 | 0.0059 | 0.0083 | 0.0805 | 0.0043 | 0.0829 | 0.0076 | 0.0248 | 0.0 | 0.0 | 0.0012 | 0.0466 | 0.0244 | 0.1028 | 0.0 | 0.0 | 0.0245 | 0.0615 | 0.0023 | 0.0517 | 0.0109 | 0.1131 |
| 6.7246 | 238.0 | 88060 | 10.4679 | 0.0089 | 0.0123 | 0.0094 | 0.001 | 0.0079 | 0.0097 | 0.0371 | 0.0426 | 0.0434 | 0.0016 | 0.0342 | 0.044 | 0.0 | 0.0 | 0.0003 | 0.0043 | 0.0191 | 0.1049 | 0.0086 | 0.078 | 0.0071 | 0.0193 | 0.0 | 0.0 | 0.0013 | 0.051 | 0.0238 | 0.0944 | 0.0 | 0.0 | 0.0259 | 0.0603 | 0.0035 | 0.0362 | 0.0171 | 0.0721 |
| 6.7448 | 239.0 | 88430 | 10.2526 | 0.0144 | 0.02 | 0.016 | 0.0 | 0.0152 | 0.0161 | 0.0569 | 0.0687 | 0.0722 | 0.0 | 0.067 | 0.0706 | 0.0 | 0.0 | 0.0006 | 0.0198 | 0.039 | 0.1561 | 0.0064 | 0.1024 | 0.0084 | 0.0205 | 0.0 | 0.0 | 0.0078 | 0.1077 | 0.0405 | 0.1197 | 0.0 | 0.0 | 0.0202 | 0.0526 | 0.0114 | 0.1201 | 0.0388 | 0.1672 |
| 6.7539 | 240.0 | 88800 | 10.3057 | 0.0138 | 0.0187 | 0.0157 | 0.0 | 0.0056 | 0.0164 | 0.0471 | 0.0549 | 0.0562 | 0.0 | 0.0374 | 0.0591 | 0.0 | 0.0 | 0.0011 | 0.0187 | 0.0369 | 0.1488 | 0.0025 | 0.061 | 0.0105 | 0.0303 | 0.0 | 0.0 | 0.0038 | 0.0785 | 0.0398 | 0.0901 | 0.0 | 0.0 | 0.033 | 0.0667 | 0.0019 | 0.0523 | 0.0356 | 0.1279 |
| 6.7244 | 241.0 | 89170 | 10.2128 | 0.0154 | 0.0199 | 0.0179 | 0.0 | 0.0042 | 0.0192 | 0.0482 | 0.0546 | 0.0551 | 0.0 | 0.0264 | 0.0581 | 0.0 | 0.0 | 0.0009 | 0.0141 | 0.0488 | 0.1195 | 0.0023 | 0.0439 | 0.0072 | 0.0176 | 0.0 | 0.0 | 0.0081 | 0.0795 | 0.0284 | 0.1127 | 0.0 | 0.0 | 0.0464 | 0.0731 | 0.0023 | 0.055 | 0.0403 | 0.1459 |
| 6.7049 | 242.0 | 89540 | 10.3194 | 0.0096 | 0.0137 | 0.0109 | 0.0 | 0.0082 | 0.0124 | 0.0559 | 0.0627 | 0.0637 | 0.0 | 0.0382 | 0.0696 | 0.0 | 0.0 | 0.0007 | 0.0213 | 0.008 | 0.0951 | 0.0073 | 0.0854 | 0.0063 | 0.0176 | 0.0 | 0.0 | 0.0065 | 0.1111 | 0.0113 | 0.1014 | 0.0 | 0.0 | 0.0345 | 0.0833 | 0.004 | 0.0805 | 0.0363 | 0.1689 |
| 6.7504 | 243.0 | 89910 | 10.3888 | 0.0109 | 0.0145 | 0.0126 | 0.0 | 0.0039 | 0.0132 | 0.0508 | 0.0579 | 0.0582 | 0.0 | 0.0341 | 0.061 | 0.0 | 0.0 | 0.0005 | 0.0159 | 0.012 | 0.1195 | 0.0034 | 0.039 | 0.0063 | 0.0193 | 0.0 | 0.0 | 0.0021 | 0.0819 | 0.0419 | 0.0944 | 0.0 | 0.0 | 0.0126 | 0.0731 | 0.0024 | 0.0591 | 0.0498 | 0.1967 |
| 6.72 | 244.0 | 90280 | 10.4188 | 0.0128 | 0.0169 | 0.0135 | 0.0 | 0.0091 | 0.015 | 0.06 | 0.0664 | 0.068 | 0.0 | 0.0301 | 0.0693 | 0.0 | 0.0 | 0.0018 | 0.0133 | 0.0206 | 0.1415 | 0.0093 | 0.0902 | 0.0072 | 0.0212 | 0.0 | 0.0 | 0.0031 | 0.0772 | 0.0668 | 0.1746 | 0.0 | 0.0 | 0.0079 | 0.0603 | 0.003 | 0.0705 | 0.0334 | 0.1672 |
| 6.6994 | 245.0 | 90650 | 10.3738 | 0.0091 | 0.0136 | 0.0083 | 0.0134 | 0.0113 | 0.0117 | 0.0535 | 0.0645 | 0.0673 | 0.0172 | 0.051 | 0.0677 | 0.0 | 0.0 | 0.0005 | 0.0159 | 0.0092 | 0.0976 | 0.0077 | 0.1073 | 0.0091 | 0.0235 | 0.0 | 0.0 | 0.0115 | 0.0973 | 0.0169 | 0.1056 | 0.0 | 0.0 | 0.0247 | 0.0782 | 0.0059 | 0.0987 | 0.0233 | 0.1836 |
| 6.6673 | 246.0 | 91020 | 10.3788 | 0.0105 | 0.0142 | 0.0119 | 0.0015 | 0.0082 | 0.0124 | 0.0424 | 0.0508 | 0.0517 | 0.0094 | 0.0478 | 0.0496 | 0.0 | 0.0 | 0.0003 | 0.0113 | 0.0285 | 0.1024 | 0.0062 | 0.0732 | 0.0066 | 0.0252 | 0.0 | 0.0 | 0.0033 | 0.0574 | 0.0348 | 0.1014 | 0.0 | 0.0 | 0.0147 | 0.059 | 0.0025 | 0.0624 | 0.0289 | 0.1279 |
| 6.6836 | 247.0 | 91390 | 10.4774 | 0.0106 | 0.0139 | 0.0124 | 0.0 | 0.0078 | 0.0119 | 0.0394 | 0.0454 | 0.047 | 0.0 | 0.0332 | 0.047 | 0.0 | 0.0 | 0.0002 | 0.0089 | 0.0222 | 0.1024 | 0.0046 | 0.061 | 0.0089 | 0.0227 | 0.0 | 0.0 | 0.0007 | 0.0386 | 0.0447 | 0.0944 | 0.0 | 0.0 | 0.0114 | 0.0474 | 0.0042 | 0.0758 | 0.0303 | 0.1131 |
| 6.6821 | 248.0 | 91760 | 10.3334 | 0.0146 | 0.0193 | 0.0146 | 0.0 | 0.0079 | 0.0172 | 0.0436 | 0.0551 | 0.0559 | 0.0 | 0.0354 | 0.059 | 0.0 | 0.0 | 0.0005 | 0.0157 | 0.0069 | 0.1049 | 0.0081 | 0.1122 | 0.0075 | 0.0176 | 0.0 | 0.0 | 0.0025 | 0.0594 | 0.0566 | 0.0775 | 0.0 | 0.0 | 0.041 | 0.0705 | 0.0032 | 0.0772 | 0.0495 | 0.1361 |
| 6.6432 | 249.0 | 92130 | 10.3371 | 0.0166 | 0.0208 | 0.0183 | 0.0004 | 0.0032 | 0.019 | 0.0462 | 0.0516 | 0.0519 | 0.0047 | 0.024 | 0.0548 | 0.0 | 0.0 | 0.0001 | 0.0076 | 0.0513 | 0.1244 | 0.0062 | 0.078 | 0.0073 | 0.0172 | 0.0 | 0.0 | 0.0092 | 0.0547 | 0.0569 | 0.0887 | 0.0 | 0.0 | 0.0185 | 0.0603 | 0.0017 | 0.049 | 0.0484 | 0.1426 |
| 6.6635 | 250.0 | 92500 | 10.2158 | 0.0236 | 0.0302 | 0.0277 | 0.0 | 0.0061 | 0.0269 | 0.0605 | 0.0665 | 0.0669 | 0.0 | 0.0381 | 0.0706 | 0.0 | 0.0 | 0.0004 | 0.0141 | 0.053 | 0.1293 | 0.0091 | 0.1195 | 0.0071 | 0.0204 | 0.0 | 0.0 | 0.0052 | 0.0758 | 0.0892 | 0.1507 | 0.0 | 0.0 | 0.0318 | 0.059 | 0.003 | 0.0651 | 0.0848 | 0.1689 |
| 6.6579 | 251.0 | 92870 | 10.3694 | 0.0131 | 0.0188 | 0.0135 | 0.0 | 0.0074 | 0.0147 | 0.0532 | 0.059 | 0.0599 | 0.0 | 0.0283 | 0.0643 | 0.0 | 0.0 | 0.0002 | 0.0111 | 0.0353 | 0.1659 | 0.004 | 0.0854 | 0.007 | 0.0194 | 0.0 | 0.0 | 0.0088 | 0.0715 | 0.0425 | 0.1141 | 0.0 | 0.0 | 0.0182 | 0.0603 | 0.001 | 0.0436 | 0.0402 | 0.1475 |
| 6.6534 | 252.0 | 93240 | 10.3973 | 0.015 | 0.0196 | 0.016 | 0.0005 | 0.0032 | 0.0177 | 0.0514 | 0.058 | 0.0585 | 0.0031 | 0.0308 | 0.0629 | 0.0 | 0.0 | 0.0003 | 0.0139 | 0.0651 | 0.1463 | 0.0046 | 0.0902 | 0.0078 | 0.0213 | 0.0 | 0.0 | 0.0021 | 0.054 | 0.0305 | 0.0803 | 0.0 | 0.0 | 0.0272 | 0.0603 | 0.0021 | 0.0631 | 0.0401 | 0.1721 |
| 6.6218 | 253.0 | 93610 | 10.3430 | 0.012 | 0.0175 | 0.0122 | 0.0008 | 0.0062 | 0.0136 | 0.0478 | 0.0539 | 0.0551 | 0.0078 | 0.0345 | 0.0579 | 0.0 | 0.0 | 0.0005 | 0.0157 | 0.0658 | 0.1537 | 0.0095 | 0.0927 | 0.0086 | 0.0286 | 0.0 | 0.0 | 0.0058 | 0.0607 | 0.0105 | 0.0704 | 0.0 | 0.0 | 0.014 | 0.0462 | 0.0022 | 0.055 | 0.0276 | 0.1377 |
| 6.603 | 254.0 | 93980 | 10.3932 | 0.0163 | 0.0223 | 0.0187 | 0.0 | 0.0064 | 0.0184 | 0.049 | 0.0562 | 0.0566 | 0.0 | 0.0329 | 0.0604 | 0.0 | 0.0 | 0.0004 | 0.0117 | 0.044 | 0.1463 | 0.0106 | 0.0756 | 0.0092 | 0.0221 | 0.0 | 0.0 | 0.0099 | 0.0513 | 0.0361 | 0.1197 | 0.0 | 0.0 | 0.0299 | 0.0474 | 0.0021 | 0.0624 | 0.0538 | 0.1426 |
| 6.6157 | 255.0 | 94350 | 10.4173 | 0.0174 | 0.024 | 0.0179 | 0.0 | 0.0053 | 0.02 | 0.0487 | 0.0548 | 0.0558 | 0.0 | 0.0314 | 0.058 | 0.0 | 0.0 | 0.0018 | 0.0117 | 0.0544 | 0.1415 | 0.0135 | 0.0756 | 0.0083 | 0.0228 | 0.0 | 0.0 | 0.0056 | 0.0513 | 0.0412 | 0.0958 | 0.0 | 0.0 | 0.0299 | 0.059 | 0.0032 | 0.0557 | 0.0506 | 0.1557 |
| 6.6235 | 256.0 | 94720 | 10.4197 | 0.0172 | 0.0235 | 0.0169 | 0.0 | 0.0049 | 0.0194 | 0.0478 | 0.0541 | 0.0553 | 0.0 | 0.0328 | 0.0586 | 0.0 | 0.0 | 0.0002 | 0.0083 | 0.0651 | 0.0951 | 0.0109 | 0.0902 | 0.0079 | 0.0208 | 0.0 | 0.0 | 0.0026 | 0.0406 | 0.0361 | 0.1225 | 0.0 | 0.0 | 0.0387 | 0.0615 | 0.0032 | 0.0651 | 0.0421 | 0.159 |
| 6.6406 | 257.0 | 95090 | 10.3107 | 0.0125 | 0.0182 | 0.0162 | 0.0 | 0.006 | 0.0151 | 0.0468 | 0.0536 | 0.0543 | 0.0 | 0.0291 | 0.0567 | 0.0 | 0.0 | 0.0002 | 0.0083 | 0.0491 | 0.1244 | 0.0084 | 0.0927 | 0.0068 | 0.018 | 0.0 | 0.0 | 0.0028 | 0.0406 | 0.0093 | 0.1127 | 0.0 | 0.0 | 0.013 | 0.0474 | 0.002 | 0.055 | 0.059 | 0.1525 |
| 6.6069 | 258.0 | 95460 | 10.2805 | 0.0203 | 0.0277 | 0.0211 | 0.0 | 0.0118 | 0.0226 | 0.0592 | 0.066 | 0.0668 | 0.0 | 0.0339 | 0.0703 | 0.0 | 0.0 | 0.0004 | 0.0135 | 0.0753 | 0.1683 | 0.0132 | 0.1317 | 0.0064 | 0.0157 | 0.0 | 0.0 | 0.0056 | 0.0554 | 0.0442 | 0.1141 | 0.0 | 0.0 | 0.0349 | 0.0692 | 0.0032 | 0.0678 | 0.0602 | 0.1656 |
| 6.6259 | 259.0 | 95830 | 10.3008 | 0.0172 | 0.0232 | 0.0177 | 0.0 | 0.0059 | 0.0194 | 0.0505 | 0.0558 | 0.0564 | 0.0 | 0.0268 | 0.0577 | 0.0 | 0.0 | 0.0004 | 0.0154 | 0.0388 | 0.1146 | 0.0137 | 0.1146 | 0.0092 | 0.0253 | 0.0 | 0.0 | 0.0061 | 0.0688 | 0.0518 | 0.1042 | 0.0 | 0.0 | 0.0357 | 0.0603 | 0.0018 | 0.0403 | 0.0489 | 0.1328 |
| 6.5728 | 260.0 | 96200 | 10.3524 | 0.0159 | 0.0219 | 0.0165 | 0.0 | 0.0023 | 0.0187 | 0.0504 | 0.0547 | 0.055 | 0.0 | 0.0197 | 0.059 | 0.0 | 0.0 | 0.0002 | 0.0109 | 0.0562 | 0.1024 | 0.0102 | 0.1 | 0.0062 | 0.0155 | 0.0 | 0.0 | 0.0014 | 0.049 | 0.0356 | 0.1239 | 0.0 | 0.0 | 0.0376 | 0.0692 | 0.0014 | 0.0409 | 0.0423 | 0.1475 |
| 6.5773 | 261.0 | 96570 | 10.3170 | 0.0146 | 0.0197 | 0.0151 | 0.0 | 0.0046 | 0.0166 | 0.0587 | 0.0649 | 0.0655 | 0.0 | 0.0345 | 0.0715 | 0.0 | 0.0 | 0.0002 | 0.0122 | 0.0349 | 0.139 | 0.0125 | 0.0927 | 0.0063 | 0.0217 | 0.0 | 0.0 | 0.0018 | 0.0534 | 0.0414 | 0.1451 | 0.0 | 0.0 | 0.009 | 0.0615 | 0.0033 | 0.0872 | 0.0653 | 0.1738 |
| 6.5674 | 262.0 | 96940 | 10.2879 | 0.0237 | 0.0305 | 0.0254 | 0.0 | 0.0046 | 0.0255 | 0.0634 | 0.0726 | 0.0733 | 0.0 | 0.0418 | 0.0787 | 0.0 | 0.0 | 0.0004 | 0.0187 | 0.0759 | 0.1732 | 0.0087 | 0.0927 | 0.0077 | 0.0248 | 0.0 | 0.0 | 0.0021 | 0.0685 | 0.0879 | 0.162 | 0.0 | 0.0 | 0.0294 | 0.0679 | 0.0039 | 0.0785 | 0.0679 | 0.1934 |
| 6.5377 | 263.0 | 97310 | 10.2744 | 0.023 | 0.0313 | 0.0263 | 0.0004 | 0.0063 | 0.0263 | 0.0612 | 0.069 | 0.0703 | 0.0078 | 0.0427 | 0.0722 | 0.0 | 0.0 | 0.0004 | 0.0185 | 0.0764 | 0.1512 | 0.0143 | 0.0878 | 0.0116 | 0.034 | 0.0 | 0.0 | 0.0047 | 0.0772 | 0.0517 | 0.1507 | 0.0 | 0.0 | 0.0319 | 0.0769 | 0.002 | 0.0671 | 0.083 | 0.1803 |
| 6.6347 | 264.0 | 97680 | 10.3396 | 0.0173 | 0.0228 | 0.0195 | 0.0015 | 0.0041 | 0.0197 | 0.0578 | 0.0654 | 0.0667 | 0.0094 | 0.0322 | 0.0722 | 0.0 | 0.0 | 0.0002 | 0.0141 | 0.0383 | 0.1512 | 0.01 | 0.0805 | 0.0076 | 0.0216 | 0.0 | 0.0 | 0.0026 | 0.0822 | 0.0749 | 0.1521 | 0.0 | 0.0 | 0.0287 | 0.0808 | 0.0021 | 0.0718 | 0.0436 | 0.1459 |
| 6.5916 | 265.0 | 98050 | 10.4574 | 0.0161 | 0.0212 | 0.0175 | 0.0 | 0.0052 | 0.0181 | 0.0511 | 0.0572 | 0.0575 | 0.0 | 0.0197 | 0.0638 | 0.0 | 0.0 | 0.0001 | 0.0041 | 0.049 | 0.1366 | 0.0063 | 0.078 | 0.0076 | 0.0168 | 0.0 | 0.0 | 0.0027 | 0.0641 | 0.0558 | 0.1366 | 0.0 | 0.0 | 0.0407 | 0.0679 | 0.0025 | 0.0597 | 0.0282 | 0.1262 |
| 6.5363 | 266.0 | 98420 | 10.3394 | 0.0195 | 0.0258 | 0.0206 | 0.0 | 0.0048 | 0.0226 | 0.0587 | 0.0671 | 0.0678 | 0.0 | 0.0257 | 0.0753 | 0.0 | 0.0 | 0.0002 | 0.0087 | 0.0631 | 0.1341 | 0.0084 | 0.1098 | 0.0079 | 0.0165 | 0.0 | 0.0 | 0.0073 | 0.0872 | 0.0765 | 0.1493 | 0.0 | 0.0 | 0.0261 | 0.0808 | 0.0025 | 0.0745 | 0.0419 | 0.1525 |
| 6.5393 | 267.0 | 98790 | 10.3999 | 0.0117 | 0.0159 | 0.0125 | 0.0 | 0.0059 | 0.0135 | 0.0446 | 0.0506 | 0.0518 | 0.0 | 0.0325 | 0.0553 | 0.0 | 0.0 | 0.0004 | 0.0126 | 0.0344 | 0.1195 | 0.0064 | 0.0805 | 0.0103 | 0.0154 | 0.0 | 0.0 | 0.0025 | 0.0564 | 0.0287 | 0.0746 | 0.0 | 0.0 | 0.0267 | 0.0679 | 0.0024 | 0.0597 | 0.0285 | 0.1344 |
| 6.496 | 268.0 | 99160 | 10.3754 | 0.0121 | 0.0166 | 0.0133 | 0.0005 | 0.0046 | 0.0144 | 0.0491 | 0.0551 | 0.0555 | 0.0078 | 0.0266 | 0.061 | 0.0 | 0.0 | 0.0004 | 0.0133 | 0.051 | 0.161 | 0.007 | 0.0902 | 0.008 | 0.018 | 0.0 | 0.0 | 0.0035 | 0.0554 | 0.0151 | 0.0761 | 0.0 | 0.0 | 0.0196 | 0.0679 | 0.0021 | 0.053 | 0.0386 | 0.1311 |
| 6.4918 | 269.0 | 99530 | 10.2236 | 0.0184 | 0.0245 | 0.0193 | 0.002 | 0.0064 | 0.0216 | 0.0611 | 0.0694 | 0.0705 | 0.0094 | 0.0363 | 0.0763 | 0.0 | 0.0 | 0.0005 | 0.0191 | 0.0521 | 0.1317 | 0.0081 | 0.1098 | 0.0078 | 0.0209 | 0.0 | 0.0 | 0.0074 | 0.0896 | 0.0598 | 0.1268 | 0.0 | 0.0 | 0.0194 | 0.0692 | 0.003 | 0.0805 | 0.0626 | 0.1984 |
| 6.5703 | 270.0 | 99900 | 10.1620 | 0.0235 | 0.0304 | 0.0261 | 0.0011 | 0.0067 | 0.027 | 0.0603 | 0.0694 | 0.07 | 0.0078 | 0.0378 | 0.0771 | 0.0 | 0.0 | 0.0009 | 0.0241 | 0.0701 | 0.1195 | 0.018 | 0.1171 | 0.0077 | 0.0198 | 0.0 | 0.0 | 0.0075 | 0.093 | 0.0578 | 0.1239 | 0.0 | 0.0 | 0.0379 | 0.0692 | 0.0033 | 0.0718 | 0.079 | 0.2016 |
| 6.5175 | 271.0 | 100270 | 10.2764 | 0.0159 | 0.0224 | 0.0169 | 0.0 | 0.0076 | 0.0184 | 0.0606 | 0.0685 | 0.069 | 0.0 | 0.0347 | 0.0751 | 0.0 | 0.0 | 0.0007 | 0.0191 | 0.0495 | 0.1293 | 0.0122 | 0.1146 | 0.0076 | 0.0188 | 0.0 | 0.0 | 0.0035 | 0.0903 | 0.0393 | 0.1197 | 0.0 | 0.0 | 0.0206 | 0.0679 | 0.0056 | 0.0973 | 0.0523 | 0.1705 |
| 6.5405 | 272.0 | 100640 | 10.1459 | 0.0177 | 0.025 | 0.0182 | 0.0 | 0.0142 | 0.0203 | 0.0624 | 0.0709 | 0.0733 | 0.0 | 0.0522 | 0.0746 | 0.0 | 0.0 | 0.0008 | 0.0217 | 0.052 | 0.1439 | 0.0172 | 0.1146 | 0.009 | 0.0253 | 0.0 | 0.0 | 0.0083 | 0.0822 | 0.0564 | 0.1352 | 0.0 | 0.0 | 0.0227 | 0.0769 | 0.0103 | 0.1275 | 0.0356 | 0.1525 |
| 6.5023 | 273.0 | 101010 | 10.1197 | 0.0179 | 0.0241 | 0.0188 | 0.0 | 0.0089 | 0.0196 | 0.0637 | 0.0727 | 0.0733 | 0.0 | 0.0348 | 0.0812 | 0.0 | 0.0 | 0.0006 | 0.022 | 0.0537 | 0.1634 | 0.0089 | 0.1049 | 0.0072 | 0.0175 | 0.0 | 0.0 | 0.0093 | 0.0876 | 0.0537 | 0.1408 | 0.0 | 0.0 | 0.0111 | 0.0731 | 0.005 | 0.0993 | 0.0648 | 0.1705 |
| 6.512 | 274.0 | 101380 | 10.2532 | 0.0173 | 0.0232 | 0.0184 | 0.0008 | 0.0091 | 0.0192 | 0.0571 | 0.066 | 0.0666 | 0.0078 | 0.0335 | 0.0733 | 0.0 | 0.0 | 0.0006 | 0.0135 | 0.0403 | 0.161 | 0.0079 | 0.0902 | 0.0072 | 0.0176 | 0.0 | 0.0 | 0.0051 | 0.0758 | 0.0568 | 0.1507 | 0.0 | 0.0 | 0.0087 | 0.0474 | 0.0059 | 0.0933 | 0.0747 | 0.1492 |
| 6.5145 | 275.0 | 101750 | 10.2010 | 0.0221 | 0.0297 | 0.0233 | 0.0012 | 0.0104 | 0.0248 | 0.0634 | 0.0721 | 0.0726 | 0.0078 | 0.044 | 0.0777 | 0.0 | 0.0 | 0.0006 | 0.0193 | 0.0349 | 0.1829 | 0.029 | 0.1073 | 0.008 | 0.0227 | 0.0 | 0.0 | 0.0126 | 0.101 | 0.0653 | 0.1225 | 0.0 | 0.0 | 0.0387 | 0.0769 | 0.0037 | 0.0765 | 0.0725 | 0.1623 |
| 6.4729 | 276.0 | 102120 | 10.2284 | 0.0181 | 0.0242 | 0.0191 | 0.0 | 0.0075 | 0.0211 | 0.058 | 0.0632 | 0.0634 | 0.0 | 0.03 | 0.0704 | 0.0 | 0.0 | 0.0006 | 0.0172 | 0.0437 | 0.1244 | 0.0156 | 0.1146 | 0.0072 | 0.0182 | 0.0 | 0.0 | 0.011 | 0.0809 | 0.0547 | 0.1197 | 0.0 | 0.0 | 0.02 | 0.0474 | 0.0043 | 0.0765 | 0.0606 | 0.1623 |
| 6.5157 | 277.0 | 102490 | 10.2469 | 0.0205 | 0.0276 | 0.0215 | 0.0 | 0.0094 | 0.0238 | 0.06 | 0.0665 | 0.0666 | 0.0 | 0.0357 | 0.0723 | 0.0 | 0.0 | 0.0006 | 0.0157 | 0.046 | 0.139 | 0.0141 | 0.0927 | 0.0067 | 0.0122 | 0.0 | 0.0 | 0.0055 | 0.0899 | 0.0684 | 0.138 | 0.0 | 0.0 | 0.0273 | 0.059 | 0.0026 | 0.0638 | 0.0744 | 0.1885 |
| 6.4665 | 278.0 | 102860 | 10.2389 | 0.0222 | 0.0291 | 0.0237 | 0.0011 | 0.0089 | 0.0249 | 0.0596 | 0.0695 | 0.0701 | 0.0078 | 0.0383 | 0.0738 | 0.0 | 0.0 | 0.0004 | 0.0137 | 0.0595 | 0.1854 | 0.0131 | 0.0927 | 0.008 | 0.0235 | 0.0 | 0.0 | 0.0052 | 0.0842 | 0.0567 | 0.1113 | 0.0 | 0.0 | 0.0256 | 0.0679 | 0.0054 | 0.094 | 0.0919 | 0.1689 |
| 6.4636 | 279.0 | 103230 | 10.2697 | 0.0223 | 0.0291 | 0.0245 | 0.0 | 0.0051 | 0.0253 | 0.0603 | 0.0666 | 0.0672 | 0.0 | 0.0299 | 0.0712 | 0.0 | 0.0 | 0.0011 | 0.0143 | 0.0869 | 0.2171 | 0.0114 | 0.0927 | 0.0078 | 0.018 | 0.0 | 0.0 | 0.0032 | 0.0688 | 0.0562 | 0.1042 | 0.0 | 0.0 | 0.0246 | 0.0667 | 0.0022 | 0.0577 | 0.0748 | 0.1672 |
| 6.4634 | 280.0 | 103600 | 10.2652 | 0.0248 | 0.0334 | 0.0257 | 0.0 | 0.0062 | 0.0289 | 0.0598 | 0.0694 | 0.0699 | 0.0 | 0.0272 | 0.078 | 0.0 | 0.0 | 0.0004 | 0.0178 | 0.0916 | 0.1439 | 0.0098 | 0.1171 | 0.0067 | 0.0212 | 0.0 | 0.0 | 0.0061 | 0.0708 | 0.0698 | 0.1169 | 0.0 | 0.0 | 0.0138 | 0.0564 | 0.0032 | 0.0819 | 0.0961 | 0.2131 |
| 6.4455 | 281.0 | 103970 | 10.2265 | 0.0171 | 0.0235 | 0.018 | 0.0005 | 0.0065 | 0.0209 | 0.0526 | 0.0628 | 0.0635 | 0.0078 | 0.0279 | 0.0696 | 0.0 | 0.0 | 0.0003 | 0.0109 | 0.0513 | 0.1585 | 0.0092 | 0.0878 | 0.0077 | 0.0219 | 0.0 | 0.0 | 0.0032 | 0.0742 | 0.0533 | 0.1141 | 0.0 | 0.0 | 0.0112 | 0.0564 | 0.0033 | 0.0779 | 0.0664 | 0.1607 |
| 6.4587 | 282.0 | 104340 | 10.2547 | 0.0202 | 0.0285 | 0.0206 | 0.0 | 0.0098 | 0.025 | 0.0542 | 0.0641 | 0.0646 | 0.0 | 0.0254 | 0.0694 | 0.0 | 0.0 | 0.0008 | 0.013 | 0.0804 | 0.1683 | 0.0141 | 0.1073 | 0.0069 | 0.0182 | 0.0 | 0.0 | 0.0051 | 0.0738 | 0.0525 | 0.1056 | 0.0 | 0.0 | 0.0101 | 0.0577 | 0.0037 | 0.0624 | 0.0687 | 0.1689 |
| 6.4571 | 283.0 | 104710 | 10.2142 | 0.0216 | 0.0303 | 0.0215 | 0.001 | 0.0087 | 0.0239 | 0.0574 | 0.0697 | 0.0706 | 0.0078 | 0.042 | 0.0738 | 0.0 | 0.0 | 0.0035 | 0.0143 | 0.0613 | 0.1488 | 0.0147 | 0.1073 | 0.0063 | 0.0204 | 0.0 | 0.0 | 0.0055 | 0.0946 | 0.0545 | 0.1408 | 0.0 | 0.0 | 0.0275 | 0.0808 | 0.0029 | 0.0792 | 0.0834 | 0.1607 |
| 6.4493 | 284.0 | 105080 | 10.2620 | 0.0176 | 0.0245 | 0.0174 | 0.0 | 0.0058 | 0.0205 | 0.0533 | 0.0625 | 0.063 | 0.0 | 0.0338 | 0.0658 | 0.0 | 0.0 | 0.0035 | 0.0161 | 0.0636 | 0.1366 | 0.0103 | 0.0854 | 0.0069 | 0.0198 | 0.0 | 0.0 | 0.0034 | 0.093 | 0.0453 | 0.1169 | 0.0 | 0.0 | 0.012 | 0.0577 | 0.0032 | 0.0705 | 0.0624 | 0.1607 |
| 6.4704 | 285.0 | 105450 | 10.2404 | 0.0194 | 0.0258 | 0.0203 | 0.0 | 0.0042 | 0.0227 | 0.0603 | 0.0672 | 0.0674 | 0.0 | 0.0283 | 0.0742 | 0.0 | 0.0 | 0.0021 | 0.0193 | 0.0544 | 0.1512 | 0.0069 | 0.0829 | 0.0076 | 0.0164 | 0.0 | 0.0 | 0.0109 | 0.0883 | 0.0721 | 0.1521 | 0.0 | 0.0 | 0.0131 | 0.0731 | 0.0023 | 0.0537 | 0.0634 | 0.1721 |
| 6.433 | 286.0 | 105820 | 10.2217 | 0.0228 | 0.0302 | 0.024 | 0.0 | 0.0062 | 0.0254 | 0.0577 | 0.0636 | 0.0641 | 0.0 | 0.0302 | 0.0701 | 0.0 | 0.0 | 0.0005 | 0.0137 | 0.0854 | 0.1195 | 0.0095 | 0.0854 | 0.0083 | 0.0205 | 0.0 | 0.0 | 0.0042 | 0.0852 | 0.0621 | 0.1437 | 0.0 | 0.0 | 0.0305 | 0.059 | 0.0035 | 0.0799 | 0.0698 | 0.1623 |
| 6.4317 | 287.0 | 106190 | 10.2109 | 0.0236 | 0.0331 | 0.0236 | 0.0 | 0.0042 | 0.0264 | 0.0595 | 0.0696 | 0.07 | 0.0 | 0.0231 | 0.0802 | 0.0 | 0.0 | 0.0004 | 0.0167 | 0.0776 | 0.1366 | 0.0084 | 0.0829 | 0.0053 | 0.0161 | 0.0 | 0.0 | 0.0056 | 0.1034 | 0.0785 | 0.1211 | 0.0 | 0.0 | 0.0135 | 0.0679 | 0.0039 | 0.0886 | 0.0899 | 0.2066 |
| 6.4332 | 288.0 | 106560 | 10.3142 | 0.0209 | 0.0278 | 0.0217 | 0.0 | 0.0041 | 0.0255 | 0.055 | 0.0631 | 0.0633 | 0.0 | 0.0156 | 0.0734 | 0.0 | 0.0 | 0.0002 | 0.01 | 0.072 | 0.122 | 0.0056 | 0.0829 | 0.0071 | 0.0154 | 0.0 | 0.0 | 0.0038 | 0.0772 | 0.0589 | 0.1169 | 0.0 | 0.0 | 0.0182 | 0.0795 | 0.0032 | 0.0826 | 0.0819 | 0.1738 |
| 6.4441 | 289.0 | 106930 | 10.2880 | 0.0216 | 0.0297 | 0.022 | 0.0 | 0.0045 | 0.0245 | 0.0554 | 0.0627 | 0.0629 | 0.0 | 0.03 | 0.0712 | 0.0 | 0.0 | 0.0003 | 0.0115 | 0.076 | 0.1195 | 0.0068 | 0.0829 | 0.0065 | 0.0154 | 0.0 | 0.0 | 0.0078 | 0.0839 | 0.0477 | 0.1169 | 0.0 | 0.0 | 0.0281 | 0.091 | 0.0024 | 0.0698 | 0.0841 | 0.1639 |
| 6.4243 | 290.0 | 107300 | 10.3032 | 0.0204 | 0.0286 | 0.0214 | 0.0 | 0.0036 | 0.0239 | 0.0546 | 0.0615 | 0.062 | 0.0 | 0.0212 | 0.0695 | 0.0 | 0.0 | 0.0003 | 0.0113 | 0.0692 | 0.1293 | 0.0085 | 0.0854 | 0.0116 | 0.0195 | 0.0 | 0.0 | 0.0054 | 0.0779 | 0.0544 | 0.1042 | 0.0 | 0.0 | 0.0144 | 0.0654 | 0.0023 | 0.0658 | 0.0785 | 0.1852 |
| 6.4104 | 291.0 | 107670 | 10.2981 | 0.0193 | 0.0269 | 0.0196 | 0.0 | 0.0052 | 0.0222 | 0.0579 | 0.066 | 0.0662 | 0.0 | 0.0332 | 0.0726 | 0.0 | 0.0 | 0.0007 | 0.01 | 0.043 | 0.1244 | 0.0077 | 0.0927 | 0.0074 | 0.0172 | 0.0 | 0.0 | 0.004 | 0.0822 | 0.067 | 0.1225 | 0.0 | 0.0 | 0.0138 | 0.0795 | 0.0026 | 0.0664 | 0.0855 | 0.2 |
| 6.4088 | 292.0 | 108040 | 10.2652 | 0.0217 | 0.0282 | 0.0248 | 0.0 | 0.0055 | 0.0237 | 0.0536 | 0.062 | 0.0627 | 0.0 | 0.028 | 0.0684 | 0.0 | 0.0 | 0.0006 | 0.0126 | 0.0626 | 0.139 | 0.0077 | 0.0854 | 0.0116 | 0.0186 | 0.0 | 0.0 | 0.0122 | 0.0758 | 0.0684 | 0.1183 | 0.0 | 0.0 | 0.0093 | 0.0679 | 0.0024 | 0.0591 | 0.0857 | 0.1754 |
| 6.3976 | 293.0 | 108410 | 10.2724 | 0.0257 | 0.034 | 0.0277 | 0.0 | 0.0051 | 0.0277 | 0.0622 | 0.0696 | 0.0702 | 0.0 | 0.0221 | 0.0778 | 0.0 | 0.0 | 0.0005 | 0.01 | 0.0744 | 0.1683 | 0.009 | 0.0829 | 0.0117 | 0.0191 | 0.0 | 0.0 | 0.0112 | 0.0775 | 0.0686 | 0.1352 | 0.0 | 0.0 | 0.0261 | 0.0795 | 0.0032 | 0.0758 | 0.1032 | 0.1934 |
| 6.3981 | 294.0 | 108780 | 10.2494 | 0.0237 | 0.0316 | 0.0248 | 0.0 | 0.0062 | 0.0259 | 0.0572 | 0.0665 | 0.0671 | 0.0 | 0.0314 | 0.0739 | 0.0 | 0.0 | 0.0005 | 0.0165 | 0.0628 | 0.1317 | 0.0088 | 0.0878 | 0.0116 | 0.0195 | 0.0 | 0.0 | 0.0062 | 0.0846 | 0.0659 | 0.1366 | 0.0 | 0.0 | 0.0216 | 0.0679 | 0.0022 | 0.0685 | 0.1044 | 0.1918 |
| 6.3775 | 295.0 | 109150 | 10.2718 | 0.0219 | 0.0302 | 0.0227 | 0.0 | 0.0064 | 0.0238 | 0.0587 | 0.068 | 0.0685 | 0.0 | 0.0409 | 0.0761 | 0.0 | 0.0 | 0.0006 | 0.017 | 0.0663 | 0.1317 | 0.0101 | 0.1146 | 0.0117 | 0.0197 | 0.0 | 0.0 | 0.0059 | 0.0859 | 0.0535 | 0.1225 | 0.0 | 0.0 | 0.0187 | 0.0795 | 0.0033 | 0.0805 | 0.0929 | 0.1705 |
| 6.3901 | 296.0 | 109520 | 10.2749 | 0.0241 | 0.0326 | 0.0246 | 0.0 | 0.0043 | 0.0265 | 0.0586 | 0.0648 | 0.0654 | 0.0 | 0.0298 | 0.0737 | 0.0 | 0.0 | 0.0004 | 0.0167 | 0.0617 | 0.1024 | 0.0082 | 0.1024 | 0.0118 | 0.0199 | 0.0 | 0.0 | 0.0059 | 0.0866 | 0.0856 | 0.1451 | 0.0 | 0.0 | 0.0247 | 0.0782 | 0.0023 | 0.0597 | 0.0881 | 0.1738 |
| 6.3721 | 297.0 | 109890 | 10.2239 | 0.0235 | 0.0318 | 0.0246 | 0.0 | 0.0044 | 0.0261 | 0.0597 | 0.0681 | 0.0685 | 0.0 | 0.0294 | 0.0785 | 0.0 | 0.0 | 0.0009 | 0.018 | 0.0601 | 0.1195 | 0.011 | 0.0878 | 0.0113 | 0.0183 | 0.0 | 0.0 | 0.0061 | 0.0876 | 0.0734 | 0.1324 | 0.0 | 0.0 | 0.0198 | 0.0782 | 0.0026 | 0.0752 | 0.0973 | 0.2049 |
| 6.3899 | 298.0 | 110260 | 10.2509 | 0.023 | 0.0309 | 0.0235 | 0.0 | 0.0063 | 0.0249 | 0.0609 | 0.0688 | 0.0692 | 0.0 | 0.0266 | 0.0781 | 0.0 | 0.0 | 0.0005 | 0.0187 | 0.0642 | 0.139 | 0.0098 | 0.1024 | 0.0119 | 0.0206 | 0.0 | 0.0 | 0.0077 | 0.0872 | 0.0617 | 0.1366 | 0.0 | 0.0 | 0.0277 | 0.0795 | 0.002 | 0.0611 | 0.09 | 0.1852 |
| 6.3974 | 299.0 | 110630 | 10.2657 | 0.0222 | 0.03 | 0.0229 | 0.0 | 0.0034 | 0.0247 | 0.0561 | 0.063 | 0.0632 | 0.0 | 0.0195 | 0.0726 | 0.0 | 0.0 | 0.0007 | 0.017 | 0.0657 | 0.1268 | 0.0105 | 0.0854 | 0.0112 | 0.0161 | 0.0 | 0.0 | 0.0047 | 0.0728 | 0.0617 | 0.1296 | 0.0 | 0.0 | 0.0191 | 0.0679 | 0.002 | 0.0705 | 0.0904 | 0.1721 |
| 6.3809 | 300.0 | 111000 | 10.2840 | 0.0242 | 0.0324 | 0.0251 | 0.0 | 0.0048 | 0.0258 | 0.0623 | 0.0695 | 0.0698 | 0.0 | 0.0275 | 0.078 | 0.0 | 0.0 | 0.0006 | 0.0196 | 0.0848 | 0.1463 | 0.0082 | 0.0829 | 0.0113 | 0.0172 | 0.0 | 0.0 | 0.0058 | 0.0869 | 0.0704 | 0.1394 | 0.0 | 0.0 | 0.014 | 0.0795 | 0.0021 | 0.0658 | 0.0937 | 0.2 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.1
|
Freezyy/www.unlockios18.com
|
Freezyy
| 2025-06-21T12:47:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T12:47:50Z |
---
license: apache-2.0
---
|
18-jaipur-couple-viral-video-full/jaipur.couple.viral.video.in.5.Star.Hotel.full.original
|
18-jaipur-couple-viral-video-full
| 2025-06-21T12:47:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T12:46:37Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
18-video-jaipur-hotel-going-viral/FULL.VIDEO.18.jaipur.hotel.viral.video.original.holiday.inn.jaipur.viral.video
|
18-video-jaipur-hotel-going-viral
| 2025-06-21T12:40:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T12:40:04Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Official-Othoi-viral-video-Link/FULL.VIDEO.LINK.Othoi.Viral.Video.Leaks.Tutorial.Official
|
Official-Othoi-viral-video-Link
| 2025-06-21T12:35:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T12:34:50Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
pictgensupport/middleagedwoman
|
pictgensupport
| 2025-06-21T12:33:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T12:33:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: middleagedwoman
---
# Middleagedwoman
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `middleagedwoman` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline and attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/middleagedwoman', weight_name='lora.safetensors')

# Remember to include the trigger word `middleagedwoman` in your prompt.
image = pipeline('your prompt, featuring middleagedwoman').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
sitatech/FluxUtils
|
sitatech
| 2025-06-21T11:50:37Z | 20 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-02-28T17:05:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/Runware/FLUX.1-Redux-dev/blob/main/LICENSE.md
---
|
19-holiday-jaipur-hotel-viral-video-origin/19.jaipur.hotel.viral.video.original.holiday.inn.jaipur.viral.video
|
19-holiday-jaipur-hotel-viral-video-origin
| 2025-06-21T11:48:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T11:46:13Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
UMCU/CardioBERTa.nl_base
|
UMCU
| 2025-06-21T11:44:38Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"medical",
"healthcare",
"nl",
"base_model:CLTL/MedRoBERTa.nl",
"base_model:finetune:CLTL/MedRoBERTa.nl",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-03-04T14:55:04Z |
---
license: gpl-3.0
language:
- nl
base_model:
- CLTL/MedRoBERTa.nl
tags:
- medical
- healthcare
metrics:
- perplexity
library_name: transformers
---
Continued, off-premise, pre-training of [MedRoBERTa.nl](https://huggingface.co/CLTL/MedRoBERTa.nl) using about 50GB of open Dutch and translated
English corpora.
# Data statistics
Sources:
* Dutch: medical guidelines (FMS, NHG)
* Dutch: [NtvG](https://www.ntvg.nl/) papers
* English: Pubmed abstracts
* English: PMC abstracts translated using DeepL
* English: Apollo guidelines, papers and books
* English: Meditron guidelines
* English: MIMIC3
* English: MIMIC CXR
* English: MIMIC4
All sources not already translated with DeepL were translated with a combination of Gemini Flash 1.5/GPT-4o mini, MarianMT, and NLLB200.
* Number of tokens: 15B
* Number of documents: 27M
# Training
* Effective batch size: 5120
* Learning rate: 2e-4
* Weight decay: 1e-3
* Learning schedule: linear, with 5,000 warmup steps
* Num epochs: ~3
Train perplexity: 3.0
Validation perplexity: 3.0
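# Usage
A minimal fill-mask sketch with the 🤗 `transformers` pipeline; the `<mask>` token is an assumption carried over from the RoBERTa-based MedRoBERTa.nl.
```python
from transformers import pipeline

# Fill-mask on Dutch clinical text; "<mask>" is the RoBERTa-style mask token.
unmasker = pipeline("fill-mask", model="UMCU/CardioBERTa.nl_base")
print(unmasker("De patiënt werd behandeld met <mask>."))
```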
# Acknowledgement
This work was done together with the Amsterdam UMC, in the context of the [DataTools4Heart](https://www.datatools4heart.eu/) project.
We were happy to be able to use the [Google TPU research cloud](https://sites.research.google/trc/about/) for training the model.
|
4maan4hmad/Llama3.2-finetuned-sitemanager
|
4maan4hmad
| 2025-06-21T11:31:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T11:30:24Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 4maan4hmad
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-videos-mezzo-fun-18-Viral-Videos/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
New-videos-mezzo-fun-18-Viral-Videos
| 2025-06-21T11:26:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T11:26:04Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
codewithpurav/a2c-PandaReachDense-v3
|
codewithpurav
| 2025-06-21T11:14:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T11:10:38Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.28 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption based on the repo naming convention.
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Hypothetical filename, following the usual `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("codewithpurav/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
New-Clip-mezzo-fun-19-Viral-Video-Link/Original.Full.Clip.Mezzo.fun.Viral.Video.Tutorial.Official
|
New-Clip-mezzo-fun-19-Viral-Video-Link
| 2025-06-21T11:02:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T11:01:59Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-modified-v2
|
PaceKW
| 2025-06-21T10:37:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:cahya/bert-base-indonesian-1.5G",
"base_model:finetune:cahya/bert-base-indonesian-1.5G",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T10:32:10Z |
---
library_name: transformers
license: mit
base_model: cahya/bert-base-indonesian-1.5G
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-modified-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-modified-v2
This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- F1: 0.8042
- Roc Auc: 0.8799
- Accuracy: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
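As a starting point, a hedged inference sketch (the label names come from the training dataset and are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-modified-v2",
    top_k=None,                   # return a score for every label
    function_to_apply="sigmoid",  # multilabel head: independent per-label probabilities
)
print(classifier("contoh kalimat untuk diklasifikasikan"))
```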
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.257 | 1.0 | 1317 | 0.2008 | 0.7645 | 0.8432 | 0.6507 |
| 0.1793 | 2.0 | 2634 | 0.1925 | 0.7868 | 0.8732 | 0.6621 |
| 0.1305 | 3.0 | 3951 | 0.2005 | 0.7959 | 0.8773 | 0.7039 |
| 0.0909 | 4.0 | 5268 | 0.2191 | 0.7961 | 0.8666 | 0.7206 |
| 0.0655 | 5.0 | 6585 | 0.2271 | 0.8042 | 0.8799 | 0.7229 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Pakcricketinfo-Sapna-Shah-18k/NEW.VIDEO.Pakcricketinfo.Sapna.Shah.Viral.Video.On.Social.Media.Link
|
Pakcricketinfo-Sapna-Shah-18k
| 2025-06-21T10:29:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T10:23:45Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Pakcricketinfo-Sapna-Shah-18k)
[🔴 CLICK HERE 🌐==►► Download Now](https://videohere.top/?V=Pakcricketinfo-Sapna-Shah-18k)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Pakcricketinfo-Sapna-Shah-18k)
|
heboya8/facebook-musicgen-small-not-lora-80
|
heboya8
| 2025-06-21T10:25:28Z | 0 | 0 | null |
[
"safetensors",
"musicgen",
"region:us"
] | null | 2025-06-21T10:07:28Z |
***** eval metrics *****
epoch = 80.0
eval_clap = 0.1663
eval_loss = 5.3156
eval_runtime = 0:01:54.95
eval_samples = 8
eval_samples_per_second = 0.07
eval_steps_per_second = 0.07
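The repo carries the `musicgen` tag, so a hedged generation sketch with 🤗 `transformers` (assuming the checkpoint ships a full model plus processor):
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

model_id = "heboya8/facebook-musicgen-small-not-lora-80"
processor = AutoProcessor.from_pretrained(model_id)
model = MusicgenForConditionalGeneration.from_pretrained(model_id)

inputs = processor(text=["calm piano melody"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)  # roughly 5 s of audio
```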
|
Quoc59/PARADIS-Qwen3_1.7B-10kWikiVi-1GPU
|
Quoc59
| 2025-06-21T10:09:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-21T08:09:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
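Since the repo is tagged `text-generation`/`conversational`, a hedged starting point might look like this (not an official snippet; the 4-bit bitsandbytes weights load through the standard API):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Quoc59/PARADIS-Qwen3_1.7B-10kWikiVi-1GPU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Xin chào, bạn là ai?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```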
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AhmadZahid/results
|
AhmadZahid
| 2025-06-21T09:58:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-21T08:19:47Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0007
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.0019 | 1.0 |
| 0.0672 | 2.0 | 100 | 0.0008 | 1.0 |
| 0.0672 | 3.0 | 150 | 0.0007 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
TensinormColombia/TensinormColombia
|
TensinormColombia
| 2025-06-21T09:57:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T09:56:16Z |
---
license: apache-2.0
---
What is Tensinorm?
Tensinorm pills are a natural capsule developed to support people who suffer from high blood pressure, also known as hypertension. In today's world, where life moves at a fast pace and stress levels are high, maintaining healthy blood pressure has become harder than ever. Whether it is long workdays, processed diets, or lack of rest, the modern lifestyle is taking a toll on heart health. The Tensinorm capsule was created for those who want to regain control gently, naturally, and without relying exclusively on synthetic treatments. Tensinorm (pharmacy) is a daily supplement designed for those ready to prioritize their heart health and prevent future complications related to uncontrolled blood pressure (Tensinorm forum).
Official website:<a href="https://www.nutritionsee.com/tensinolombia">www.Tensinorm.com</a>
<p><a href="https://www.nutritionsee.com/tensinolombia"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/06/Tensinorm-Colombia.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/tensinolombia">Buy now! Click the link below for more information and get a 50% discount. Hurry!</a>
|
brokuking1/a7da0963-ce0f-48c2-b160-25d263ae2a1a
|
brokuking1
| 2025-06-21T09:55:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-21T07:43:42Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|